REF 2029: volume measures to avoid ‘census cliff edge’ confirmed

Details provide more clarity on which research staff will need to enter the next Research Excellence Framework

January 16, 2025

Additional guidance has been published on the Research Excellence Framework’s new approach to calculating how many outputs and impact case studies must be submitted by universities.

For the first time in the exercise’s history, universities participating in the 2029 evaluation will no longer submit lists of staff whose research will be entered for each unit of assessment (UoA), with “volume” instead determined by the number of research staff employed over a two-year period, as reported to the sector’s main data body, the Higher Education Statistics Agency (Hesa).

For every full-time equivalent (FTE) researcher, institutions will need to submit an average of 2.5 outputs per unit of assessment, though not every researcher will need to contribute an output. Previously, all eligible staff had to submit at least one output published in the REF period, with exceptions permitted in certain circumstances. The volume measure also determines how many impact case studies are required for each UoA.
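The volume arithmetic above can be sketched as a short calculation. This is a minimal illustration only: the function name is ours, and the rounding of fractional totals is an assumption, not something stated in the guidance.

```python
import math

# Illustrative only: outputs required for a unit of assessment (UoA)
# under the REF 2029 volume measure of 2.5 outputs per FTE researcher.
OUTPUTS_PER_FTE = 2.5

def required_outputs(fte: float) -> int:
    """Number of outputs a UoA must submit for `fte` full-time-equivalent
    researchers. Rounding up fractional totals is an assumption here;
    the published guidance may treat them differently."""
    return math.ceil(fte * OUTPUTS_PER_FTE)

# e.g. a UoA reporting 20 FTE researchers to Hesa
print(required_outputs(20))  # → 50
```

So a unit reporting 20 FTE researchers would owe 50 outputs in total, averaged across the unit rather than tied to named individuals.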

The change follows concerns that the previous approach incentivised universities to recruit star researchers just before the “REF census day” rather than providing an accurate picture of institutions' existing strengths.

Confirming how the new system, which aims to “avoid a REF census data cliff edge”, will work, new guidelines published on 16 January explain how Hesa will collect data based on a member of staff’s contract when calculating the volume measure.

Academic staff on either teaching and research contracts, or research-only contracts, who are on the payroll in 2025-26 and 2026-27, will be submitted to Hesa for REF inclusion, the guidance explains.

Further detail on what constitutes “significant responsibility for research” explains how these contracts must make available “explicit time and resources” for research – either in workload models or time allocations – that researchers must “engage actively in independent research” and that research “is an expectation of the job role”.

Teaching-only staff must not be submitted to Hesa, while being named as an author on a research output is not sufficient to be entered.

In some “exceptional circumstances”, research assistants who are “primarily employed to support another individual’s research rather than pursuing independent research” can be submitted if they are on a research-only contract, says the guidance, which also includes a new code of practice.


Institutions were previously asked to collect data for 2024-25, which will serve as a pilot year, though this data will not count towards REF 2029. Data for 2027-28 will also be excluded, as it will not be ready before the submission deadline at the end of 2028; results are due to be published at the end of 2029.

Rebecca Fairbairn, the REF’s director, said the new volume measures would support “efforts to break the link between outputs and individuals”.

“The move to using Hesa data for the REF will help long-term data collection that allows the sector to explore the changing shape of UK research capacity,” she said.

“We are committed to ensuring REF 2029 is inclusive of all research-related staff and encourages engagement with practitioners and those with non-academic expertise.”

jack.grove@timeshighereducation.com


Reader's comments

"The change follows concerns that the previous approach incentivised universities to recruit star researchers just before the “REF census day” - That horse has already bolted. The engagement must be material and significant and over a full REF cycle. Where academics primarily based in foreign universities are engaged, it must be separately disclosed and evaluated to ensure fairness. The costs of game playing are nil to those that stand to benefit from it - the salaries of those engaged purely for the REF are borne by the universities and ultimately by the taxpayer, but benefits in the form of promotions and bonuses for good REF performance will accrue to those individuals managing the process. This once in seven or eight year exercise has created an undesirable market for internal REF consultants, internal REF reviewers etc. The REF needs to move to an annual process, with sensible use of metrics and other information that can easily be generated from the regular information and reporting systems in universities. If even corporations with the most complex operations can produce annual reports for audit every single year, why cannot universities do this? In the age of AI, why is this not possible?
I agree with @acerpacer - REF has so many academics involved in it that benefit financially and career-wise, it is a lost cause to tweak its edges. A major revamp is required.
Yes I agree, but then some disciplines will argue for the importance of the 'sacred cow' of peer review and the necessity of a 'holistic' judgment, adding yet another onerous and unwieldy level of 'peer review' to that which has already taken place for the research to have been published in the first place. This entails armies of external and internal reviewers (at all levels) and bloated panels of so-called 'experts', hijacking this by now sclerotic process for their own career advancement and progression. When the REF is over, these experts then engage in well-paid institutional reviews thus further enriching themselves while redundancies are the order of the day for other academic staff not so well-connected to serve the academic community so selflessly. Too many of them have too much invested in this absurd and inefficient system, which everyone is cynical about, for it to be rationalized and reformed. This seems an area where AI and metrics could be deployed to the benefit of all.
REF has become a load of bloated, box ticking nonsense. Academics abroad laugh at this. World-leading in bureaucratic bloat.
Perhaps the question should be if REF is necessary at all. From where I am sitting, it creates unmeasured competition within and across institutions instead of a healthy research culture. Being assessed by your direct competitors is also a dubious approach (is it even legal, if we were to interpret it, say, through procurement law?). Furthermore, the emphasis on research as the key marker of value and promotion leaves many academics unwilling to teach when this should also be valued as one of the key benefits of research excellence, one that links research directly to students' learning and experience. The quantity of publications has led many to reproduce the same ideas over and over again, without necessarily being mindful of quality. This has put intensive pressure on journal editors while massively increasing the profits of privately owned journals. Lastly, the emphasis on impact has instrumentalised research and researchers to do the work that used to be attributed to others, including practitioners and policymakers. Instead of promoting a culture of engagement between the different parties, researchers are left to make all the connections in order to translate their research into practical outcomes and policy, which they are then asked to evidence. It is a very one-sided approach and one that is increasingly leading to burnout.
