Hefce's Hobson's choice

The REF will leave the funding council with many of the same problems that plagued the RAE, argues Ian Marshall

February 28, 2008

Now that the research excellence framework consultation has closed, the Higher Education Funding Council for England has to decide what to do with a mountain of feedback. I actually feel sorry for Hefce, forced into trying to deliver a metrics-based solution announced by political proclamation.

We all know that the reason the former First Minister of the Treasury became involved was constant whingeing from the sector about the cost and effort required. We all complain publicly, but secretly or not so secretly we like the benchmarking, the opportunity to collect and clean data, the funding (if it appears) and the power the RAE gives to manage research.

Similarly, politicians and the Treasury send out mixed messages. They want both world-class and applied research, but at a much lower cost. This leaves Hefce trying to satisfy everyone.

Looking at consultation responses in my inbox, most of the angst appears to be directed at proposals for the metrics-based science, technology, engineering and medicine system. This ranges from disciplines classified as STEM that "want out" to non-STEM disciplines that "want in". The term "STEM envy" was coined at the first London consultation to describe the view that if you were not metrics-based you were perceived as being of less value.


The arguments against publication metrics are now well rehearsed: the use of a commercial service; the US focus of the Web of Science; the lack of coverage in certain disciplines; exclusion of key publication types; differential coverage in different disciplines that will be judged in the same super-unit; the perverse effect of excluding self-citation; and doubts about the validity of using citation analysis as a proxy measure for research publication quality.

The role of the STEM expert panels is also raising questions. Do they just set up the metrics and let them run, or do they validate the outcome? While the latter sounds attractive, it would be difficult to justify a panel's decision to change an outcome because it had not produced the result it expected. The next stage could be the courts.


Practically everyone has done some investigation to see how much effort is required to validate the information in the Web of Science. The effort appears to be far from trivial, with examples given of missing publications and false positives. Cleaning up this information will require considerable initial effort and then an annual validation process. Running different systems for STEM and non-STEM subjects will significantly increase costs.

Although it is not well defined in the REF documentation, the non-STEM light-touch peer review "informed" by metrics is also raising concerns. Most would argue that there is no such thing as light touch. Evidence from Quality Assurance Agency audits, subject reviews and professional body accreditations suggests that whenever we start with a light-touch system, or move to one, universities do not reduce the effort they put in to secure a "good" result. Most of the complexity of the RAE has built up over time. A quick review of the 1996 and 2001 documentation shows the earlier exercises were much simpler affairs. The RAE became more complex as the financial and reputational stakes rose and the likelihood that someone would sue the funding councils required them to become more consistent, objective and better documented.

The final issue for most people is timing, especially for the STEM scheme. As it stands, universities can just about resource the RAE scheme every five to seven years. While we were all expecting to undertake some form of exercise in about 2013, delivering a new STEM metrics-based scheme before then with no national tools or additional resources in place is an optimistic goal.

My guess is that the consultation will leave Hefce with the same problem that has occurred every time we have tried to change the RAE. It is hard to get consensus about what we should use to measure research quality, especially when it has a direct link to funding. You can use a basket of metrics that appear to be "easy" to measure, such as citation analysis, but only if the community agrees they are a good proxy for research quality. There is no point in replacing peer review, with all its faults, with something that generates more work.


While it would not be an enjoyable meeting with the Prime Minister, maybe the funding council should suggest continuing the debate and doing more work on the proposed metrics and the systems required to capture them, validating both against 2008 and 2013 data.
