The academic leading a review of how metrics might be used in the UK’s next Research Excellence Framework (REF) has insisted he is open to recommending their more extensive use in the right circumstances.
Seven years after he was asked to lead a major inquiry into the use of metrics in research assessment and management, James Wilsdon, Digital Science professor of research policy at the University of Sheffield, will once again undertake a review of the same issue.
The “short, sharp, evidence-informed look” at the current and potential uses of metrics, which was commissioned by Research England shortly after the results of the 2021 REF were released, will report back in mid-September.
Professor Wilsdon’s 2015 report, The Metric Tide, concluded that it was “not currently feasible to assess the quality of research outputs using quantitative indicators alone”, and the study is regarded by some as an important work in efforts to resist metrics-led evaluation of academics. REF assessments, for example, continue to rely mostly on expert peer review.
But Professor Wilsdon rejected the suggestion that the findings of the latest review – dubbed “The Metric Tide Revisited” – were a foregone conclusion given his views and those of his colleagues: both Stephen Curry, assistant provost for equality, diversity and inclusion at Imperial College London, and Elizabeth Gadd, research policy manager at Loughborough University, have written extensively about the limitations of research metrics.
“Metrics are a social technology and, like any technology, can be used for brilliant things or not-so-good ends,” said Professor Wilsdon, who said that there should be “transparency about how we use metrics”.
“It is daft for the research community to set itself in opposition to data collection – after all, it is what we do as researchers ourselves,” he said. “However, there are good reasons for not going down a full metrics-only path for the REF, as it would deliver a different outcome for the sector.”
Using bibliometrics to assess quality may have some merit for the purpose of allocating resources, but only in scientific subjects, suggested Professor Wilsdon. “It gets close [to peer-reviewed REF scores] for Panel A and B subjects, but it diverges for Panel C subjects [social sciences] and is totally different for Panel D disciplines [arts and humanities],” he said of the exercise, which is used to distribute about £2 billion a year in quality-related research funding to universities.
The Metric Tide Revisited group will work alongside the international panel, led by Sir Peter Gluckman, which is currently reviewing the REF, he explained. Among the issues likely to arise is whether “outputs could be further decoupled from individuals” by “moving the analysis [of researchers] further up so you no longer need 34 units of assessment”, said Professor Wilsdon, referring to the further “aggregation” of assessment.
“There is also a lot of interest in research culture and how this might be measured,” he said.
The new review comes as a University of Cambridge study claims the REF is failing to capture much of the impact of research, particularly when it comes to the influence of academics’ social media engagement, and is “due an update”.
“Ask researchers about their most impactful interactions on social media, and you’ll get a much wider range of examples than the REF covers,” explained the study’s author, Katy Jordan, a research associate in Cambridge’s Faculty of Education.
“The official language presents impact as a top-down, outward flow from universities to a waiting public, but this is an outdated characterisation – if it was ever valid at all,” said Dr Jordan.