Turning the tide of admin requires hard thinking

The REF is a prime example of the sort of elaborate, burdensome process that may add comparatively little value

October 17, 2019

A few weeks ago, Philip Moriarty, professor of physics at the University of Nottingham, made a startling confession on his blog. He and a colleague had been asked by another physics department to review its mock research excellence framework submission.

Having previously complained publicly about the difficulty of distinguishing 3* (“internationally excellent”) from 4* (“world-leading”) papers, he wanted to “see for myself how the process works in practice”.

He estimates that he and his colleague agreed on 70 per cent of the star ratings. “But what set my teeth on edge for a not-insignificant number of papers…was that I simply did not feel at all qualified to comment,” Moriarty writes. For this reason, he would have declined a journal’s invitation to review them.

So what did he do? “I can’t quite believe I’m admitting this, given my severe misgivings about citation metrics, but, yes, I held my nose and turned to Web of Science,” he confesses.


His problem with bibliometrics is a common one: they are “a measure of visibility and ‘clout’ in a particular (yet often nebulously defined) research community; they’re not a quantification of scientific quality”. But this conviction previously led him to some “not-particularly-joined-up thinking” on the REF, embracing the argument that since bibliometrics can’t be trusted in isolation, they have to be supplemented “with ‘quality control’ via another round of ostensibly expert peer review”.

Everyone involved in REF planning ahead of next year’s submission deadline knows what a huge amount of work, stress and expense the exercise creates. Indeed, many academics see it as the worst extreme of the administrative tsunamis that sweep them away every time they approach the lab or the library.


In a survey carried out by Times Higher Education earlier this year, academics widely blamed administrative overload for the fact that the traditional 40/40/20 split between teaching, research and administration is no longer possible within reasonable working hours. This week’s cover feature follows up on that perception, asking academics to elaborate on how they cope.

A certain amount of administration is inevitable, especially in an era of external accountability, and academics can’t expect to be entirely exempt. But, as several of our contributors note, universities can be their own worst enemies in this regard – and the REF is a prime example.

Those who share Moriarty’s scepticism about metrics cling to its vast peer review mechanism. But if, as he now suspects, it is no more – and possibly less – accurate than lighter-touch alternatives, surely it should be reconsidered.

It may be that the rationale for the REF becomes obsolete if, as mooted by THE last week, a big hike in UK research spending is accompanied by the amalgamation of all existing funding mechanisms into one giant scheme, presumably based on project grants – although, as in Australia, the government may still require a national research audit even in the absence of funding consequences.


Dorothy Bishop, professor of developmental neuropsychology at the University of Oxford, has suggested distributing quality-related funding, currently dependent on the REF, on the basis of research volume. She has previously suggested using a departmental h-index.
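For readers unfamiliar with the metric, here is a minimal sketch in Python of the calculation a departmental h-index would involve; the function and the citation counts are invented for illustration, not drawn from Bishop’s proposal.

def h_index(citations):
    """Return the largest h such that at least h papers have h or more citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Citation counts for a hypothetical department's papers, pooled across authors
departmental_citations = [52, 31, 18, 12, 9, 6, 4, 2, 1, 0]
print(h_index(departmental_citations))  # prints 6: six papers have at least 6 citations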

More controversial still would be to make greater use of journal impact factors. These are banned in the REF and often decried as the worst bibliometric evil, since they judge a paper not by how many citations it receives itself but by the average number garnered by all papers in the same journal. Yet a publication decision is based on careful review of a manuscript by (in the standard scientific case) three people with genuinely deep knowledge of the specific subfield.
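As a point of reference, the standard two-year impact factor is simple arithmetic: citations received this year to a journal’s papers from the previous two years, divided by the number of citable items it published in those years. A minimal sketch, with invented figures for a hypothetical journal:

def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Citations received this year to papers from the previous two years,
    divided by the citable items published in those two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# e.g. 1,200 citations in 2019 to the 400 papers a hypothetical journal
# published in 2017-18 give an impact factor of 3.0
print(impact_factor(1200, 400))  # prints 3.0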

Yes, the numeric precision of impact factors may be excessive, but the pecking order they reflect is real and well known to editors, referees and authors alike. Surely acceptance for publication by a journal of a certain standing must count for something – however many citations the paper in question goes on to garner.

And while some top journals may unduly value novelty at the expense of rigour, no doubt that is true of some REF panellists, too.


These are certainly meaty issues for the new Research on Research Institute (Rori) to chew on. The institute, founded this month by a group including the Wellcome Trust, two universities and a technology firm, fills a long-standing gap in a sector that has been oddly reluctant to apply its critical methodologies to itself. But it is imperative that it examine such issues with the same willingness to challenge and overturn ingrained convictions that Moriarty exemplifies.

It should also ponder another of Bishop’s suggestions: that administrators and academics “stop trying to design perfect, comprehensive evaluation systems”. There are no perfect ways to judge anything, after all: not least because there are no objective right answers to be found.


In that scenario, the assessment method with the most to recommend it is surely the one that keeps the politicians happy while imposing as few administrative requirements and perverse incentives as possible on busy, committed academics.

paul.jump@timeshighereducation.com


POSTSCRIPT:

Print headline: Overcoming the tsunami



Reader's comments (1)

There are many forms of research assessment working in parallel in universities. What they all have in common is that they are imperfect and time-consuming. Even easily accessible metrics (citation count, h-index, journal impact factor) all have to be slavishly inputted for personal development review every year, six months or even three months. It is entirely unnecessary (as Philip Moriarty points out) to try to determine the precise 'value' of a piece of work.

You even wonder whether it is worth publishing at all when each 'output' is evaluated five times over: first by the journal editors; secondly by your closest colleagues in an internal mock REF; thirdly by management, who may question the judgement of that process if there are too many 3* and 4* ratings; fourthly by external mock REF reviewers in the sort of exercise Moriarty describes; and fifthly by the REF panel itself. Only in the last of these is the author not faced with a numerical judgement for which they are called to account.

This is an absurd state of affairs. Nobody can live like this long-term. I have lost count of the number of my colleagues leaving academia around age 50. The debate over age caps in academia is irrelevant when academic careers are unsustainable because of the regime of surveillance and punition we have created. Dorothy Bishop is right – let's have funding based on a headcount of researchers. Then watch universities scramble to undo all those 'teaching and scholarship' contracts when it is a case of claiming for research funding.
