A metrics-based REF: from sexennial pain to permanent headache

Abandoning peer review could lead to never-ending assessment, micromanagement of staff and übergaming, warns Howard Hotson

July 9, 2015
Source: Parko Polo

In the aftermath of the 2014 research excellence framework, the blogosphere is abuzz with proposals for going metric next time around. Some advocate this change out of aesthetic distaste for the messiness of the current system, others out of moral disdain for institutions claiming undeserved primacy; but the most popular argument is that research assessment based on peer review is costly, laborious and time-consuming, while a metric-based alternative could produce similar rankings more efficiently, leaving senior academics free to get on with their research.

Cost-effectiveness is important in our neoliberal age. But is internalising this measure the best way to defend academic values? Is the worst thing about the REF really that it takes time? Or is that cardinal sin rather its distortion of academic priorities and subversion of self-governance?

Research rankings, after all, are not imposed simply out of innocent curiosity to know which departments are best, or even to reward the best ones with the most funding. This may have been the original intention, but the effect was to create a management tool. New public management requires agreed standards on which to base managerial decisions. Within the instrumentalist mindset of the thoroughly modern manager, what matters is not that these standards are valid but that they are authoritative.

Once an official currency for measuring academic value has been established, academic judgement – with all its unavoidable complexity – can be replaced with the much simpler, more business-like exercise of maximising the metrics. This is the mechanism by which the REF has already helped render most academic institutions of collective self-governance obsolete, by transferring control of faculties and departments to the managers who read the metrics. But peer review still retains academic judgement for the core task of assessing the value of individual publications. Replacing it with a metric-based regime will undermine individual as well as collective self-governance, surrendering that most basic of academic freedoms – the ability to determine one’s own research agenda – to a new managerial instrument far more invasive in three main ways.

First, a metrics-based REF will radically increase the tempo of assessment. The current REF, because it is so cumbersome and costly, can be undertaken only episodically: once every six years, or preferably even less often. Metric-based assessment, by contrast, can be undertaken annually, termly, monthly, weekly – as frequently as the latest updates of the Web of Science can be fed through a pro vice-chancellor’s supercomputer. From being a sexennial pain in the neck, a metrics-driven REF will become a permanent condition.

Second, this never-ending cycle of assessment will then be imposed on every individual researcher. Because peer review is a collective undertaking, the current REF is divided into 36 cumbersome panels charged with forming holistic assessments of entire departments. But the new metrics will apply to individual publications. By aggregating every researcher’s output, metric-based research assessment will permit the micromanagement, not merely of departments but of individual researchers.

Third, this continuous assessment of individuals will employ incomparably more finely calibrated criteria than the current REF. Because peer review panels record the rough consensus of small groups of fallible mortals, they measure publications in terms of five crude levels of quality, ranging from “unclassified” to “world-leading” (4*). Not so for publication metrics. Every single altered variable in the complex calculation of “impact factors” – one more page view, one more download, one more citation – will add an iota of value to the officially defined quality of each and every article, which will thus become pseudo-scientifically measurable to as many significant figures as managerial whim dictates. You think gaming the REF is disgraceful now? Just wait until the fate of whole disciplines depends on their ability to ratchet up numbers by fair means or foul.

So imagine, if you will, a brave new world, perhaps only a decade hence, in which the blunt instrument of six-yearly departmental peer review has been replaced by continuous, computer-controlled laser surgery on individual research agendas. In this metricised wonderland, managers will be able to track and graph minute changes in an individual researcher’s profile on a day-to-day basis. The trajectory of every paper, from the moment of publication, will be measured against a set of arbitrary standards. Those colleagues who fall short will be pushed into areas with better metrics, or eased out of research, or dumped from the profession altogether. In this way, whole fields will be incrementally reshaped, whole faculties restructured, whole departments downsized or abolished. The cumulative effect will be steadily accelerating homogenisation. Every outrider sucked into the vortex, every field of slow scholarship killed off to feed the “best performing”, overpopulated and hyperactive subdisciplines, will increase the centripetal mass at the centre of the system, like a dying star collapsing into a black hole.

Looking back from 2025 to 2015, will our younger colleagues – or our older selves – thank today’s senior academics for relieving them of the inexpressible tedium of reading the best work produced in their field every six years? Or will they curse the lazy short-sightedness with which we internalised the logic of neoliberal managerialism, cast aside our last chance to play an active role in the management of our own intellectual lives, and enslaved our profession to a witless machine of our own making?

Howard Hotson is professor of early modern intellectual history, University of Oxford.
