Alison Wolf

May 26, 2006

There are extraordinarily few ways to fund university research.

You can leave it to the market and individual contracts with businesses and government departments. You can rely on private philanthropy. Or you can set up a dedicated stream of public financing. Most countries, including the UK, do all three. They also argue about how to distribute the third type of funding, with our complaints centring on the research assessment exercise.

Handing out general funding for research was simple when the system was small, and is much more vexed today. Moreover, the options are again pretty limited. Governments can use a formula. They can rely on expert peer group judgment, or they can leave it to bureaucratic (Civil Service) discretion. That is about it.

Many countries hand out large sums of money through research councils, using expert peer review of competitive bids. This involves a lot of time for the referees; but it has enormous advantages, and not just in the quality of decision-making. It also allows you to combine national research funding with a "mixed" system (for example, of sub-national regions and states with their own different funding systems; public and private institutions; or universities and institutes).

However, competitive bids of this sort are better suited to fairly self-contained, large empirical projects than to maintaining research infrastructure, or to the sort of scholarly research that involves oneself, the library and the internet. If a country believes all academics should do research, then some sort of formula based on head counts is simple and administratively economical. So traditionally in the UK and elsewhere, "core" university funding, based on numbers of students or academics (with or without subject weightings), has supposedly built in time, and facilities, for research. Formula funding, in other words.
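To make the arithmetic of formula funding concrete, here is a minimal sketch in Python; the subject weightings, unit rate and department profile are invented for illustration, not actual UK figures.

```python
# Minimal sketch of head-count formula funding. The weights and the
# unit rate are illustrative assumptions, not actual UK figures.

# Per-capita subject weightings: lab-based subjects cost more to support.
SUBJECT_WEIGHTS = {"history": 1.0, "economics": 1.1, "chemistry": 1.7}

UNIT_RATE = 10_000  # hypothetical funding per weighted head, in pounds


def formula_allocation(head_counts: dict[str, int]) -> float:
    """Total grant: sum over subjects of head count x weight x unit rate."""
    return sum(
        count * SUBJECT_WEIGHTS[subject] * UNIT_RATE
        for subject, count in head_counts.items()
    )


# An invented department profile: 40 historians and 25 chemists.
print(formula_allocation({"history": 40, "chemistry": 25}))  # 825000.0
```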

This can work well if your system is small because you can afford to be generous. It is also fine if you do not care about research quality and excellence. You can claim that the formula provides enough funding for academic research even when it obviously does nothing of the sort. But what happens if you want to have a publicly funded mass system and research excellence?

As this newspaper commented a few weeks back, our RAE "has few defenders inside higher education or in Whitehall". However, it has always struck me as a device of administrative genius, allowing the state to fund universities unequally while maintaining that it treats everyone the same (because all students in a given subject get the same notional teaching allocation). Like many other recent UK public-sector reforms, it tends to be studied, envied or copied by other countries: the rule in life being that we are all far more aware of the problems with our own systems than with other people's.

Also, in relying on peer review, the RAE has bucked the trend. In a system awash with targets and formulae, that is remarkable. Now, however, the Government has proposed moving to a formula for research as well as teaching, albeit one that is explicitly based on "quality" rather than head counts, and probably with a heavy weighting for research council grants.

It points to a very high correlation between these and RAE scores - as indeed one would expect if peer review is at all reliable. However, the fact that institutions tend to have the same relative position (high or low) on two measures ensures that the two are highly correlated but tells you nothing about their absolute values. Sure enough, as the Higher Education Policy Institute has shown, a metrics-based system would produce major and unstable changes in the grants that different universities receive.
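A toy calculation, with invented numbers, shows why high correlation is compatible with very different absolute outcomes: two measures that rank institutions identically can still split the same funding pot in sharply different ways.

```python
# Illustrative only: invented scores for five universities showing that
# agreement on relative position says nothing about absolute grant amounts.

rae_scores = [5.0, 4.0, 3.0, 2.0, 1.0]        # hypothetical RAE-style scores
council_income = [50.0, 10.0, 5.0, 2.0, 1.0]  # hypothetical grant income, in £m


def pearson(xs, ys):
    """Pearson correlation, computed by hand to keep the example self-contained."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)


print(round(pearson(rae_scores, council_income), 2))  # high, about 0.81

# Yet allocating the same pot pro rata to each measure gives very different grants:
pot = 100.0  # £m to distribute
by_rae = [pot * s / sum(rae_scores) for s in rae_scores]
by_income = [pot * s / sum(council_income) for s in council_income]
print([round(x, 1) for x in by_rae])     # [33.3, 26.7, 20.0, 13.3, 6.7]
print([round(x, 1) for x in by_income])  # [73.5, 14.7, 7.4, 2.9, 1.5]
```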

Different metrics lead to different results. Hepi simulated the results of a variety of formulae, and one can spend a happy hour looking at just how badly one's own or one's friends' universities fare under the alternatives.

More important, vice-chancellors ought all to have learnt by now that quantitative formulae are fundamentally problematic, though their responses to the proposal suggest otherwise. Different measures (grants, publications, citations) are highly sensitive to the way they are counted.
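As a small, hypothetical illustration of that sensitivity, consider publication counts: whether co-authored papers are counted whole or fractionally can reverse the ordering of two departments.

```python
# Invented example: the same publication records ranked two ways.
# Each paper is recorded as (department, number_of_co_authoring_departments).
papers = [
    ("A", 1), ("A", 4), ("A", 4), ("A", 4), ("A", 4),  # A: one solo paper, four heavily shared
    ("B", 1), ("B", 1), ("B", 2),                       # B: mostly sole-authored
]

whole = {}       # whole counting: every paper counts as 1
fractional = {}  # fractional counting: a paper shared n ways counts as 1/n
for dept, n in papers:
    whole[dept] = whole.get(dept, 0) + 1
    fractional[dept] = fractional.get(dept, 0) + 1 / n

print(whole)       # {'A': 5, 'B': 3}     -> A ahead
print(fractional)  # {'A': 2.0, 'B': 2.5} -> B ahead
```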

And their use creates perverse incentives. Chasing one particular type of grant (or piling up numbers of publications) may be less obviously harmful than reducing waiting time in accident and emergency by keeping ambulances waiting outside. But both are distortions produced by the same management tool.

Of course, if the alternative to the RAE is not peer review but bureaucratic discretion, we should still opt for a formula. Or are there any takers for the only other alternative? How about a research-funding lottery?

Alison Wolf is Sir Roy Griffiths professor of public sector management at King's College London.
