Open, improving and accountable

December 20, 1996

Britain's university research assessments provide a once-in-four-years snapshot of its academic research endeavour, a picture open for the world to see. The 1996 results show both the level of activity and the quality of research rising. They show a dynamic higher education system which can boast two supremely excellent, large universities - Oxford and Cambridge - locked in a long-running battle for top place, a duel which probably sharpens the performance of both. Alongside Oxford and Cambridge are three other institutions - LSE, Imperial and University College, London - which consistently top the league tables at each research assessment with excellent world-class work across the whole range of what they do.

And they show a number of general universities, big civics and smaller 1960s foundations, which have a high proportion of their staff - usually more than 80 per cent - engaged in research, with little of it rated below grade 3.

But perhaps most important is that the ratings also show that the system enables growth and improvement. Bristol, shocked to find itself rather low down in 1992, has moved up smartly. Goldsmiths, once at the bottom of the old university table, has improved spectacularly. Cardiff seems to be well set to achieve its aspiration to be Wales's research leader. Bath and Lancaster have improved sharply.

Ratings for the new universities show the possibilities for growth. It is, as it should be, possible to establish excellent research teams anywhere two or three are gathered together. This would be unlikely if we were to adopt a rigid hierarchy with designated research universities. Seven departments in new universities or colleges won 5 ratings this time compared to one (Westminster) last time. Two of them are at John Moores. At Thames Valley the linguistics team is rated 5.

Cynics will no doubt say - as they do whenever improved school exam results are published - that better performance obviously means grade drift rather than genuine improvement. They may also put better rankings down to academics' skill at mastering whatever game is in town. To some extent the cynics may be right. Oxford has certainly improved its average score compared to last time by not entering so high a proportion of its staff. Aberdeen, Edinburgh and Lampeter entered more.

But it is equally plausible that the great research race has got more people out of bed earlier and in front of their word processors. Publish-or-perish was once seen as a troublesome transatlantic disease to be avoided. Now publish-or-administer is a threat which is as motivating for individuals as money is for institutions. The research output from British universities has increased and its quality has probably not diminished, despite the temptation to salami-slice findings, recycle articles into books (both reduced by the four-items rule) and ride piggyback on research students.

Overall, the picture is of a system which is gradually differentiating both between institutions and within them. Pages iii-vii show at a glance which are the overwhelmingly research-dominated institutions. They also show differentiation within universities. In many, mainly new, universities the majority of academics are not expected to be research active, but typically between a quarter and a third of staff are engaged in research.

The bar charts, which show the quality of the research that is being done, will in some cases raise the question of whether it is really worth the institution's while to divert staff time to an activity which is rated low. But when this year's figures are set alongside the 1992 figures (the first year in which the former polytechnics were assessed alongside older universities), it is evident that relatively small investment has produced marked improvement. Hertfordshire, for example, with strongly applied research departments (see page 7), seems to be consolidating a useful level of activity which will underpin local and regional activity and bring in industrial funding. If value added and the fostering of new research growth were the Government's criteria, the record of the new universities should bring rewards.

The institutional data has not been officially provided in this readily accessible form before. In previous years all the emphasis was on subject ratings - a discipline-based rather than an institutional competition. There will be many criticisms of the way in which panels have reached their verdicts; whether assessors really had time to read all the material submitted and look for cheats; whether sufficient weight was given to applied work, and so on. With so much in terms both of funding and prestige at stake, the disappointed will naturally seek flaws. But the data is certain to form the basis of important judgements in the next few years.

What judgements and by whom? Quickest to draw conclusions will be the market. Subject ratings, giving both quality and numbers by department, are of particular value to consumers, providing the information needed by potential students, graduate recruiters or research sponsors looking to pick a big graduate school, a high-prestige department, or a likely source of leading-edge research. Britain will now, for example, be seen to be a world leader in biochemistry, if it was not already.

Because the assessments are produced by experts in the subjects concerned, there is also scope for cross-discipline debate. Are our physicists really 0.7 units of assessment better than our chemists, as the table on page ii suggests? Are our experts on the Middle East and Africa more than a unit better than the people who study Europe and America? The results also show that universities which have been absorbing nursing colleges have done them no favours by entering them in the exercise. Nursing has come bottom of the pile by some distance, raising the whole question of the nature of nursing research.

When subject research ratings are set alongside teaching quality assessments, as we have done on the THES Internet site, THESIS (a service not available from any other source), the top universities for particular subjects in both teaching and research can be identified clearly. This has a number of effects. First, it is no longer possible for politicians to accuse universities of being a black hole, unaccountable for the public money they receive. British higher education is now uniquely open and accountable.

Second, the information now available worries institutions which do less well, by making it obvious that all universities are not performing work of the same standard. Third, it provides the market with the information it needs to function effectively and will therefore accelerate the rate of differentiation. This information, used round the world by customers, is what is needed to counter misleading over-selling of the kind identified by the Higher Education Quality Council in its report published this week (page 1). This market discipline is harsh. It is, however, the price of the flexibility which is allowing newer universities to develop. It is the mechanism whereby departments or institutions rise or fall over time.

Governments may be tempted - egged on by some winning institutions - to use the accumulating data to press for a designated tier of research universities. This must be resisted. They may equally be tempted to use the data for closer planning of provision in general. This too should be resisted. This year's RAE shows that institutions manage well and that there is much growth and development in the system. The councils (and the Government) should be content that they have devised a useful system of incentives and let the institutions get on with the planning.
