Peer judgment must remain a core feature of the system being designed to allocate more than a billion pounds a year in research funding, some of the sector's most senior research figures have warned in a series of interviews with Times Higher Education.
Eleven chairs of the subpanels assessing the quality of UK research for the 2008 research assessment exercise have given frank personal perspectives on the future of research funding.
After this year's RAE, a new system to allocate funding will be set up, judging research quality using a combination of peer review and "statistical indicators", such as counting the number of times an academic's published work is cited in journal papers (bibliometrics), and the number of research students a department attracts.
The chairs, representing a wide range of disciplines, express concern about the reduced influence of peer review under the forthcoming research excellence framework (REF). They also raise worries about the rigid application of metrics, such as citation counts, to assess quality and determine funding.
Panel chairs have not commented collectively on the REF on this scale before. Keith Richards, chair of the geography and environmental studies subpanel (see box below), even argues that as bibliometrics are unsuitable for use in some subjects, they should be rejected in them all.
The Higher Education Funding Council for England (Hefce) is still developing the detailed model for the REF. Hefce said that the system would combine statistical indicators, including bibliometric indicators such as citations "wherever these are appropriate" with "light-touch expert review" and non-numerical information. The balance between these different measures will vary "as appropriate" for each subject.
It has launched a pilot bibliometrics exercise and will shortly consult the sector informally, with the aim of bringing forward firm proposals in spring next year.
"The general shape of the framework is clear enough; the question now is the detail," said Paul Hubbard, Hefce's head of research policy.
He said the proposals would "say clearly" how bibliometrics would be applied and which suite of indicators, "qualitative and quantitative", Hefce proposes to use. Hefce is then likely to request further advice on how the suite of indicators will be used in different subjects.
Bahram Bekhradnia, director of the Higher Education Policy Institute, said that the overriding message from the chairs - that metrics could play a part but under no circumstances could reliable assessments be based on metrics alone - was "common sense". "This of course is the way the RAE was developing anyway ... it is a pity it has taken so long and so much blood has had to be spilt to arrive at this realisation," he said.
WHAT THE EXPERTS SAY
David Otley, Lancaster University
Chair: main panel covering economics/accounting/business studies/information management
I believe research quality can be reliably assessed only using expert peer review, although this would benefit from being informed by suitable metrics. Bibliometrics (citations) would clearly be helpful, although it needs to be recognised that citation databases are inadequate, and that good citation evidence lags publication by several years. Citation patterns also differ by subdiscipline and need careful interpretation by experts. The use of such metrics without proper expert review would lead to poor decisions, and would also carry the danger that new researchers would orient their work towards areas that tend to collect high citations, reducing the diversity and originality of UK research.
Keith Richards, University of Cambridge
Chair: geography and environmental studies
Some quantitative indicators can be used, just as they are in the RAE. But geography, like most disciplines, has internal diversity; its subfields have varied sizes, traditions and publishing characteristics, with members publishing in a variety of forms. Citation counts will be normalised in the REF, against counts for a range of "relevant" journals. It is impossible to see how that range can be defined to reflect the diversity of a subject, especially if groupings larger than main panels are used. The result will be to distort authorial practice and transfer excessive power to certain editors and publishers. All academics should unite to reject bibliometrics. Convenience for some should not outweigh the intellectual grounds of those who wish to retain peer review because it respects diversity.
Hugh McKenna, University of Ulster
Chair: nursing and midwifery
Peer review informed by bibliometrics (citations) and quantitative and qualitative research environment data would be my option for nursing and midwifery. This would take account of our emphasis on user and applied research, and enable it to be handled more sensitively than with a purely citation-based approach. The current RAE does take account of quantitative metrics on research staff and students, number and types of studentships and amount and types of income, and these should continue to be assessed in the REF. Qualitative indicators relating to research ethos and strategic focus should be introduced.
Peter Taylor-Gooby, University of Kent
Chair: social work and social policy and administration
Peer review should predominate in research assessment because it is best fitted to command the range of sources, methods, activities and demands in the field, and identify significance and originality in a rapidly changing context. The most useful supportive metrics concern research student completions and research income, taking into account the wide range of relevant sources of funding. Citation counts are difficult to apply, since it is hard to define a unitary established research community against which to normalise them. Attempts to do so may weaken the capacity to recognise innovation and damage the UK's leading international status in work in this area.
Celia Wells, Durham University
Chair: law
My panel emphasises the value of detailed peer review of outputs above all other indicators of quality. We do not regard secondary indices of quality, such as journal ranking or citations, as helpful proxies for quality. Many high-quality legal outputs are published in monographs and in edited collections for which citation databases do not exist, and are unlikely to exist for the foreseeable future. Research income and research student completions are relevant to understanding the operation and aspirations of a law school but they do not in themselves provide evidence of the quality of research activity. We firmly believe that our approach is essential if we are to ensure that the legal academic community continues to have confidence in the process.
Bernard Silverman, University of Oxford
Chair: statistics and operational research
Whatever system or combination of metrics or peer review is decided, there is a need for an overarching process in which the different aspects are combined, not merely by a mathematical process of addition or weighted averaging, but by a more holistic assessment of the total contribution, quality and potential of the research. The whole cannot simply be assessed as the sum of the parts.
Catherine Davies, University of Nottingham
Chair: Iberian and Latin American languages
Citations are not useful indicators of research excellence in the arts and humanities. Other metrics - for example, overall research income, numbers of postgraduate students, postgraduate scholarships, numbers of research-active staff, overall number of research outputs - are helpful, but only if used in conjunction with peer review. The peer-review process is laborious, but there is no way round it to identify quality. A 500-page book might be vacuous, a 25-page article earth-shattering, so only experts in the subject can make the assessment of outputs.
Sandy Heslop, University of East Anglia
Chair: history of art, architecture and design
RAE 2008 has provided a good model for the future: peer review of outputs examined in detail, with metrics playing some part in the assessment of environment and esteem but not in judging the quality of research per se. I do not believe there is a viable alternative for humanities. None of the other ways of monitoring excellence so far proposed is a substitute for detailed expert review and bibliometrics (such as citations) are an unreliable guide to quality. The best way to lighten the touch is to disaggregate environment and esteem from research quality as there is no clear and direct link between them. A way forward might be to develop a biennial rolling programme for assessing outputs alongside a periodic review, say every six years, of environment and esteem.
Alan Silman, University of Manchester
Chair: epidemiology and public health
In the 2001 RAE, grades could be well reflected by metrics such as citations, student numbers and grant income. There is a substantial subjective element in the current process, which is inevitable given the nature of peer review (and which our subpanel has done its best to eliminate by blind double marking). I believe that a bibliometric approach is likely to be more successful in a biomedically based area such as ours than in, say, the humanities. All qualitative indicators of global research quality are very difficult to assess.