Research intelligence - Pilot wail: clarification please

Hefce's trial of the REF's impact assessment is highlighting where the problems lie, writes Paul Jump

July 22, 2010

When the universities and science minister announced a one-year delay to the research excellence framework earlier this month, few were surprised. David Willetts has made no secret of his concerns over the inclusion of an assessment of the impact of research.

His stated worries include how much weighting to give the impact assessments - the initially mooted figure of 25 per cent "seems high" to him - and how to assess "bad" impacts and take account of the role of the media in generating impact.

He has also said he is optimistic that the Higher Education Funding Council for England's pilot impact assessment will resolve many of the problems and develop a "robust" methodology that commands wide acceptance among academics.

Yet last month's conference, Impact in the Context of the REF, which reported on the progress of the pilot, suggested there is still much work to be done.

The pilot exercise is examining the impact over the past five years of research carried out since 1993 in medicine, physics, English, earth systems and environmental science, and social work and social policy.

Institutions were asked to submit one case study for every 10 academics, as well as an "impact assessment" providing an overview of departmental impact.

Submissions are being assessed by panels comprising academics and users. The resulting profiles, plus details of assessment criteria and advice to institutions on providing evidence of impact, will be published in the autumn.

Representatives from some of the participating universities admitted at the conference to being far from sure whether they had highlighted the kinds of impacts Hefce is looking for.

There was a feeling that the relevant kinds of impact vary between disciplines and that separate guidelines would be needed for each one.

Social scientists, for example, worried about how they could show that a piece of legislation influenced by their research had led to a "good" law, and how they could avoid their impact being a hostage to political fortune.

Medics wondered about the importance of the relative amounts of money raised by drug patents, while physicists struggled to individuate their contributions to large collaborative projects.

Research managers complained that it had been extremely laborious to identify and write up case studies and impact assessments.

Circulars to academics asking for examples of impact typically "went in the bin", partly due to a misapprehension that the only relevant impacts were economic.

The University of Oxford resorted to "pulling" information out of academics at meetings. Institutions also found it useful to appoint a "lead academic" with knowledge of colleagues' research - such as a head of department - to identify possible case studies.

There were concerns that the 1993 cut-off date excluded earlier research, particularly in physics, that was only now having an impact.

In addition, institutions reported problems tracking down academics who had moved on since carrying out research in the mid-1990s.

Many of the reporting institutions had trouble gathering evidence of impact. "There is a risk of counting what can be counted rather than what counts," one representative said.

This was exacerbated in medicine and physics by the reluctance of corporations even to admit to having capitalised on university research because the Freedom of Information Act rendered universities unable to guarantee them commercial confidentiality.

Approaches to writing the case studies varied according to subject and institution. Most involved an initial draft by the academic concerned, but research managers typically then entered into a mutually wearisome cycle of rewrites and tugs of war over what Rachel Curwen, a research policy officer at the University of York, called "the language barrier around selling research".

The University of Glasgow, on the other hand, was happy with its approach of using a dedicated team to write reports based on interviews with the academics.

But for all the confusion and angst among the submitting institutions, the chairs of the medical and English panels told the conference that they had been encouraged by how easily they had been able to reach a consensus on the quality of the case studies.

Sir Alex Markham, who chaired the medicine panel, said he had received many excellent submissions and some "absolute howlers". The latter typically involved "distinguished individuals saying: 'Look at my wonderful career'," he said.

Judy Simons, chair of the English panel, echoed his sentiment. "Saying 'I am a world leader' isn't enough. You need names and dates."

Sir Alex warned that most submissions had failed to get across "how universities' strategic decisions and activities had helped to generate the impact".

"We don't want to drive serendipity out of the picture, but the impression that pure Brownian motion led to impact didn't go down well," he said.

paul.jump@tsleducation.com.
