REF non-submission and teaching moments

August 15, 2013

I find the University of Leicester’s decision to review the contracts of staff not submitted for assessment in the forthcoming research excellence framework shocking (“Memo to staff: REF non-submission may have consequences”, News, 8 August). I agree that any academic who is contracted to carry out teaching and research but fails to engage in research to the highest standards expected has to be “dealt with”. However, the REF submission decisions are not robust enough indicators to trigger such action. Casual comments, strongly held biases, impact factors of journals and the use of anonymous external reviewers with no sense of transparency or accountability are all part of the game in these internal selectivity exercises.

Selectivity in submissions itself is an unnecessary imposition arising from the arbitrary capping of the number of researchers submitted per impact case study. It is further compounded by the childish obsessions that we all seem to have with league tables computed from the profile of REF scores.

Even worse is the fact that the reverse of such judgements of under-performance – those academics who are contracted to do teaching and research but find ingenious ways to avoid undergraduate classrooms – often go unnoticed or even attract substantial rewards in some research-led universities. How fair is that in an economy in which teaching income subsidises research? I would be delighted to see a parallel proposal at Leicester to transfer those who are poor at teaching or who do none at all on to research-only contracts and take the title “professor” away from them.

What has started at Leicester is bound to propagate across the sector. It has to be strongly opposed by the academic community.

Mahesan Niranjan
University of Southampton


Reader's comments (1)

People who care about genuine quality in research and teaching need to resist simplistic metrics that will increasingly be used by administrators to exercise yet more control over academic staff. The REF and the computer tool called Snowball cannot directly measure the quality of research, never mind teaching, so they rely instead on weak surrogates such as citation indices. The latter have been strongly criticised by organisations such as DORA and the ASCB, and we do not need these name-gaming tools.