Change rules to end game-play 2

September 29, 2006

Which is fairer of the following two systems? First, one in which an institution with a department of 100 academics, 99 of whom are worthy of a 1* research rating and one of a 5* rating, chooses to submit just the one outstanding person, and thus ends up with a 5* rating. Another department of 50 staff rated 3* and 50 rated 5* has all its staff submitted, ending up with a 4* rating. And, second, a system in which all staff must be submitted, so the first department ends up with a 1* (I assume) and the second with a 4*.

The fact that submission strategy alone can cause so much variation in outcome suggests that the research assessment exercise is deeply flawed. How can we trust ratings when universities can choose to play such games? Even if institutions simply wish to maximise future research-based income, how can they make these sorts of judgments when they do not know what the funding criteria are going to be?

The current RAE is said to minimise game-playing, yet anyone who believes this is clearly detached from reality. Or perhaps it's all just a cunning ploy to make us embrace metrics.

Trevor Harley
Dundee University
