Citation ideas 1

January 4, 2008

So much noise, and so little signal, sense or reflection ("Hepi has 'doubt' over citations", December 14).

  • "Quality" versus "impact": what have the research assessment exercise panels been evaluating for the past 20 years? Whatever that was, in every field tested so far citation counts have turned out to be highly correlated with it.
  • The only way to interpret and validate a metric is to test it against something you have already validated and already know how to interpret. That is why metrics have to be cross-validated against panel rankings (that is not "peer review"; it is panel re-evaluation) in the 2008 parallel metric/panel RAE, metric by metric (but with the metrics validated and weighted jointly), and separately for each discipline.
  • Citation counts are promising metrics, with some history and some demonstrated correlation with the panel rankings, but they are by no means the only ones. The right way to go about it is to cross-validate a whole battery of metrics jointly (a sketch of what that could look like follows this list).
  • Once cross-validated, the weights on the metrics can be calibrated and adjusted, continuously or at intervals, by subsequent panels, field by field. That is not a matter of ritually re-reading four articles by each researcher, as in the prior, profligate RAE; nor was that "peer review" in any case. Peer review is done by journal referees, selected, in the case of the best journals, from among the world's top experts, not from a single country's rag-tag generic panel.
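
To make the joint cross-validation and weight calibration described above concrete, here is a minimal sketch, not the author's method. Everything in it is hypothetical: the metric battery, the panel rankings and the single simulated field are placeholders, and ordinary least-squares regression with k-fold cross-validation stands in for whatever estimator a real exercise would choose.

```python
# Hypothetical sketch of cross-validating a battery of metrics against
# panel rankings and jointly estimating their weights. All data are simulated.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical battery of candidate metrics for 200 researchers in one field
# (columns might be citation count, download count, h-index, co-citation score).
metrics = rng.normal(size=(200, 4))

# Hypothetical panel ranking (the already-validated criterion) for the same
# researchers; in a real exercise this would come from the parallel panel RAE.
panel_rank = metrics @ np.array([0.5, 0.2, 0.2, 0.1]) + rng.normal(scale=0.3, size=200)

# Fit all metrics jointly, then cross-validate to estimate how well the
# weighted battery predicts panel rankings on held-out researchers.
model = LinearRegression()
scores = cross_val_score(model, metrics, panel_rank, cv=5, scoring="r2")
print("Cross-validated R^2 per fold:", scores.round(2))

# The jointly estimated weights would then be recalibrated field by field
# and adjusted at intervals by subsequent panels.
model.fit(metrics, panel_rank)
print("Jointly estimated metric weights:", model.coef_.round(2))
```

The point of fitting the metrics jointly rather than one at a time is that correlated metrics (citations and downloads, say) do not each get full credit; each weight reflects that metric's independent contribution, and the whole set would be re-estimated separately for each discipline.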

We would all like to be weighed in a cosmic balance focused on our own unique merits alone. But we can only hope for that degree of individual attention from our own parents; there simply is not enough of it to go around if everything is to be weighed on some absolute, omniscient scale. We have no choice but to rely on objective correlates.

We live in an era when an increasingly rich and diverse spectrum of metrics is becoming available online. We should be setting about the task of testing, validating and then calibrating their weights, instead of continuing to air our uninformed a priori judgments about the inadequacy of metrics over and over again.

Stevan Harnad
Professor of cognitive science
Southampton University.
