"Is the ratio of servers to customers in a restaurant a good proxy for the quality of the food?"
This was one of the rather scathing rhetorical questions posed by a US academic when I invited readers of the online publication Inside Higher Ed to critique our old World University Rankings methodology (http://bit.ly/bt7EFN).
He was referring to our use of staff-to-student ratios (SSRs) as a proxy indicator of teaching quality in the old rankings (2004-09). Some 20 per cent of the overall ranking score rested on this single measure.
"To think that such a ratio could signify 'teaching quality' shows how serious a problem we face with rankings that privilege the availability of a metric over its validity," the academic said.
He is, of course, right. The same point was made in a paper from the Russian Rectors' Union, handed to me by Victor Sadovnichiy, president of Moscow State University, earlier this month.
It argues that "good teachers always have a lot of students, bad teachers have few".
SSR figures are also easy to manipulate and hard to verify.
David Graham, provost of Concordia University in Canada, opened the web discussion by highlighting research showing that ratios of anywhere from 6:1 to 39:1 can be produced from the same institution's data.
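To see how such a spread could arise, here is a minimal sketch, assuming the gap comes from different counting conventions (whether part-time students are weighted by study load, whether research-only and casual staff are included, and so on). The figures and the two conventions are invented for illustration; they are not the data behind the research Graham cited.

```python
# Hypothetical illustration of how counting conventions swing an SSR.
# All figures below are invented for this example, not real institutional data.

students_headcount = 30_000    # every enrolled student counted once
students_fte = 22_000          # part-time students weighted by study load

academic_staff_headcount = 2_400   # everyone on an academic contract
teaching_staff_fte = 900           # full-time-equivalent staff who actually teach
                                   # (research-only and casual staff excluded)

# Flattering convention: full-time-equivalent students over all academic staff.
flattering = students_fte / academic_staff_headcount      # roughly 9:1

# Unflattering convention: total student headcount over teaching-only FTE staff.
unflattering = students_headcount / teaching_staff_fte    # roughly 33:1

print(f"SSR, convention A: {flattering:.0f}:1")    # -> 9:1
print(f"SSR, convention B: {unflattering:.0f}:1")  # -> 33:1
```

The same institution, the same year, and the same underlying records; only the definitions change.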
But as teaching is a fundamental part of what universities do, an indicator of teaching quality is essential to a well-balanced ranking.
We are asking about teaching quality in our reputational survey for the 2010 rankings, and the use of SSR figures is under review - but we accept that our previous 20 per cent weighting for such a crude indicator was simply not appropriate.