How can we trust universities to give us accurate data and stop them manipulating their figures to boost their standing in the Times Higher Education World University Rankings?
This is a common question from senior sources, who insist that while their own institutions are squeaky clean, others may not be.
It is a serious point. Marney Scully, executive director of policy and analysis at the University of Toronto, has shown how student-to-staff ratios of anything from 6:1 to 39:1 can be generated from the same data simply by playing with the definitions.
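Her point is easy to reproduce. The sketch below uses entirely invented head counts (only the 6:1 and 39:1 endpoints come from Scully's work) to show how switching between headcount and full-time-equivalent students, and between narrow and broad definitions of "staff", stretches the same institution across that whole range:

```python
# Illustrative only: invented head counts, not Scully's actual data.
students_headcount = 39_000   # every enrolled student, full- or part-time
students_fte = 30_000         # full-time-equivalent student load
core_academics = 1_000        # teaching-and-research staff (FTE)
research_only = 2_000         # research-only contracts
sessional_tutors = 2_000      # part-time and sessional tutors

# Narrow staff definition against a generous student count: looks understaffed.
narrow = students_headcount / core_academics
# Broad staff definition against FTE students: looks lavishly staffed.
broad = students_fte / (core_academics + research_only + sessional_tutors)

print(f"narrow definitions: {narrow:.0f}:1")  # 39:1
print(f"broad definitions:  {broad:.0f}:1")   # 6:1
```

Same institution, same underlying records; the only thing that changed was the definitions.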
As well as using bibliometric data and the results of our global reputational survey, our rankings will rely to some extent on data supplied by the institutions themselves.
To protect the integrity of the process, our data provider Thomson Reuters has taken a number of steps. First, the figures will be checked with recognised national agencies, such as the UK's Higher Education Statistics Agency, wherever possible.
Second, the project boasts a team of full-time data editors who are based in different regions and can draw on local knowledge to "sense check" returns.
Third, Thomson Reuters is committed to making more detailed profile data on each institution available to the academy in some form to allow peer-on-peer checking.
Finally, we plan to use a far wider range of data variables to compile the rankings than in the past. Only six indicators were used for the 2004-09 rankings, including student-to-staff ratio figures (as a proxy for teaching quality) that were given an excessively high weighting of 20 per cent.
Our new approach means that no single indicator can have too much influence over an institution's final position.
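The arithmetic behind that claim is straightforward: in a weighted composite, a single indicator's maximum influence on the total is its weight multiplied by its possible score swing. A minimal sketch, with invented scores and an assumed even 13-way split of the weight (only the old 20 per cent figure comes from the text above):

```python
def composite(scores, weights):
    """Weighted sum of normalised indicator scores (0-100 scale)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(s * w for s, w in zip(scores, weights))

# Six indicators, one at 20 per cent. Only the 20 per cent figure comes
# from the text above; the rest of the split is invented for illustration.
six_weights = [0.20, 0.20, 0.20, 0.20, 0.10, 0.10]
# A broader scheme sharing weight evenly across 13 indicators
# (the indicator count here is hypothetical).
thirteen_weights = [1 / 13] * 13

# Bump a single indicator by 10 points and see how far the total moves.
flat6, bumped6 = [50.0] * 6, [60.0] + [50.0] * 5
flat13, bumped13 = [50.0] * 13, [60.0] + [50.0] * 12

print(composite(bumped6, six_weights) - composite(flat6, six_weights))
# 2.0 points: a heavily weighted indicator can swing the overall result
print(composite(bumped13, thirteen_weights) - composite(flat13, thirteen_weights))
# ~0.77 points: the same swing matters far less when weight is spread
```

Spreading the weight does not make any one figure honest, but it does mean that gaming a single return buys far less movement in the final table.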