All-round excellence

When it comes to global league tables, the Times Higher Education World University Rankings offer the most comprehensive approach. Phil Baty reports

April 6, 2011

I am back in Hong Kong this week for the Association of Commonwealth Universities’ Conference of Executive Heads, and I’m back on a platform to debate the growing and controversial influence of global rankings. Here is an edited version of the essay I wrote for the ACU’s Bulletin this month, to help get the discussions started.

University rankings are always controversial.

They are highly influential and of keen interest to students, faculty, university administrators and policymakers around the world, but they face a great deal of criticism, too: there is too much movement from one year to the next; they count what can be measured, but do not measure what counts; they encourage conformity and fail to capture the diversity of institutional missions.

Worse, they reward the already powerful Western research elite and fail to capture excellence in the developing world.

Much of this criticism is valid, and some of it cannot be easily refuted. But I’d suggest that there is at least one global university ranking system that is seeking to capture a wider range of university activities and striving to see past simplistic research-driven tables.

That system is the Times Higher Education World University Rankings, powered by data from Thomson Reuters.

Of course, there is strong work being done by other providers. Shanghai Jiao Tong University’s Academic Ranking of World Universities is objective, stable and useful – but best suited to those who want to consider a narrow picture of research power. Its six indicators are restricted purely to research, almost exclusively in science.

The Higher Education Evaluation and Accreditation Council of Taiwan’s table is also impressive, but is based solely on journal articles. Spain’s Webometrics Ranking of World Universities also has clear value – but only for institutions seeking to monitor their global online visibility, an increasingly important issue in a world where brands really matter.

These tables provide useful information, but THE’s rankings are the most comprehensive. They recognise a wider range of what global universities do: pushing the boundaries of understanding and innovation with world-class research; sharing expertise with the wider world through “knowledge transfer”; working in an international environment and competing for the best staff and students wherever they may be; and, crucially, providing a rich and enriching teaching environment for undergraduate and postgraduate students.

THE magazine built on its six years of experience in the global ranking game – and its 40 years of experience in reporting on higher education – when revamping its offering. It worked with its readers and a dedicated panel of experts to publish a new set of tables with a radically revised methodology on 16 September 2010.

The tables are the result of a global survey of user needs and 10 months of open consultation, and were devised with expert input from more than 50 leading figures from 15 countries, representing every continent. They use 13 separate indicators – more than any other global system – to take a holistic view.

The World University Rankings place the most weight on a range of research indicators. We think this is the correct approach in a world where governments are investing heavily in developing the knowledge economy and seeking answers to global challenges such as climate change and food security.

We look at research in a number of different ways, examining reputation, income and volume (through publication in leading academic journals indexed by Thomson Reuters). But we give the highest weighting to an indicator of “research influence”, measured by the number of times published research is cited by academics across the globe.

We looked at more than 25 million citations over a five-year period from more than five million articles. All the data were normalised to reflect variations in citation volume between different subject areas, so universities with strong research in fields with lower global citation rates were not penalised.

We also sought to acknowledge excellence in research from institutions in developing nations, where there are less-established research networks and lower innate citation rates, by normalising the data to reflect variations in citation volume between regions. We are proud to have done this, but accept that more discussion is needed to refine this modification.
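To make the normalisation concrete, here is a minimal sketch of how a field-normalised (and optionally region-adjusted) citation score might be computed. The function, data shapes and the simple 50/50 blending of world and regional baselines are illustrative assumptions for this article, not Thomson Reuters' actual methodology.

```python
def field_normalised_impact(papers, world_baselines, region_baselines=None):
    """Illustrative field-normalised citation impact.

    papers: list of dicts, e.g. {"field": "chemistry", "region": "Asia", "citations": 12}
    world_baselines: expected citations per paper for each field worldwide
    region_baselines: optional expected citations per paper for each (field, region)

    Returns the average ratio of actual to expected citations, so 1.0 means
    "cited exactly at the world average for its fields".
    """
    if not papers:
        return 0.0

    ratios = []
    for p in papers:
        expected = world_baselines[p["field"]]
        # Optional regional adjustment: blend the world baseline with the
        # regional one so institutions in regions with lower innate citation
        # rates are not penalised (50/50 blend is purely illustrative).
        if region_baselines is not None:
            regional = region_baselines.get((p["field"], p["region"]), expected)
            expected = (expected + regional) / 2
        ratios.append(p["citations"] / expected)

    return sum(ratios) / len(ratios)


# Hypothetical example: two papers in fields with very different citation norms.
world = {"chemistry": 10.0, "history": 2.0}
papers = [
    {"field": "chemistry", "region": "Asia", "citations": 12},
    {"field": "history", "region": "Asia", "citations": 3},
]
print(field_normalised_impact(papers, world))  # 1.35: above the world average in both fields
```

The point of the example is simply that a history paper with three citations can outscore a chemistry paper with twelve once each is judged against its own field's norms.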

The “research influence” indicator has proved controversial, as it has shaken up the established order, giving high scores to smaller institutions with clear pockets of research excellence and boosting those in the developing world, often at the expense of larger, more established research-intensive universities.

We judge knowledge transfer with just one indicator – research income earned from industry – but plan to enhance this category with other indicators.

Internationalisation is recognised through data on the proportion of international staff and students attracted to each institution.

The flagship – and most dramatic – innovation is the set of five indicators used to give proper credit to the role of teaching in universities, with a collective weighting of 30 per cent.

But I should make one thing very clear: the indicators do not measure teaching “quality”. There is no recognised, globally comparative data on teaching outputs at present. What the THE rankings do is look at the teaching “environment” to give a sense of the kind of learning milieu in which students are likely to find themselves.

The key indicator for this category draws on the results of a reputational survey on teaching. Thomson Reuters carried out its Academic Reputation Survey – a worldwide, invitation-only poll of 13,388 experienced scholars, statistically representative of global subject mix and geography – in early 2010.

It examined the perceived prestige of institutions in both research and teaching. Respondents were asked only to pass judgement within their narrow area of expertise, and we asked them “action-based” questions (such as: “Where would you send your best graduates for the most stimulating postgraduate learning environment?”) to elicit more meaningful responses.

The rankings also measure staff-to-student ratios. This is admittedly a crude proxy for teaching quality, hinting at the level of personal attention students may receive from faculty, so it carries a relatively low weighting of just 4.5 per cent.

We also look at the ratio of PhD to bachelor’s degrees awarded, to give a sense of how knowledge-intensive the environment is, as well as the number of doctorates awarded, scaled for size, to indicate how committed institutions are to nurturing the next generation of academics and providing strong supervision.

The last of our teaching indicators is a simple measure of institutional income scaled against academic staff numbers. This figure, adjusted for purchasing-power parity so that all nations compete on a level playing field, gives a broad sense of the general infrastructure and facilities available.
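As a rough illustration of how such indicators might be built up and combined, the sketch below converts institutional income with a purchasing-power-parity factor, scales it by staff numbers, and then sums five teaching-environment scores under fixed weightings. Only the 4.5 per cent staff-to-student weight and the 30 per cent collective teaching weight come from this article; the PPP factor, the 0-100 rescaling and the split of the remaining weight are hypothetical assumptions.

```python
def ppp_income_per_staff(income_local, staff_count, ppp_factor):
    """Institutional income per academic staff member, converted with a
    purchasing-power-parity factor so countries with different price levels
    can be compared on a level playing field (factor values are hypothetical)."""
    return (income_local / ppp_factor) / staff_count

def rescale(value, low, high):
    """Map a raw indicator value onto a 0-100 scale (assumed min-max rescaling;
    the rankings' real standardisation may differ)."""
    return 100 * (value - low) / (high - low)

# Hypothetical teaching-environment indicator scores for one institution, each already on 0-100.
indicators = {
    "teaching_reputation": 72.0,
    "staff_student_ratio": 61.0,
    "phd_bachelor_ratio": 55.0,
    "phds_awarded_per_staff": 48.0,
    "income_per_staff": 66.0,
}

# Weights sum to 0.30, the collective teaching weight; apart from the published
# 4.5 per cent for the staff-to-student ratio, the split is an illustrative assumption.
weights = {
    "teaching_reputation": 0.15,
    "staff_student_ratio": 0.045,
    "phd_bachelor_ratio": 0.0225,
    "phds_awarded_per_staff": 0.06,
    "income_per_staff": 0.0225,
}

teaching_score = sum(indicators[k] * weights[k] for k in indicators)
print(round(teaching_score, 2))  # ~19.15 out of a maximum possible 30
```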

We are proud of our new and improved rankings, but will continue to engage with our critics and to take expert advice on further methodological modifications and innovations. That is why I am delighted to be presenting and debating the issues at the ACU’s Conference of Executive Heads in Hong Kong.
