Please rank responsibly

Phil Baty reports on a declaration on world university league tables agreed in Mexico City by a consortium of Latin American university rectors

June 11, 2012

It was a rare spectacle: a senior administrator of a leading international university, speaking at a conference of peers, issued a public “thank you” to those who compile university rankings. The rankers – me included – more typically face criticism of the power and influence we wield.

But Chen Hong, director of the office of overseas promotion at China's Tsinghua University, told the World 100 Reputation Network conference in Washington in May: “We should thank those organisations who publish these indicators. At least we can find something for comparison and benchmark our own performance.”

Reflecting the approach that my magazine, Times Higher Education, has taken to disaggregate the overall composite ranking scores in our publications, she explained: “What is useful for us is the detailed indicators within those rankings. We can find out comparable data, benchmarking various universities and use them for planning.”

Indeed, there is growing evidence that global rankings – controversial as they are – can offer real utility. But those of us who rank must also be outspoken about the abuses, not just the uses, of our output.

There is no doubt that global rankings can be misused.

It was reported recently, for example, that a $165 million Russian Global Education programme would see up to 2,000 Russian students each year offered “very generous” funding to attend institutions around the world – but that qualification for the generous scholarships will be dependent on the students attending an institution in the top 300 of the Times Higher Education World University Rankings. Brazil’s hugely ambitious Science Without Borders scholarship programme to send 100,000 Brazilian students overseas similarly links the scholarships to THE-ranked institutions.

While such schemes offer a welcome endorsement of the rigor of THE’s rankings data (provided by Thomson Reuters) and its ranking methodology, I would still suggest, speaking as the (rather flattered) editor of the THE rankings, that they are ill-advised.

Global university ranking tables are inherently crude, as they reduce universities to a single composite score. Such rigid adherence to the ranking tables risks missing the many pockets of excellence in narrower subject areas not captured by institution-wide rankings, or in areas of university performance, such as knowledge transfer, that are simply not captured well by any ranking.

One of the great strengths of global higher education is its extraordinarily rich diversity, which can never be captured by the THE World University Rankings; the rankings deliberately seek only to compare research-intensive institutions competing in a global marketplace, and they include less than 1 percent of the world’s higher education institutions.

In this context, a new declaration from a consortium of Latin American university rectors agreed in Mexico City last week must be welcomed as a sensible and helpful contribution to the rankings debate. The declaration, agreed at a two-day conference at the National Autonomous University of Mexico, titled Latin American Universities and the International Rankings: Impact, Scope and Limits, noted with concern that “a large proportion of decision makers and the public view these classification systems as offering an exhaustive and objective measure of the quality of the institutions.”

The rectors’ concern is of course well-placed – no ranking can ever be objective, as they all reflect the subjective decisions of their creators as to which indicators to use, and what weighting to give them. Those of us who rank need to work with governments and policy makers to make sure that they are as aware of what rankings do not – and can never – capture as of what they can, and to encourage them to dig deeper than the composite scores that can mask real excellence in specific fields or areas of performance. That is why I was delighted to be in Mexico City last week to join the debate.

The meeting, which drew together rectors and senior officials from 65 universities in 14 Latin American countries, issued a call to policy makers to “avoid using the results of the rankings as elements in evaluating the institution’s performance, in designing higher education policy, in determining the amount of finance for institutions and in implementing incentives and rewards for institutions and academic personnel.”

I would – to a large extent – agree. Responsibly and transparently compiled rankings like THE’s can of course have a very useful role in allowing institutions, like Tsinghua and many, many others, to benchmark their performance and help them plan their strategic direction. They can help governments to better understand some of the modern policy challenges of mass higher education in the knowledge economy, and to compare the performance of their very best research-led institutions to those of rival nations. The rankings can help industry to identify potential investment opportunities and help faculty members make career and collaboration decisions.

But they should inform decisions – never drive decisions.

The Mexico declaration said: “We understand the importance of comparisons and measurements at an international level, but we cannot sacrifice our fundamental responsibilities in order to implement superficial strategies designed to improve our standings in the rankings.”

Some institutional leaders are not as sensible as those in Latin America.

Speaking at the same Washington conference where Chen Hong gave thanks to the rankers, Pauline van der Meer Mohr, president of the executive board at Erasmus University, Rotterdam, confirmed frankly that proposals for a merger between her institution and Dutch counterparts the University of Leiden and the Delft University of Technology were “all about the rankings.”

The three Dutch institutions calculated, she explained, that merged as one, they would make the top 25 of world rankings, while separately they languish lower down the leagues. “Why would you do it if it doesn't do anything for the rankings?” she asked.

But the merger did not take place. It was dropped because of a mix of political unease, fierce alumni loyalty to the existing “brands,” and an “angry” response from research staff. Researchers at all three institutions, van der Meer Mohr admitted, had asked: “You are not going to merge universities just to play the rankings game?” To do so, they had concluded, would be “ridiculous.”

I believe that those Dutch academics were quite right.

• This article first appeared in Inside Higher Ed.
