Rankings 09: Measures matter

Rankings are here to stay - the challenge is to make them more accurate and useful, argue Jamil Salmi and Roberta Malee Bassett

October 8, 2009

"Things which are perceived to be real will be real in their consequences." William I. Thomas

The proliferation of tertiary league tables and university rankings exemplifies William Thomas' maxim about the power of perceived legitimacy. Two innovations propelled rankings into household concepts and, indeed, tools for decision-making: the 1983 debut of the US News & World Report Best Colleges ranking; and the 2003 launch of the Shanghai Jiao Tong University (SJTU) Academic Ranking of World Universities.

The US News ranking emerged in conjunction with the "massification" of US higher education. As greater numbers of students were researching tertiary opportunities, the US News ranking provided them with accessible, targeted information. The SJTU ranking took this idea to another level, expanding the comparison beyond one nation's borders. US News took the discussion from local to national, and SJTU took it global. When Times Higher Education, with QS, introduced its own World University Rankings in 2004, it was clear that rankings were here to stay and were making the world of tertiary education simultaneously larger and smaller.

University league tables proliferated and grew in power and prominence amid a period of ferment in higher education. They emerged from a drive for greater accountability as, from the 1980s, enrolments began to rise globally. At the same time, public spending on higher education attracted ever more scrutiny, from Ronald Reagan in the US, Margaret Thatcher in the UK and, eventually, many others. In an environment of rising demand and intensifying competition for resources, ostensibly quantitative, data-driven tools such as rankings met institutions' need for evidence of their relevance.


Despite the many controversies surrounding rankings, their popularity is undeniable. The concept of institutional rankings has spread globally: Maclean's in Canada, Asiaweek, La Repubblica in Italy, The Times Good University Guide in the UK and Excelencia in Spain are just a few examples. Rankings and league tables share several characteristics: the use of a set of weighted indicators; a rank order that implies hierarchical differences; the identification of a specific unit of comparison (institution or programme, for example); and reliance on reputational inputs drawn from stakeholder surveys, which adds subjectivity to a seemingly objective exercise.

Often overlooked in the debate about rankings is the fact that the methodologies underpinning individual rankings shift and evolve from year to year for many, arguably legitimate, reasons, often altering the outcomes in dramatic ways. The consumers of rankings - students, politicians, university leaders and researchers - are, however, often unaware of these changes. A dramatic year-on-year shift in outcomes should prompt greater appreciation of the power of methodological alterations, not the belief that institutional quality can change dramatically between years. For this simple reason, the impact of methodology on outcomes ought to be a primary concern for ranking professionals.
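To make the mechanics concrete, consider a toy example. The short Python sketch below is illustrative only: the institutions, indicator scores and weights are all invented, and no real ranking's methodology is implied. It shows how a composite score is built from weighted indicators, and how a change in the weights alone, with the underlying data untouched, can reverse the rank order.

```python
# Illustrative only: invented institutions, scores and weights.
# A composite ranking is a weighted sum of normalised indicators;
# changing the weights between editions can reorder institutions
# even though the underlying indicator data are identical.

INDICATORS = ["research", "teaching", "reputation"]

# Hypothetical normalised scores (0-100).
SCORES = {
    "University A": {"research": 90, "teaching": 60, "reputation": 70},
    "University B": {"research": 70, "teaching": 85, "reputation": 75},
    "University C": {"research": 80, "teaching": 75, "reputation": 70},
}

def rank(weights):
    """Order institutions by weighted composite score, best first."""
    composite = {
        name: sum(weights[ind] * vals[ind] for ind in INDICATORS)
        for name, vals in SCORES.items()
    }
    return sorted(composite.items(), key=lambda kv: kv[1], reverse=True)

# "Year 1" methodology: research-heavy weighting.
print(rank({"research": 0.5, "teaching": 0.3, "reputation": 0.2}))
# -> University A (77.0), University C (76.5), University B (75.5)

# "Year 2" methodology: weight shifted towards teaching.
print(rank({"research": 0.3, "teaching": 0.5, "reputation": 0.2}))
# -> University B (78.5), University C (75.5), University A (71.0)
```

Nothing about the institutions changes between the two runs; only the weights do. That is precisely the distinction that year-on-year consumers of rankings routinely miss.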


Things can only get better

As acceptance - begrudging or otherwise - of rankings has settled into the tertiary education environment, the debate has moved on to how to improve their methodology to provide more useful and legitimate data on which to base well-informed decisions. Several innovations promise to bring a different flavour to the rankings menu. These include the German CHE-HochschulRanking, which does not actually rank institutions but allows users to build their own comparisons; rankings based on actual learning outcomes, such as the Organisation for Economic Co-operation and Development's planned Assessment of Higher Education Learning Outcomes (AHELO); the Lisbon Council's ranking of 17 OECD-country university systems; and our own efforts within the World Bank to develop a benchmarking tool that would provide access to key results and input indicators based on comparable data.

It is imperative that those who produce the rankings continue to create and refine user-friendly mechanisms for reliable comparisons across institutions and systems. Equally, it falls to consumers of rankings to question and examine the information presented to them. Expanded critical examination of ranking methodology and interpretation by academics, consumers and policymakers should, as it has in their brief history thus far, contribute to their continual improvement as information and guidance instruments for their many stakeholders. This is good news for both the producers and the consumers of league tables.

