An assessment tool to be used with care

Rankings are widely used and critiqued, says Ellen Hazelkorn, but they create a profile institutions want

October 1, 2014

Over the past decade, much has been written about the rankings phenomenon. Its arrival coincided with the intensification of competition in the global economy and increasing cross-border student mobility. Rankings have also filled an information deficit.

Although there are various evaluation and benchmarking instruments around, there has been growing dissatisfaction with their robustness. By placing higher education institutions within a wider international and comparative framework, rankings have managed to say something about quality in a simple, accessible and provocative way.

Rankings are widely consumed, but they are also broadly critiqued. Yet regardless of criticism, the key message is that more attention needs to be paid to how we assess and compare institutions and what these assessments mean in terms of global competition and the world order. The complexity of these issues helps to explain why rankings remain such a powerful influence on policymakers and higher education.

The key findings from a global survey I conducted this year show a continuation of trends I previously identified in 2006: although students remain the primary audience for the rankings, university leaders believe their importance for government and public opinion is rising. Even Moody’s and Standard & Poor’s use rankings to validate university creditworthiness.

Overall, the trend is for higher education leaders to desire a much higher institutional rank, nationally and internationally, than they currently hold. The statistical impossibility and financial cost of everyone achieving this goal have not stopped a number of institutions (as well as ministers and policymakers) worldwide from proclaiming a particular ranking position as a strategic ambition.

Thus, more higher education leaders desire to be in the top 5 per cent globally today, whereas they might have been content with the top 25 per cent in 2006. While global rankings dominate the agenda, doing well nationally is also important because this is more likely to affect domestic matters such as resource allocation.

Universities have enhanced their institutional research capabilities. The overwhelming majority I surveyed have formal internal mechanisms to review their rank, usually led by the vice-chancellor, president or rector. Rankings are used, inter alia, to inform strategic decisions, set targets or shape priorities; revise policies and resource allocation; prioritise research areas; change recruitment, promotional or student entry criteria; create, close or merge departments or programmes; and/or merge with other institutions or research institutes.

A growing number use rankings to inform decisions about international partnerships or monitor the performance of peer institutions at home and abroad. Conversely, the league tables influence the willingness of other institutions to partner with them and support their membership of academic or professional organisations.

Despite the criticism, the majority of institutions surveyed continue to believe that rankings are more of a help than a hindrance to their institutional reputation. This apparent contradiction arises from the view that being ranked – almost regardless of position – is vital. In the global marketplace, rankings bring visibility.

This is essential because of the growing percentage of undergraduates and postgraduates who have a high interest in the rankings. High-achieving and wealthier students are most likely to make choices based on them. Likewise, international students continue to rate reputation and ranking positions as key determinants in their choice of institution, programme and country, more than, say, institutional websites.

There is little doubt that universities, policymakers, students and other stakeholders have responded – rationally and irrationally – to rankings’ reputed benefits. The league tables’ legacy is manifest in how they have come to define perceptions of quality. Concerns about institutional success and national competitiveness have encouraged an overemphasis on the performance of individual universities in the (mis)belief that national performance is simply the aggregate of “world-class” universities.

Although there are different pathways, the overarching policy paradigm is to create a more hierarchical and differentiated system based on reputation, in which resources are concentrated in a few elite universities that mimic the characteristics of the top 100.

How much attention should universities and governments pay to rankings? Here are Ellen Hazelkorn’s dos and don’ts.

Do

  • Ensure your higher education system/institution has a coherent strategy/mission aligned with national values and objectives
  • Use rankings only as part of an overall quality assurance, assessment or benchmarking system – and never as a stand-alone evaluation tool
  • Be accountable and provide good-quality public information about higher education’s contribution and benefit to society
  • Engage in an information campaign to broaden media and public understanding of the limitations of the rankings.

Don’t

  • Seek to change your institution’s mission or national strategy in order to conform to the rankings
  • Use rankings to inform policy or resource-allocation decisions
  • Direct resources to a few elite universities and neglect the needs of the wider higher education sector and society
  • Manipulate public information and data in order to rise in the tables.

Ellen Hazelkorn, director, Higher Education Policy Research Unit, Dublin Institute of Technology, policy adviser to the Republic of Ireland’s Higher Education Authority, and author of Rankings and the Reshaping of Higher Education: The Battle for World-Class Excellence
