150 Under 50 Rankings 2016 methodology

The rankings tables give less weight to reputation, a measure often associated with past glories

April 6, 2016



The Times Higher Education 150 Under 50 Rankings 2016 applies the same 13 performance indicators as the THE World University Rankings to provide the most comprehensive and balanced comparisons, which are trusted by students, academics, university leaders, industry and even governments. The performance indicators are grouped into five areas:

  • Teaching
    The learning environment
  • Research
    Volume, income and reputation
  • Citations
    Research influence
  • International outlook
    Staff, students and research
  • Industry income
    Innovation

However, to fit the 150 Under 50’s focus, reputation counts for less, as befits a study of more recent arrivals on the academic scene.

Exclusions
Universities are excluded from the 150 Under 50 if they do not teach undergraduates or if their research output amounted to fewer than 1,000 articles between 2010 and 2014 (200 a year).

Data collection
Institutions provide and sign off their institutional data for use in the rankings. On the rare occasions when a particular data point is not provided – which affects only low-weighted indicators such as industrial income – we enter a low estimate between the average value of the indicators and the lowest value reported: the 25th percentile of the other indicators. By doing this, we avoid penalising an institution too harshly with a “zero” value for data that it overlooks or does not provide, but we do not reward it for withholding them.
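A minimal sketch of the imputation rule described above: a missing data point is replaced with the 25th percentile of the values that other institutions reported for that indicator. The function name and the income figures are illustrative, not THE's actual code or data.

```python
import statistics

def impute_low_estimate(reported_values):
    """Return the 25th percentile of the values reported by other
    institutions, used as a low (but non-zero) estimate for a
    missing data point."""
    ordered = sorted(reported_values)
    # statistics.quantiles with n=4 returns the three quartiles;
    # the first element is the 25th percentile.
    return statistics.quantiles(ordered, n=4)[0]

# Illustrative industry-income values from five other institutions.
industry_income = [12.0, 30.0, 45.0, 80.0, 150.0]
print(impute_low_estimate(industry_income))
```

The imputed value sits between the lowest reported figure and the average, so a non-reporting institution is neither zeroed out nor rewarded.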

Getting to the final result
Moving from a series of specific data points to indicators, and finally to a total score for an institution, requires us to match values that represent fundamentally different data. To do this we use a standardisation approach for each indicator, and then combine the indicators in the proportions indicated below.

The standardisation approach we use is based on the distribution of data within a particular indicator, where we calculate a cumulative probability function, and evaluate where a particular institution’s indicator sits within that function.

For all indicators except for the Academic Reputation Survey we calculate the cumulative probability function using a version of Z-scoring. The distribution of the data in the Academic Reputation Survey requires us to add an exponential component.
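The Z-scoring approach above can be sketched as follows: each institution's raw value is converted to a z-score, then mapped through the cumulative normal distribution function and rescaled to 0-100. This is an illustrative reconstruction under stated assumptions, not THE's published code, and it omits the exponential component used for the reputation survey.

```python
import statistics
from math import erf, sqrt

def standardise(values):
    """Z-score each value, then evaluate the cumulative normal
    distribution at that z-score, rescaled to a 0-100 range."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    # The normal CDF can be written in terms of the error function:
    # CDF(z) = 0.5 * (1 + erf(z / sqrt(2)))
    return [50 * (1 + erf((v - mean) / (sd * sqrt(2)))) for v in values]

# Illustrative raw indicator values for five institutions.
scores = standardise([3.1, 4.7, 5.2, 6.8, 9.0])
```

Because the CDF is monotone, institutions keep their rank order, while extreme raw values are compressed towards the ends of the 0-100 scale.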

Story of the ages
For the 150 Under 50 Rankings, a university's foundation date is defined as either:

  • The year it was founded (if it was purpose-built as a university)
  • The date it attained degree-awarding powers (if it has changed status from another type of institution).

Much editorial judgement has been applied to take account of individual circumstances. Here we outline our approach to some common scenarios.

Mergers
The date of foundation is not the date of the merger or the renaming of the institution, but the date of the foundation of the dominant component in the merger. If that is unclear, we take the foundation date of the oldest component of the merger.

Demergers and spin-offs
If an institution that was once a campus or branch of a larger institution has become independent, we use the foundation date of the parent institution. If a merger is combined with an expansion to create a new institution, local feedback is used to make a judgement.

Institutions without degree-awarding powers
If the institution lacks degree-awarding powers (if it is, say, part of a larger federal system or group of universities such as the University of London in which the parent system awards degrees), then the foundation date is either: the date the institution was founded, if purpose-built as a university; or the date the first degree was awarded if it has changed from non-university status.


Teaching (the learning environment): 30%

  • Reputation survey: 10%
    This category includes the results of the Academic Reputation Survey carried out from December 2014 to January 2015. It examined the perceived prestige of institutions in teaching. The responses were statistically representative of the global academy’s geographical and subject mix.
  • Staff-to-student ratio: 6%
  • Doctorate-to-bachelor’s ratio: 3%
  • Doctorates awarded to academic staff ratio: 8%
    A high proportion of postgraduate research students suggests a commitment to nurturing the next generation of academics, as well as the provision of teaching at the highest level: teaching that attracts graduates and is effective at developing them. This indicator is normalised to take account of a university’s unique subject mix, reflecting that the volume of doctoral awards varies by discipline.
  • Institutional income: 3%
    This measure of income is scaled against staff numbers and normalised for purchasing-power parity. It indicates an institution’s general status and gives a broad sense of the infrastructure and facilities available to students and staff.

Research (volume, income and reputation): 30%

  • Reputation survey: 12%
    The most prominent indicator in this category looks at a university’s reputation for research excellence among its peers, based on the responses to our annual Academic Reputation Survey.
  • Research income: 9%
    Research income is scaled against staff numbers and normalised for purchasing-power parity. This is a controversial indicator because it can be influenced by national policy and economic circumstances. But income is crucial to the development of world-class research, and because much of it is subject to competition and judged by peer review, our experts suggested that it was a valid measure. This indicator is fully normalised to take account of each university’s distinct subject profile, reflecting the fact that research grants in science subjects are often bigger than those awarded for the highest-quality social science, arts and humanities research.
  • Research productivity: 9%
    We count the number of papers published in the academic journals indexed by Elsevier’s Scopus database per academic, scaled for a university’s total size and also normalised for subject. This gives an idea of an institution’s ability to get papers published in quality peer-reviewed journals.

Citations (research influence): 30%

Our research influence indicator looks at universities’ role in spreading new knowledge and ideas.

We examine research influence by capturing the number of times a university’s published work is cited by scholars globally. Elsevier examined more than 51 million citations to 11.3 million journal articles, published over five years. The data are drawn from the 23,000 academic journals indexed by Elsevier’s Scopus database and include all indexed journals published between 2010 and 2014. Citations to these papers made in the six years from 2010 to 2015 are also collected.

The citations help to show us how much each university is contributing to the sum of human knowledge: they tell us whose research has stood out, has been picked up and built on by other scholars and, most importantly, has been shared around the global scholarly community to expand the boundaries of our collective understanding, irrespective of discipline.

The data are fully normalised to reflect variations in citation volume between different subject areas. This means that institutions with high levels of research activity in subjects with traditionally high citation counts do not gain an unfair advantage.

This year we have removed the very small number of papers (649) with more than 1,000 authors from the citations indicator. In previous years we have further normalised citation data within countries, with the aim of reducing the disproportionate impact of English-language publications on citation measures. The change to Scopus as a data source has allowed us to reduce the level to which we do this. This year, we have blended equal measures of a country-adjusted and a non-country-adjusted (raw) measure of citation scores. This reflects a more rigorous approach to international comparison of research publications.
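The equal blend described above amounts to a simple 50/50 average of the two citation measures. The function name and scores below are hypothetical, assuming both measures are already standardised to a common 0-100 scale.

```python
def blend_citation_score(country_adjusted, raw):
    """Blend equal measures of a country-adjusted citation score and
    a non-country-adjusted (raw) citation score."""
    return 0.5 * country_adjusted + 0.5 * raw

# Illustrative scores for one institution.
print(blend_citation_score(72.0, 64.0))  # → 68.0
```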

International outlook (staff, students, research): 7.5%

  • International-to-domestic-student ratio: 2.5%
  • International-to-domestic-staff ratio: 2.5%
    The ability of a university to attract undergraduates, postgraduates and faculty from all over the planet is key to its success on the world stage.
  • Research: 2.5%
    In the third international indicator, we calculate the proportion of a university’s total research journal publications that have at least one international co-author and reward higher volumes. This indicator is normalised to account for a university’s subject mix and uses the same five-year window as the “Citations: research influence” category.

Industry income (innovation): 2.5%

A university’s ability to help industry with innovations, inventions and consultancy has become a core mission of the contemporary global academy. This category seeks to capture such knowledge transfer activity by looking at how much research income an institution earns from industry, scaled against the number of academic staff it employs.

The category suggests the extent to which businesses are willing to pay for research and a university’s ability to attract funding in the commercial marketplace – useful indicators of institutional quality.
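The five pillar weightings set out above (teaching 30%, research 30%, citations 30%, international outlook 7.5%, industry income 2.5%) combine into an overall score as a weighted sum. This is an illustrative sketch with hypothetical pillar scores, not THE's actual scoring code.

```python
# Pillar weightings for the 150 Under 50 Rankings 2016, as stated above.
WEIGHTS = {
    "teaching": 0.30,
    "research": 0.30,
    "citations": 0.30,
    "international_outlook": 0.075,
    "industry_income": 0.025,
}

def overall_score(pillar_scores):
    """Combine standardised 0-100 pillar scores into an overall score."""
    return sum(WEIGHTS[pillar] * pillar_scores[pillar] for pillar in WEIGHTS)

# Hypothetical standardised scores for one institution.
example = {
    "teaching": 60.0,
    "research": 70.0,
    "citations": 80.0,
    "international_outlook": 90.0,
    "industry_income": 50.0,
}
print(round(overall_score(example), 2))  # → 71.0
```

The weights sum to 1.0, so an institution scoring 100 on every pillar would receive an overall score of 100.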
