The World Reputation Rankings 2025 will be published on 18 February.
The Times Higher Education World Reputation Rankings are constructed using data collected from the world’s largest invitation-only academic opinion survey – a unique piece of research.
The methodology for this year’s ranking has been updated to broaden the range of methods used to assess the reputation of academic institutions.
Previously, vote counts from the THE Global Academic Reputation Survey were the sole method for determining ranking position. As the subject of reputation gains a wider audience among the academic community, this year’s ranking includes significant updates to previous iterations.
Three pillars, formed from six underlying performance indicators, assess reputation for both research and teaching.
The full methodology is published in the file at the bottom of this page.
Key criteria for Times Higher Education World Reputation Rankings 2025
Three core pillars of evaluation
- Vote counts: evaluates the number of votes received for research and teaching
- Pairwise comparison: invites voters to consider a wider range of institutions
- Voter diversity: rewards universities that receive votes from a wide range of territories and subject areas
Indicator weightings for World Reputation Rankings 2025
Performance indicator breakdown
The indicator weightings for the World Reputation Rankings 2025 are displayed in the table below.
| Pillar | Indicator | Weighting (%) |
| --- | --- | --- |
| Vote count | Research vote count | 30 |
| Vote count | Teaching vote count | 30 |
| Pairwise comparison | Research pairwise comparison | 10 |
| Pairwise comparison | Teaching pairwise comparison | 10 |
| Voter diversity | Research voter diversity | 10 |
| Voter diversity | Teaching voter diversity | 10 |
Performance indicator definitions
Vote counts
This is the core method of determining performance, employed in the reputation ranking since its launch. Vote counts continue this year, but with a modification to the scoring function.
Previously the score was derived as the proportion of votes that the top institution received. Because of the nature of the underlying distribution, this meant that scores attenuated rapidly such that most universities in the ranking had very low scores. This year we move to a cumulative scoring function.
While this will not fully alleviate the sharp drop-off in vote scores, it does flatten the score curve and allow more meaningful comparisons both within this year and year-on-year for future iterations of the ranking. It will also mean that the scoring for reputation uses a method similar to that used for the World University Rankings, ensuring consistency across different THE rankings.
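The contrast between the two scoring functions can be sketched in a few lines of Python. The function names, the example vote distribution and the exact form of the cumulative function are assumptions for illustration; the definitive formulas are in the published methodology file.

```python
def proportional_score(votes: int, top_votes: int) -> float:
    # Previous approach (assumed): score as a share of the leader's votes.
    return 100.0 * votes / top_votes

def cumulative_score(votes: int, all_votes: list[int]) -> float:
    # Cumulative approach (assumed): the fraction of ranked institutions
    # with a vote count at or below this one.
    return 100.0 * sum(1 for v in all_votes if v <= votes) / len(all_votes)

# A heavy-tailed vote distribution, typical of reputation surveys.
counts = [1000, 400, 150, 60, 25, 10]
proportional = [round(proportional_score(v, max(counts)), 1) for v in counts]
cumulative = [round(cumulative_score(v, counts), 1) for v in counts]
# Proportional scores collapse towards zero almost immediately; cumulative
# scores fall off gradually, flattening the curve as described above.
```

With this toy data the proportional scores are 100, 40, 15, 6, 2.5 and 1, while the cumulative scores step down evenly from 100 to 16.7, which is the flattening effect the new function is intended to deliver.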
Pairwise comparison
The vote count method above allows respondents to select any university through a text search box, such that invitees can freely vote for any university that comes to mind. Historically, however, this has meant that the “super-brands” (which are persistent in terms of strong reputational performance) dominate the results data.
In pairwise comparison, universities are preselected and respondents place these in order from 1 to 5. This method can be used to encourage voters to consider those institutions that are not in the super-brands that dominate the top of the ranking. While this doesn’t mean that suddenly voters will stop rating the top institutions so highly, it does ensure that each respondent is forced to consider certain institutions that are further down the ranking scale.
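One simple way to aggregate such ranked ballots is to treat each ordered pair within a ballot as a pairwise comparison and compute win rates. This is an illustrative sketch only; THE's actual aggregation method is not stated here, and all names below are hypothetical.

```python
from collections import defaultdict
from itertools import combinations

def pairwise_win_rates(ballots: list[list[str]]) -> dict[str, float]:
    """Turn ranked ballots (best first) into pairwise win rates.
    The simple win-rate aggregation is an illustrative assumption."""
    wins: dict[str, int] = defaultdict(int)
    comparisons: dict[str, int] = defaultdict(int)
    for ballot in ballots:
        # Each ordered pair in a ballot is one comparison, won by the
        # university the respondent ranked higher.
        for winner, loser in combinations(ballot, 2):
            wins[winner] += 1
            comparisons[winner] += 1
            comparisons[loser] += 1
    return {u: wins[u] / comparisons[u] for u in comparisons}

# Two respondents each rank the same preselected set of five.
ballots = [["A", "B", "C", "D", "E"], ["B", "A", "C", "E", "D"]]
rates = pairwise_win_rates(ballots)
```

Because every respondent must place all five preselected universities, even the lower-ranked ones accumulate comparison data, which is exactly what this method is designed to ensure.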
Voter diversity
For universities with similar numbers of votes, what additional measures can we employ to assess reputational performance? In this metric we examine voter diversity, working on the view that institutions with wide respondent bases have a stronger reputation than those without.
In this metric, an institution with votes coming from a wide range of countries and territories (and subject areas) is deemed to have a more robust reputation than one where votes originate from a small number of countries and/or subjects. This measure provides an additional way for universities to differentiate themselves from others, not just on how many votes they receive but on the composition of their respondent base.
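One common way to quantify the breadth of a respondent base is Shannon entropy over the countries (or subjects) the votes come from. Entropy is an assumed proxy here, chosen for illustration; THE's actual diversity indicator is defined in the published methodology.

```python
import math
from collections import Counter

def voter_diversity(vote_origins: list[str]) -> float:
    """Shannon entropy of the countries/territories a university's
    votes come from: zero when all votes share one origin, maximal
    when votes are spread evenly. An assumed proxy, not THE's metric."""
    counts = Counter(vote_origins)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# Same number of votes, very different respondent bases.
narrow = ["US"] * 10
broad = ["US", "UK", "DE", "JP", "BR"] * 2
```

Here the narrow base scores zero while the evenly spread base scores ln(5) ≈ 1.61, separating two universities that a raw vote count would treat identically.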
Data collection
The THE Global Academic Reputation Survey, available in 13 languages, was sent to a sample of academics selected by THE, in which we asked them to nominate the universities that they perceive to be the best for research and/or teaching in their field.
The questionnaire, which is run in-house by THE, targets only experienced, published scholars, who offer their views on excellence in research and teaching within their disciplines and at institutions with which they are familiar.
Academics were asked to nominate up to 15 institutions for research and up to 15 institutions for teaching. Respondents were also asked to rank a set of five universities, whose names were supplied to each respondent based on their research history. There were also questions about a respondent’s demographics, such as their area of specialisation and the country or territory in which they are based.
The most recent Global Academic Reputation Survey (run annually by THE) was carried out between November 2023 and January 2024 and received more than 55,000 responses.
We have run the survey to ensure a balanced spread of responses across disciplines and countries. Where disciplines or countries were over- or under-represented, THE’s data team weighted the responses to ensure the results fully reflect the global distribution of scholars, using data from Unesco.
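Post-stratification weighting of this kind can be sketched as follows. The mechanics shown (scaling each group by target share over observed share) are an illustrative assumption, not THE's published procedure, and the 50/50 target is invented example data.

```python
from collections import Counter

def response_weights(groups: list[str],
                     target_share: dict[str, float]) -> dict[str, float]:
    """Weight each group's responses so that weighted shares match a
    target distribution (e.g. the global spread of scholars from Unesco
    data). Illustrative assumption, not THE's published procedure."""
    observed = Counter(groups)
    n = len(groups)
    # Weight = target share / observed share for each group.
    return {g: target_share[g] / (observed[g] / n) for g in observed}

# Physics is over-represented among respondents relative to a 50/50 target.
respondents = ["physics"] * 60 + ["history"] * 40
weights = response_weights(respondents, {"physics": 0.5, "history": 0.5})
```

Each physics response is down-weighted (to 5/6) and each history response up-weighted (to 1.25), so the two fields contribute equally to the weighted totals.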
The survey data will be used alongside 16 indicators to help create the THE World University Rankings 2026, to be unveiled in October 2025.
Self-voting and voter concentration
In 2023 we introduced a self-voting cap. This limits the self-vote share to a maximum of 10 per cent of the total votes for any given university. Self-votes are still allowed and are included, but are weighted down in much the same way as we apply country and subject weightings. The majority of ranked institutions are unaffected by this adjustment.
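The mechanics of such a cap can be sketched as follows. The 10 per cent figure comes from the text above; the specific scaling used here is an assumption for illustration.

```python
def capped_self_votes(self_votes: float, external_votes: float,
                      cap: float = 0.10) -> float:
    """Down-weight self-votes so they make up at most `cap` of a
    university's weighted total. The cap value is from the article;
    the scaling mechanics are an assumption."""
    total = self_votes + external_votes
    if self_votes <= cap * total:
        return self_votes  # already under the cap: unaffected
    # Solve w*s = cap * (w*s + e) for the weight w applied to self-votes.
    weight = cap * external_votes / (self_votes * (1 - cap))
    return weight * self_votes
```

For example, a university with 50 self-votes and 90 external votes would see its self-votes diluted to 10, exactly 10 per cent of its new 100-vote weighted total, while one with 5 self-votes out of 100 is left untouched.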
While employing a self-voting cap addresses intra-university voting, it won’t deal with arranged voting relationships between institutions. This year we have implemented an additional measure where we look at vote concentration to help deal with any potential cases of this issue.
When we look at the number of different institutions that vote for a particular university, we see that generally universities have a broad range of respondents. However, should any institutions be part of a closed ring, this would be reflected in a much narrower spread of voters. This is represented by a high number of votes-per-respondent-institution (VPRI) for a given university.
When this happens, we can set a maximum threshold value for VPRI and adjust vote weights accordingly, in much the same way we dilute votes for the self-voting adjustment above. This treatment is applied fairly across the entire survey dataset, and our analysis shows that this affects only a very small number of universities.
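A minimal sketch of the VPRI adjustment described above, assuming a uniform dilution once the threshold is exceeded; the threshold value of 5 and the scaling rule are both invented for illustration.

```python
def vpri_adjusted_total(votes_by_source: dict[str, int],
                        max_vpri: float = 5.0) -> float:
    """Total votes for a university, diluted when its
    votes-per-respondent-institution (VPRI) exceeds a threshold.
    The threshold value and uniform scaling are assumptions."""
    total = sum(votes_by_source.values())
    vpri = total / len(votes_by_source)
    if vpri <= max_vpri:
        return float(total)
    # Scale every vote down so that VPRI lands exactly on the threshold.
    return total * (max_vpri / vpri)

# A broad respondent base passes through unchanged...
broad = {"A": 2, "B": 3, "C": 1}
# ...while a narrow "ring" of two heavy-voting institutions is diluted.
ring = {"X": 40, "Y": 35}
```

In this toy example the broad base keeps all 6 of its votes, while the two-institution ring sees its 75 votes diluted to 10, removing most of the benefit of a closed voting arrangement.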
Rankings table
A total of 300 universities are ranked, up from 200 in the previous edition.
Precise ranks and overall scores are shown for the institutions ranked in the top 100. The subsequent institutions are assigned to the following bands: 101-150, 151-200 and 201-300. Precise individual pillar scores are displayed for each ranked institution.