With the teaching excellence framework (TEF) results now released for all the world to see, the inquest within universities will begin in earnest.
Many of those marked bronze will want to know where they went wrong; the universities with silver will be keen to understand whether they were close to gold; and those at the top of the tree will want to hold on tight to their rating.
Central to their initial understanding will be how each institution performed on the six core metrics underpinning the TEF, on which the assessors' initial decisions about universities were based.
From there, the assessors would have looked at contextual information like performance in metrics for different social groups and a university’s written submission. But the core metrics give a first flavour of how institutions in the same award category compare.
Times Higher Education has run the numbers on these metrics, and below is a (sortable) table of institutions ranked first by final TEF award and then by performance in the metrics (see below the table for a fuller explanation of the methodology).
Source: Hefce/Analysis by THE. Note: Table is based on core metrics for an institution's majority provision (full-time or part-time). Three of BPP University's metrics did not return a flag as there were not enough data to form a benchmark. Table updated on August 25 to reflect an upgraded award for the University of East Anglia.
In each metric (teaching on my course, assessment and feedback, academic support, non-continuation, graduate employment or further study and highly skilled employment or further study), institutions were given a flag if they performed significantly well or badly (in a statistical sense) compared with a benchmark value.
An institution could have achieved one of five different flags in each metric: ++ (if they performed particularly well against the benchmark), + (if they performed well), = (if there was no statistically significant difference from the benchmark), - (if they performed badly against the benchmark) and -- (if they performed particularly badly).
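To make the flagging idea concrete, here is a minimal Python sketch of how a flag could be derived from a metric's Z-score. The 1.96 and 3.0 cut-offs are illustrative assumptions only; the TEF's published rules are more involved (they also take account of the size of the difference from the benchmark), so this shows the shape of the logic rather than the actual methodology.

```python
# Illustrative only: derive a flag from a z-score using assumed cut-offs.
# The TEF's real flagging rules differ; 1.96 and 3.0 are placeholder thresholds.
def flag_from_z(z: float) -> str:
    if z >= 3.0:
        return "++"   # performed particularly well against the benchmark
    if z >= 1.96:
        return "+"    # performed well
    if z <= -3.0:
        return "--"   # performed particularly badly
    if z <= -1.96:
        return "-"    # performed badly
    return "="        # no statistically significant difference
```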
THE counted the number of times an institution achieved each type of flag (six being the maximum) and then sorted the final table by TEF award, then by flag performance and finally by average Z-score across the six metrics. A Z-score expresses how far, in statistical terms, an institution's performance in a particular metric deviated from its benchmark.
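As a rough illustration of that ordering, the Python sketch below ranks a couple of invented institutions by award, then by flag counts, then by mean Z-score. The record layout, award ordering and exact tie-breaking within "flag performance" are assumptions for illustration, not THE's actual code.

```python
# Hypothetical sketch of the table's ordering: award, then flags, then mean z.
AWARD_ORDER = {"Gold": 0, "Silver": 1, "Bronze": 2}

institutions = [
    {"name": "University A", "award": "Silver",
     "flags": ["+", "=", "=", "+", "-", "="],
     "z": [2.1, 0.3, -0.5, 2.4, -2.0, 0.8]},
    {"name": "University B", "award": "Silver",
     "flags": ["++", "+", "=", "=", "=", "="],
     "z": [3.2, 2.0, 0.1, 0.6, 1.0, -0.2]},
]

def rank_key(inst):
    # More (and stronger) positive flags rank higher; ties broken by mean z.
    double_plus = inst["flags"].count("++")
    single_plus = inst["flags"].count("+")
    mean_z = sum(inst["z"]) / len(inst["z"])
    return (AWARD_ORDER[inst["award"]], -double_plus, -single_plus, -mean_z)

for inst in sorted(institutions, key=rank_key):
    print(inst["name"])
```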
Who were the movers?
Thanks to work by THE data scientist Billy Wong, the analysis also makes it easy to identify which institutions would have been given an initial TEF assessment, based on the core metrics, that differed from the final outcome.
According to the official TEF guidance document, "when looking at the delivery mode in which providers teach the most students", an institution "with three or more positive flags (either + or ++) and no negative flags (either - or --) should be considered initially as Gold" while an institution "with two or more negative flags should be considered initially as Bronze, regardless of the number of positive flags". All other institutions were initially to be considered as Silver.
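Translated into code, the rule quoted above looks something like the following sketch (the function name and input format are hypothetical; the thresholds come straight from the guidance):

```python
# The initial-rating rule as quoted from the TEF guidance, expressed as a
# function. Taking flags as a list of six strings is an assumed convention.
def initial_hypothesis(flags):
    positive = sum(1 for f in flags if f in ("+", "++"))
    negative = sum(1 for f in flags if f in ("-", "--"))
    if negative >= 2:
        return "Bronze"  # two or more negative flags, regardless of positives
    if positive >= 3 and negative == 0:
        return "Gold"    # three or more positive flags and no negative flags
    return "Silver"      # everyone else starts as Silver

# Example: one negative flag rules out Gold but is not enough for Bronze.
print(initial_hypothesis(["++", "+", "+", "=", "-", "="]))  # Silver
```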
The following are institutions that, based on this test, would have been given an initial outcome different from the final decision, which also took into account contextual factors such as performance on "split metrics" (data relating to particular groups of students) and the written submission from the university.
TEF core metrics: Z-scores
Below is a full sortable table of Z-scores in each core metric by institution.
Source: Hefce. Note: Three of BPP University's metrics did not return a Z-score as there were not enough data to form a benchmark. Data in some metrics for the Royal College of Music and British School of Osteopathy are missing for data protection reasons. Table updated on August 25 to reflect an upgraded award for the University of East Anglia.