‘Bizarre’ TEF metrics overlook so much about teaching excellence

There are problems with the selection of teaching excellence framework metrics, argues Paul Ashwin

June 7, 2016

How do we develop valid metrics of teaching excellence? This is the question that the Higher Education White Paper tries to answer through the teaching excellence framework (TEF).

It is a difficult question because metrics, in the form of university rankings, usually tell us more about the history and prestige of universities than about the quality of teaching that students experience.

The reason it is such an important question, rather than simply a difficult one, is that the ways in which quality is measured end up defining quality. In much the same way as students focus on the aspects of their curricula that are assessed, universities will focus on those elements of the TEF that are formally measured.


The White Paper and its associated technical consultation outline the metrics that will be used in year two of the TEF. These will be students’ views of teaching, assessment and academic support from the National Student Survey; drop-out rates; and rates of employment and further study from the Destinations of Leavers from Higher Education (DLHE) survey.

On the face of it there is some logic to these. It is difficult to imagine an excellent course in which the students think that the teaching, support and assessment are hopeless; a large proportion of the students leave; and hardly anyone gets a job or a place on a postgraduate course at the end of it.

The way the metrics will be used is also, in my opinion, a significant improvement on university rankings. It will take account of differences in student intake and thus address the problem of prestigious institutions being rewarded simply for selecting already privileged students rather than for the quality of their teaching. Similarly, the commitment to flag statistically significant differences is a vast improvement on university rankings, in which differences of tens of places are usually meaningless.

However, there are problems with the process and the selection of metrics that has been made.

Given that these metrics will end up defining quality for the sector, one would expect their selection to be a matter of fierce and public debate. But the process has been extremely opaque. The commissioned Office for National Statistics report makes it clear that the ONS was told which metrics to evaluate. This selection was apparently underpinned by two reports, neither of which makes any recommendations as to which metrics should be used.

The obvious implication is that these metrics were selected because the data are available to produce them. There is also no mechanism outlined for a sector-wide discussion of the development of future metrics. This suggests that the selection of metrics is likely to remain a "technical" matter, in which technical is a synonym for "behind firmly closed doors".

The suggestions about which metrics will be developed in the future are also worrying.

There is a strong hint that a metric on "teaching intensity" will be developed, yet years of research have shown that the number of hours students are taught does not directly relate to the quality of what they learn. Meanwhile, factors that we know are good indicators of teaching excellence are not even mentioned.

For example, it is bizarre that we have purported measures of teaching excellence that tell us nothing about the expertise of those who teach or about how successfully students gain access to knowledge. Similarly, we know that students’ first-year experience is crucial in shaping what they gain from their engagement in higher education, and yet the focus of the proposed metrics is mainly on students’ experiences in their final year and after graduation.

These problems would be more serious were it not that, as the ONS report suggests, the metrics are unlikely to play the greatest role in shaping TEF judgements. The report is very clear that the differences between institutions’ scores on the selected metrics tend to be small and not statistically significant. This means that the majority of the weight of judgement about which institutions will be labelled "Meets Expectations" (or "bog standard", in politicians’ parlance), "Excellent" and "Outstanding" will rest on the 15 pages of additional evidence provided by institutions.

Thus an exercise that was heralded as being metrics-driven will in fact be decided by peer review.

The ONS report also strongly hints that the data underpinning the metrics will not be robust enough to support a future subject-level TEF. This is another serious blow to the TEF’s claim to provide valid information to prospective students, because the quality of programmes within a single university tends to vary as much as the quality of programmes between institutions.

It will also mean that students’ fees are not directly related to the quality of the course they are studying.

The final irony is that while a central rationale for the TEF is that students have the right to know where excellence exists, the technical consultation implies that the Department for Business, Innovation and Skills already knows the proportions in which excellent, outstanding and bog standard teaching exists across the sector. Given this Jedi-like ability to sense the presence of excellence, perhaps there is no need for the TEF after all.

Paul Ashwin is professor of higher education in the Department of Educational Research at Lancaster University, and a researcher in the Centre for Global Higher Education.
