What do the results of the teaching excellence framework (TEF), published this week, tell us about teaching excellence in UK higher education? From my perspective as chair of the TEF panel, they prompt several considerations. Some bust long-running myths, and others encourage reflection about how teaching can be further developed.
But before I get to this, let’s just reflect on where we have got to. For the first time, we have an overview of teaching excellence that is comparable to the established approach for research. This goes a long way to highlighting the importance of teaching – which provides the majority of income to every university in the country. More important still, it clearly demonstrates the maturity, confidence and excellence of UK teaching provision.
The TEF results offer insights into the student experience and student outcomes in 231 institutions of all types: multi-faculty universities, specialist institutions, further education colleges offering higher education and alternative providers. This is an achievement in itself, and amounts to a compendium of evidence on teaching more detailed than that on any other university system in the world.
Student and staff assessors from across the sector have evaluated the data, and the ratings – gold, silver or bronze – were agreed by a panel of senior academics, with student representation and advice from employers and widening participation experts. Chairing this process has been an extraordinary privilege.
The TEF is rooted in a set of six core metrics, providing data on teaching and academic support, retention and non-continuation, and progression to employment. No one has ever claimed that these are the only things worth measuring about higher education, and there have been criticisms of some of the underlying data collection points. But it’s important to remember that the metrics do measure things that matter a lot to students. The approach will be refined – but it is, in essence, right to use metrics in this way.
The metrics were benchmarked for each institution to account for the characteristics of its students and provision. This allowed outcomes to be judged against statistical expectations. Beyond the core metrics, the statistical data were broken down by student characteristics such as ethnicity, disability, gender and age. Without this benchmarking, it would have been impossible to assess an arts college alongside an agricultural college, or a multi-faculty university alongside a further education college, and thereby to provide ratings specific to the students each one is teaching.
The TEF is a metrics-led, not a metrics-determined, assessment. The metrics were positioned alongside a 15-page submission from the provider being assessed. These submissions were critical: they provided additional context and explained the policies, practices and culture supporting excellence. The assessors and panel were therefore able to judge internal coherence, clarity of analysis, engagement with the metrics, degree of strategic focus and evaluation of impact.
Headline analysis of the TEF results will inevitably focus on which institutions got which rating and on the pattern of results across different parts of the sector. The ratings have been controversial both in the sector and in parliamentary debate. I am realistic about that, but we do need to remember two things. First, there is no sense in which any TEF ratings constitute “failure”: the awards go above and beyond the already stringent baseline standards for quality in UK higher education. Second, there is evidence from early testing with students that the simple three-tier ratings structure will be welcomed. The experience of judging the ratings will feed into the promised review of the TEF, and it may be that there is some reshaping.
However, the TEF is not, and should not simply be, about the headline results. Deeper analysis casts considerable light on the UK sector. First of all, we can lay to rest the myth that universities do not take their students or their teaching sufficiently seriously. The submissions published alongside the results demonstrate an exceptionally vigorous culture of teaching, kaleidoscopic in its variety.
The TEF panel is unanimous in its view that the exercise has overwhelmingly demonstrated two things about teaching quality. First, there is no single route to, or template for, excellence. Institutions with a gold rating come from all parts of the sector, with different missions and approaches. Their practices do, however, have some common and compelling characteristics. It is clear that the very best provision genuinely engages students. It takes their interests, needs, aspirations and trajectories seriously and sees them as real partners in the development of teaching, going well beyond formal student representation.
Similarly, a particularly vibrant feature of UK higher education is its engagement with employers. This is worth saying given the contrary stereotype that often circulates about universities and colleges: the extent to which employers are engaged with teaching development and the range of work-related learning on offer at UK universities are genuinely impressive.
Gold institutions have not only clarified their mission and goals with precision, but have put in place arrangements to realise that mission at all levels. They understand their students in great depth, and use that understanding to shape policy. They ensure that institutional practices work effectively to secure outstanding outcomes for all groups of students. They have a strategic focus on innovation, ensuring that interventions are evidentially grounded and rigorously evaluated. They treat the often profound challenges of their contexts as opportunities to innovate. Above all, they are not complacent about their successes.
Second, the panel was clear that “seams of gold” can be found in many silver and bronze providers. A consequence of the three-level rating system is, perhaps, a focus on the differences between ratings rather than the similarities across them. Universities and colleges are complex organisations, many of them educating tens of thousands of students in a wide range of subjects, full-time and part-time, face to face and at a distance, all the while deploying sophisticated learning technologies. In many cases, the differences between ratings reflected degrees of strategic coherence and embeddedness rather than sharp differences in practice.
However, the TEF outcomes also highlight areas where we need to do some hard thinking. One is part-time learning. Almost all institutions have some part-time provision, but the outcomes for part-time students are often less clearly captured by the metrics. If we want to encourage higher education to shape itself around the needs of part-time learners, allowing them to access higher education in ways that work for them, we will need to look harder at how we understand their outcomes.
Learning analytics are developing rapidly across the sector. However, the TEF submissions suggest that capacity to run such analytics is more developed than the ability to make practical use of the data. Analytics work most effectively where they genuinely relate to learning and enable strategic, focused interventions to support student success.
Finally, the TEF results give greater weight to long-standing issues relating to the variable achievement of different groups of students. Universities and colleges have invested heavily in widening access, with striking success, but they need to make more progress on ensuring that non-traditional students are successful once they are admitted. While the best submissions demonstrated achievement across all groups of students, there is some distance to go in some institutions to understand and address disparities in performance.
In this respect, higher education is behind the schools sector, which has made massive strides in the past decade in this area. No one should conclude from the TEF that widening participation runs risks for student outcomes, but universities need to supplement their access success by addressing ingrained gaps in achievement.
This has always been described as a trial year for the TEF. As chair, I am perhaps more aware than most that the exercise has its critics. But let’s reiterate the gains: the TEF has shone a light on provision across the sector and identified genuinely exceptional performance. It has highlighted areas where we can and must do better. It has raised the profile of teaching, and sharpened universities’ focus on student outcomes. And it has given prospective students another valuable source of information to guide their decision-making. All of these things are worth having – and we did not have them before the TEF.
Chris Husbands is vice-chancellor of Sheffield Hallam University and chair of the teaching excellence framework.
Print headline: The TEF is not perfect but consider just what an advancement it is