A university’s experience helping to pilot the subject-level TEF

Participating in the exercise was challenging but worthwhile, says Garrick Fincham

December 2, 2018

The Office for Students has just announced the universities and colleges that will participate in the subject-level teaching excellence framework’s second pilot year – and the University of East Anglia is on the list. After taking part in the first year, we did not have to think for long about whether to do it again. Being on the TEF subject-level pilot was hard work, but well worth it.

In the first year, there were two different models being trialled for assessment at subject level – models A and B. Some higher education providers worked on one and some, like us, worked on both.

Partly because of this, taking part was a significant commitment of both academic and professional services’ time, and the compressed timeframes of the pilot required a great deal of flexibility from all involved. Although we walked into that with our eyes open, it meant that some key aspects – consulting with students, for example – were very challenging.

At the same time, we learned very quickly that the desire to pilot was a genuine one – what we were being asked to do often changed as a result of consultation meetings. We had to be willing to adjust our approach on multiple occasions, sometimes mid-activity, as guidance was clarified and survey instruments were tweaked.

More specifically, we had numerous opportunities, both in discussions with the TEF team at the OfS and with other institutions taking part, to unpick, understand and challenge a complex methodology, to get a clear idea of what a submission should contain, and to consider alternatives for its structure.

We gained critical experience in organising ourselves, in mobilising both academic and support staff in a large project that spread across the university, and in creating a coherent sense of what we were working towards.

There were also glitches along the way, thoughts about what not to do next time, and suggestions from other institutions. But it was all part of understanding how the process would work for real.

Model A was a “by exception” model that started with a provider-level rating and applied it to subjects, giving individual subjects a fuller assessment (and potentially different ratings) where their performance on the metrics differed.

Model B was a “bottom-up” model that fully assessed each subject to give subject-level ratings, feeding up into the provider-level rating.

Doing the two methodologies side by side certainly increased the logistical complexities. However, it also drew out for us that the work of creating a real sense of narrative, subject by subject, was valuable in itself.

And the difference in effort between writing a longer “by exception” narrative for model A and writing the broader model B narratives was relatively slight once we understood what needed to be done anyway to get such a project running.

The complexity of the model A system added work, particularly in the need to understand, translate and explain to a wide audience why subjects were rated as they were.

The more broad-brush model B posed complexities of its own: how do you do justice to an academic discipline in very limited space? And how do you talk coherently about grouped subjects that may not actually be that cognate?

As pilot institutions, we were clear about the downsides of both models, and it is gratifying that the approach for the second year of the pilot takes this into account and is neither of them. It will be interesting to see how the revised model – a comprehensive assessment, which we hope will retain the best features of both – is received by pilot institutions this time around.

It was, however, a major learning opportunity, and we have no doubt that it has benefited our readiness for TEF in numerous ways. We have looked hard at ourselves and had conversations that we would not have had otherwise. We engaged in a dialogue around creating the TEF subject submissions that made us think in different ways. In fact, it stopped feeling wholly like a pilot and began to feel more like a genuine exercise in self-reflection.

Any doubts I had about the value of participating vanished a few months ago when I was at an event with colleagues from institutions that had not been part of the pilot. TEF had not been as much of a priority for them, and it made me realise how far our thinking had evolved, from the first read-through of the specification documents to the point of submission.

I hope also that we made a positive contribution to the process as a whole and to the evolution of subject-level TEF. If so, it was down to a lot of hard work and the support of colleagues at the OfS, but also, crucially, to the keen and supportive sense of dialogue between all the universities and colleges involved. I look forward to a similar experience this year.

Garrick Fincham is head of planning at the University of East Anglia.
