How to build the teaching excellence framework

Nitpicking the TEF to pieces would be a mistake, says Derfel Owen. Far better to engage – and here’s how

July 10, 2015
Workmen working on building foundations

Jo Johnson, the new minister for higher education, surprised nobody last week by asserting his determination to deliver the Conservative Party’s manifesto commitment to “introduce a framework to recognise universities offering the highest teaching quality”. This will now be known as the teaching excellence framework.

He did not go into a great deal of detail in his speech about what the framework would look like or what form the recognition of high-quality teaching would take, although he did indicate that he was open to considering financial incentives and rewards.

In fact, the level of openness and willingness to listen and respond to what the sector thinks was quite striking and positive. I think it is imperative that the sector engage positively with the Department for Business, Innovation and Skills over this, because I am not sure that kicking it into the long grass or hoping for death by a thousand cuts will work this time.

I think a key consideration in developing the TEF ought to be that it is light touch. I am sure all universities that participated in the research excellence framework would want to avoid replicating the bureaucratic burden involved in that exercise. The TEF should be based on metrics that already exist or could be gathered with relative ease.

Johnson was also clear about this in his speech when he said: “I have no intention of replicating the individual and institutional burdens of the REF. I am clear that any external review must be proportionate and light touch, not big, bossy and bureaucratic.”

Based on that principle, I think the data should fall into three themes: input, output and peer judgement.

1. Input

By input, I mean the quality of the content and delivery of teaching and wider academic experience of students.

There are a number of datasets that could be used, or newly collected, to inform this; I think we have one already in the bag and a couple of others that could be gathered quite easily if we try hard enough.

  • Qualified teacher status: the Higher Education Statistics Agency has already been gathering data on this. I know it’s not a measure that the whole sector has united behind yet, but with one more push and some finessing of the Higher Education Academy’s Professional Standards Framework, we could get some sector-wide, comparable data. There will be plenty of wailing and gnashing of teeth about this, I am sure, but it is notable that this came top of student expectations when the Higher Education Policy Institute surveyed them recently. We should meet that expectation; it won’t do anyone any harm!
  • Research impact: I was trying to think of a way of quantifying the fact that students should have the opportunity at university to learn at the very boundaries of knowledge and understanding in their chosen subject, and to learn from those involved in defining and discovering those boundaries. The REF scores would tell some of this story, but it occurred to me that the impact scores are probably a more effective and targeted way of measuring the ability to communicate world-leading research and to engage students with it.
  • Enrichment: it is widely accepted that a student’s academic experience is defined not only in the classroom, but also by the co-curricular activities that are available. We are already engaged in an initiative to gather all this data to inform the Higher Education Achievement Report (HEAR). Be it work placements, study abroad, community engagement, leadership of sports clubs or academic representation, these activities improve students’ academic development and make a significant contribution to an excellent learning and teaching experience. It would take some work to finesse our systems to gather this data, but it can be achieved and would make for an especially useful feature of a TEF.

2. Output

By outputs, I mean demonstrable measures of the knowledge and skills students gain during their studies. Three datasets could be used here: two that we gather already and one that could be derived from existing data.

  • Learning gain: this term means many different things to different people. As a school governor, I have become familiar with the concept of “value added”, where a contextual measure of a student’s knowledge, skills and ability is taken on entry and progression is then assessed against it at various points up to the pupil leaving the school. I am not at all convinced that a measure tracking Ucas tariff through to degree classification is the most effective measure of learning gain, mostly because of all the problems that we know exist with the comparability of degree classifications. I think a measure should be developed that looks at the entry qualifications that students arrive with and then how far they travel through higher education. That way, universities that recruit students with no or few qualifications could be rewarded, and could see that recognition increase as students achieve higher-level qualifications. For example, a student with no qualifications attending a university and achieving a Level 5 HND would have travelled the same distance as a student with traditional A levels who then achieves a Level 6 BA (Hons); a minimal sketch of this idea follows this list.
  • Employment data: we currently have the Destination of Leavers from Higher Education survey, which captures reliable data about what students are doing six months after graduation. There is talk of improving this by using HMRC data to show the real earnings of students at one, three, five and 10 years after graduation, which I think would be a positive development. Either way, this is a key piece of data to show what students are capable of doing and achieving after graduation, and it should be included in any measure of teaching quality.
  • Grades: as long as we have the old honours classification system, the data are not comparable. But if a concerted effort is made to move to the grade-point average system, we might achieve more comparability and be able to develop data that reflect and measure that.
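
To make the “distance travelled” idea above a little more concrete, here is a minimal sketch (in Python) that maps entry and exit qualifications onto framework levels and counts the levels climbed. The level assignments, especially where a student with no formal qualifications starts, are assumptions made for illustration, and how each step should be weighted, for instance so that the two journeys in the example count equally, is an open policy choice rather than anything this sketch settles.

    # A minimal sketch of "learning gain" as distance travelled up the
    # qualifications ladder. Level numbers follow the standard framework
    # (A levels sit at Level 3, an HND at Level 5, a BA (Hons) at Level 6).
    # Where a student with no formal qualifications is placed, and how much
    # credit each step up the ladder earns, are illustrative assumptions.

    ENTRY_LEVELS = {
        "no formal qualifications": 0,  # assumed starting point
        "GCSEs": 2,
        "A levels": 3,
    }

    EXIT_LEVELS = {
        "HNC": 4,
        "HND": 5,
        "BA (Hons)": 6,
    }

    def levels_climbed(entry, award):
        """Raw distance travelled: framework levels between entry and exit."""
        return EXIT_LEVELS[award] - ENTRY_LEVELS[entry]

    # The two students from the example above; how these journeys are
    # weighted against each other is a policy choice, not a given.
    print(levels_climbed("no formal qualifications", "HND"))  # 5 levels
    print(levels_climbed("A levels", "BA (Hons)"))            # 3 levels

A real measure would also need to handle partial completion, credit transfer and non-standard entry routes, all of which this sketch ignores.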

3. Peer review

A purely data-driven approach would not give sufficient space for input from peers. I include students as part of the peer community here because I feel strongly that their feedback and views on the quality of teaching should be considered of equal value to, if not greater value than, those of academic peers: above all else, students are the only people who can tell us what it is like to be a student.

  • The National Student Survey (extended to postgraduates): I’m pleased that the recent review of the NSS appears to have concluded that the survey is immensely valuable and not in need of a radical overhaul. However, it needs refreshing to sort out some of the dated and more nebulous questions (personal development, anyone?!). So this survey and its outcomes should undoubtedly form an important part of the TEF; I would argue that it should be heavily weighted in comparison to other metrics, too.
  • External review: I am not advocating a return to subject review or inspection regimes of old; they ran out of steam more than 15 years ago, and everything we learned about them then still stands (burden, bureaucracy, gaming the system, diminishing quality of the “inspectors/reviewers”). But I think a positive judgement from the university’s most recent Quality Assurance Agency institutional review should be a prerequisite for inclusion in the TEF, to demonstrate that core academic standards are being maintained and to show that the teaching is built on solid foundations.

This is my attempt, from my own perspective, to set out what should be included in a teaching excellence framework. Each of the individual datasets listed above can (and I am sure will) be picked off on its own and rubbished as an insufficient measure of teaching quality. This may be true: for example, the DLHE only tells you whether students get jobs; it doesn’t tell you whether they would have got them anyway or whether their degree actually helped them. But nested in with a number of other datasets (some that we gather already, some that we need to start gathering), it can start to piece together a rich picture of the quality of teaching and learning.
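
To give a feel for how such a basket of metrics might be nested together, here is a hedged sketch of a weighted composite score. The metric names, the assumption that each has already been normalised to a common 0-1 scale, and the weights themselves are all illustrative inventions; the only choice lifted from the argument above is that the student voice, via the NSS, carries the heaviest single weight.

    # Illustrative only: nesting normalised TEF-style metrics into a single
    # weighted composite. The metric names, the 0-1 normalisation and the
    # weights are assumptions made for this sketch; the one choice taken
    # from the argument above is that NSS results carry the heaviest weight.

    WEIGHTS = {
        "nss_satisfaction": 0.30,    # peer review: student feedback
        "qualified_teachers": 0.15,  # input: staff with a teaching qualification
        "ref_impact": 0.10,          # input: REF impact profile
        "enrichment": 0.10,          # input: HEAR-recorded co-curricular activity
        "learning_gain": 0.20,       # output: distance travelled
        "graduate_outcomes": 0.15,   # output: DLHE / earnings data
    }

    def composite_score(metrics):
        """Weighted sum of metrics that have already been normalised to 0-1."""
        assert set(metrics) == set(WEIGHTS), "every metric must be supplied"
        return sum(WEIGHTS[name] * metrics[name] for name in WEIGHTS)

    # A made-up institution, purely to show the mechanics:
    example = {
        "nss_satisfaction": 0.86,
        "qualified_teachers": 0.70,
        "ref_impact": 0.55,
        "enrichment": 0.40,
        "learning_gain": 0.65,
        "graduate_outcomes": 0.75,
    }
    print(composite_score(example))

The point is the nesting rather than the numbers: no single dataset is decisive, and a weak reading on one metric is set in the context of all the others.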

This approach of picking apart the minutiae has been very successful for the sector in the past and it may succeed again, but I am doubtful. We have an opening to engage positively and constructively with this initiative and to shape it; if we try to nitpick and shower the whole thing in treacle, I suspect we’ll get the sort of TEF we deserve!

Derfel Owen is director of academic services at UCL. This post was first published on his blog page.


Reader's comments (2)

You are correct that each data set you suggest can and will be rubbished -- because they are rubbish. A meta-analysis of worthless data will not help us. The fundamental problems with using employment data (notably that social capital is a major factor regardless of teaching quality) will not go away because you compare the data to the equally and notoriously flawed NSS. Basically, you are suggesting that we dilute crap with crap. Sorry, but no. Of course, rather than spending money and time hand-wringing about statistics we all know are worthless, would student satisfaction and teaching not improve if more money actually went into hiring teaching staff? More courses would be on offer, module sizes would be reduced, staff-student ratios would improve, and existing teaching staff would be less overworked. While the idea of a TEF may look great from the position of university managers, who seem to be collectively and pathologically obsessed with monitoring and micromanaging academics yet are rarely at the coal face of either research or teaching -- for the rest of us this is the stuff of nightmares. PS. Academics will have no opportunity for serious engagement. Any engagement that occurs will be at the managerial level. Long gone are the days when VCs and PVCs actually bother listening to working academics.
Derfel - thank you - whilst the rest of us are still on the floor, reeling from the HEFCE consultation and the Minister's Speech on the TEF (and the proposed reincarnation of CNAA) you're back up on your feet swinging punches - but, is this about teaching or learning? Or both? If teaching, is it about the quality of what is delivered or the way in which it is delivered or both? What is teaching excellence? It needs defining in order to ascertain how to measure it - if it is demonstrating the best possible knowledge delivered in the best possible way taking account of diversity in the student body, then we can identify appropriate measures. If learning, then what is the quality of the learning resulting from teaching? Excellent teaching does not necessarily equate to excellent learning – how do we observe the process of learning in order to measure excellence? Are we evaluating teaching in terms of the learning? Are we measuring individuals, departments or institutions? Whatever we do, we need to be cognisant of the limitations - can students effectively evaluate teaching? Peer observation of teaching is only a snapshot, not a long-term evaluation and so on...
