Teaching metrics: what on earth will anyone learn from them?

An aggregated university score would be designed to serve government bean counters, not students, says Athene Donald

December 1, 2015

In the summer, metrics looked like they had been substantially laid to rest for the higher education sector: The Metric Tide report, written at the behest of the (probably about-to-be-late-lamented) Higher Education Funding Council for England (Hefce), took many pages to point out that metrics alone would never be a good way of assessing excellence in our universities.

Whether you care to use h-indices, citations, journal impact factors, or pounds per anything, metrics have their limitations. Indeed, having read the report, it is hard not to conclude that the misery of the research excellence framework (REF) cannot have been entirely bad, because there was at least some sense of people making judgements based on their professional expertise rather than slavishly following one number or another (with all the perverse incentives such figures can induce).

However, the government still wants to measure us, still wants to be sure its money is not “wasted” and that students as consumers are not being short-changed. So assessment by metric continues to be very much in the air, whether in the context of teaching or research.

I find this deeply dispiriting. If the government thinks it is going to do things in a “light touch” way as regards teaching, that is bound to mean some sort of metrics. However, if universities are going to be scored across all their courses with a single figure, what on earth is anyone going to learn from this? How will a physics course with a heavy load of laboratory work be rendered equivalent to a course in languages with a year abroad? If my department happens to irritate all of its students for some reason, how will that be factored into a score for the whole university? Is a university with a lot of mediocrely OK courses to be rated better than one that offers a mixed bunch of high and low scores?


I can see no way in which an aggregated figure can be meaningful to a student – but then it’s really designed to serve a government bean counter deciding whether or not the tuition cap can be raised. (Which means that’s a rather important government bean counter.)

As a Cambridge academic I have a more specific worry about how the supervision system will be factored in. This small-group teaching (typically two to three students with a teacher) is very effective at ensuring that all students get to try out their ideas and receive personal feedback on whether these were right or wrong. It is very difficult for any student to sit quietly for an hour and never utter a word, as only too often can happen in larger groups. However, such teaching is frequently arranged through a college, not the university; some of it will be done by professors, some by PhD students.


The quality, inevitably, will be variable – and quite likely to be uncorrelated with seniority or any other obvious metric. So, how would any department “score” such teaching when they have no direct control over it? And how would this then be added in to the entire university figure, given that we are a collegiate university?

Of course, one could avoid all such nasty reality by simply using the National Student Survey. As many voices have pointed out, a satisfied student is not the same thing as a well-educated student. Referring once again to the burden of laboratory work, we know it’s not loved by many students but, if they never get familiar with basic skills and equipment, how can they do more interesting stuff such as research? So, a well-educated student may indeed be a somewhat unhappy student who can’t yet appreciate why they were taught some specifics. A satisfied student may be the one who gets access to all the overheads and never has to turn up to a 9am lecture at all. So, I hope the survey scores will not trump any other way of determining whether teaching is “good”.

Aside from government bean counters, the other people who really care about the scores are those who devise – and utilise – league tables. Most universities, including the University of Poppleton, can find some appropriate measure by which they are in the top 10. Such league tables have proliferated. Many of them lack transparency or their figures of merit are not particularly helpful. By relying heavily on these we are in danger of distorting what we do and how universities present themselves.

Should universities fight against the use of numbers that do not ultimately tell us – or students or the taxpayer – about quality in any robust way? I think the answer must be yes but, as with the whole succession of research assessment exercise/REF assessments, I fear it is a losing battle. Those individuals and institutions that have signed up to the Declaration on Research Assessment (Dora) should push to ensure that its recommendations are implemented at least internally. No more h-index comparisons at promotion panels, or decisions to exclude individuals from a shortlist because of the journal impact factors of their publications.


In practice, I have seen the former operate only half-heartedly, and I guess we should be doing more. My own field, with its wildly different publishing strategies – potentially ranging from hundreds of authors in high-energy physics to the lone-author theorist – demonstrates clearly the importance of avoiding crude criteria. (And if you look at the list of universities signed up to Dora and don’t spot Cambridge, I am assured we are signed up through our membership of the League of European Research Universities (Leru), which is a signatory. I do worry how many of my colleagues are aware of this. For the record, I specifically enquired about this, but I am also an individual signatory.)

As the dust of the chancellor’s Autumn Statement settles, and as the Department for Business, Innovation and Skills escapes comparatively unscathed despite the bullish noises about cuts emanating from Sajid Javid in advance of the statement, we as a sector still have to worry about the price we may have to pay in terms of metrics, since they look like such a convenient and cheap strategy to implement. I hope the original committee, chaired by James Wilsdon, which looked carefully at the evidence and spelled out the limitations of any metric-based measure, will stand by its conclusions.

Dame Athene Donald is professor of experimental physics, University of Cambridge, and master of Churchill College, Cambridge. She is the university’s former gender equality champion. This post first appeared on her blog.

Reader's comments (1)

One of the main problems is that (successive) Government policy on higher education (and education generally) has invariably been formed around policy-based evidence rather than evidence-based policy.
