The UK’s percentage scale is unfair and fuels grade inflation

Unequal degree classification ranges mean that improvements get more reward at the higher end, say Andy Grayson, Susannah Lamb and Chris Royle

12 July 2021

As another assessment season reaches its peak in the UK and elsewhere, academics across the land are busy totting up their students’ scores and converting them into degree classifications. But with so much traditional practice up for grabs amid the disruption of the pandemic, it is worth reflecting on whether the system is truly fit for purpose.

A percentage scale is used routinely in UK higher education. By convention, scores between 70 and 100 per cent denote first-class performance, while scores between 0 and 39 are classified as “fails”. So, between them, the first-class and fail ranges account for 70 per cent of the scale.

Yet most students receive marks that are crammed into the remaining 30 per cent. This makes little sense in itself. But the problem goes deeper than that.

Historically, some markers tended to place implicit, subjective ceilings on the marks they were prepared to award. This meant that some students would fall foul of having their work assessed by a “hard marker”. In an effort to combat this, universities (urged on, rightly, by generations of external examiners) have established structures to encourage usage of the full range of available scores.

This often includes specifying the marks that can be awarded for different levels of performance, so that when two different markers make the same criteria-driven judgments about equivalent pieces of work, they award the same score. The “2-5-8 system” is one common example. In the upper second-class range, for instance, it stipulates that 65 denotes a “mid” 2(i)-quality piece of work, with 62 and 68 available for minor adjustments down or up. Translating this into the first-class range would set the top mark at 78 for a “high” first.

But why have a 100-point scale that stops at 78 (or, for that matter, starts at 30)? That would be fair, but stupid. Instead, what happens is that things get stretched at the top end. Typically, we might see something like 72 awarded for a “low” first, 80 for a “mid” first, 90 for a “high” first, and 100 for the odd exceptional piece of work.
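
To make those conventions concrete, here is a minimal sketch in Python of the descriptor-to-mark mappings described above. The first-class and 2(i) figures are the ones quoted in this article; the 2(ii) and third-class values simply extend the same 2-5-8 pattern, and real institutions vary in the detail.

```python
# Illustrative descriptor-to-mark mappings, using the figures quoted above.
# The 2(ii) and third-class bands are assumed extensions of the same pattern.
two_five_eight = {
    "2(i)":  {"low": 62, "mid": 65, "high": 68},
    "2(ii)": {"low": 52, "mid": 55, "high": 58},
    "third": {"low": 42, "mid": 45, "high": 48},
}

# The stretched first-class band uses much larger steps, so that the scale
# does not simply stop at 78.
stretched_first = {"low": 72, "mid": 80, "high": 90, "exceptional": 100}

print(two_five_eight["2(i)"]["high"] - two_five_eight["2(i)"]["mid"])  # 3 marks per step
print(stretched_first["high"] - stretched_first["mid"])                # 10 marks per step
```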

Let’s leave aside the fact that every university does this differently, such that students performing at the same standard at different institutions will get different marks. The bigger problem lies in what happens within single universities.

Take student A, who is, to date, averaging a high upper second, equating to 68. For their next piece of work, they improve and achieve a high first, for which they receive a 90. All well and good. We must value the improvements that students make in their work.

Meanwhile, on the same course, student B is averaging a high lower second, equating to 58. For their next piece of work, they make a jump in improvement of the same magnitude, to a high upper second. They are awarded 68. Well done them.

These two students have both improved their work in ways that an assessment system should value equivalently. But student A gets 22 extra marks to feed into their degree outcome, while student B only gets 10. The reverse effect happens at the bottom end, with students being over-penalised in ways that do not happen higher up.
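
Spelled out as arithmetic, using the marks quoted above, the asymmetry looks like this:

```python
# Students A and B, using the marks quoted above. Each improves by exactly
# one classification band ("high" in one band to "high" in the next).
student_a = {"before": 68, "after": 90}   # high 2(i) -> high first
student_b = {"before": 58, "after": 68}   # high 2(ii) -> high 2(i)

gain_a = student_a["after"] - student_a["before"]   # 22 marks
gain_b = student_b["after"] - student_b["before"]   # 10 marks

print(gain_a, gain_b)  # 22 10 -- more than twice the reward for an equivalent improvement
```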

This can’t be fair. All students who come to university, whatever their trajectory through a course, should have equal opportunities to learn and improve. A student who is working to the best of their abilities in the middle of the scale, for example, should have access to the same rewards for improved performance as those achieving at the top end of the scale.

The 0-100 scale is so familiar that we tend to look straight through it, its structural problems hidden in plain sight. But it wouldn’t, in our view, survive a proper equal opportunities audit.

One could argue that improvements at the top end of the scale are harder to make, and therefore worthy of greater reward. But that argument is not, as a rule, explicated in assessment systems – and for good reason. It is a rather spurious, after-the-fact justification of an essentially arbitrary decision, made some time in our distant past, to make 70 the threshold for first-class honours.

The large numerical scope for marks to be awarded above that threshold is also, in our view, one of the drivers of grade inflation. The answer to both problems is to replace the percentage scale with one whose steps are uniform. In a 0-16 “grade-point” structure, for example, every step up or down the scale attracts the same reward or cost for improved or worsened performance.

In the examples above, students A and B would both gain three extra points. That’s fair. It also happens to be non-inflationary. If we’re in the business of improving assessment to make it fairer to all students, this would be a good place to start.
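
One possible 0-16 ladder, sketched purely to illustrate the principle (the exact point values below are assumptions for illustration, not a prescription), makes the point:

```python
# A hypothetical 0-16 grade-point ladder: three steps per classification band,
# plus a top point for exceptional work. The exact values are assumptions
# made for illustration only.
grade_points = {
    "low third": 4,  "mid third": 5,  "high third": 6,
    "low 2(ii)": 7,  "mid 2(ii)": 8,  "high 2(ii)": 9,
    "low 2(i)": 10,  "mid 2(i)": 11,  "high 2(i)": 12,
    "low first": 13, "mid first": 14, "high first": 15,
    "exceptional": 16,
}

# The same improvements as students A and B above:
gain_a = grade_points["high first"] - grade_points["high 2(i)"]   # 3
gain_b = grade_points["high 2(i)"] - grade_points["high 2(ii)"]   # 3
print(gain_a, gain_b)  # 3 3 -- equal reward for equivalent improvement
```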

Andy Grayson is associate professor of psychology, Susannah Lamb is head of academic quality and Chris Royle is academic standards and quality manager for the School of Animal, Rural and Environmental Sciences at Nottingham Trent University.

Reader's comments (1)

I do not buy the arguments in this article. In my career, I have awarded marks in the 80s a few times and never ones in the 90s (except in straightforward first-year exercises, for example). Among the thousands of students who have passed through, there have been one or two who achieved an overall average of over 90, but they are very rare. I know of colleagues who have given dissertation marks of over 90, but in those cases the work has then been published in a top journal. So, no grade inflation occurs from the exceptional, brilliant students, and the rest of the scale puts students in a reasonable order, which is its function.