Jumping through hoops on a white elephant: a survey signifying nothing

Shallow, costly, widely manipulated and methodologically worthless: ex-HEA director Lee Harvey on a 'laughable' National Student Survey

June 12, 2008

The National Student Survey is rapidly descending into a farce. This is not entirely unpredictable.

As has been shown in other spheres, such as external quality assurance, institutions and academics are good at manipulation. While quality assurance is flexible and can be adapted to minimise the game playing, the NSS is a simplistic device that is easy to outmanoeuvre. Times Higher Education has highlighted London Metropolitan University and Kingston University, two cases where blatant attempts at manipulation have come into the public gaze. They are not alone. Rumours abounded well in advance of these revelations that various institutions were encouraging positive student responses, and the reactions to the Kingston case, along with confessions from other institutions, are testament to the fraudulent nature of the NSS.

However, it is not just the fact that people cheat, for understandable motives, that renders the NSS next to useless. It purports to be a universal indicator of student views and is reported as a satisfaction survey, despite vehement claims when it was established that it would not be about satisfaction. But then, similar claims were also made about its not being used as a ranking tool. The NSS is a pseudoscientific tool purporting to be reliable on the spurious psychologistic grounds that there is some statistical congruence between the responses to a small group of agree-disagree questions around a common topic. These so-called scales purport to measure such things as "assessment" and "personal development". As long ago as the early 1960s, Paul Lazarsfeld, the eminent Columbia University quantitative sociologist and himself a proponent of quantitative scales, demonstrated the purely pragmatic process of scale-item selection. His seminal thesis on the "interchangeability of indicators", ignored by the creators of the course evaluation questionnaire on which the NSS is based, showed that more or less any question phrasing on a broad topic would produce so-called statistical reliability.
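Lazarsfeld's point is easily demonstrated. The sketch below (in Python, using entirely made-up data; it is an illustration of the statistical mechanism, not an analysis of real NSS responses) simulates students whose every answer is simply a general disposition plus noise, then computes Cronbach's alpha, the standard reliability statistic for such scales. The items cohere statistically even though no item measures anything in particular.

```python
import numpy as np

rng = np.random.default_rng(0)
n_students, n_items = 1000, 4

# Each simulated student has a single underlying "general disposition";
# every item response is just that disposition plus noise, regardless of
# what the item nominally asks about.
disposition = rng.normal(0, 1, n_students)
responses = disposition[:, None] + rng.normal(0, 1, (n_students, n_items))

# Map responses onto a 1-5 agree/disagree scale, as in the NSS.
likert = np.clip(np.round(3 + responses), 1, 5)

# Cronbach's alpha, the conventional "reliability" statistic for a scale.
k = n_items
alpha = (k / (k - 1)) * (1 - likert.var(axis=0, ddof=1).sum()
                         / likert.sum(axis=1).var(ddof=1))
print(f"Cronbach's alpha: {alpha:.2f}")  # roughly 0.75: a "reliable" scale
```

An alpha of that size would conventionally be reported as evidence of a sound scale, which is precisely the problem: statistical reliability comes cheap.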

The key issue is not reliability but validity. In this, the NSS is sadly lacking. Validity is about the questions actually providing indicators of something worthwhile, not something statistically comfortable. The NSS is at best a compromise, with a set of bland questions. That these are formed into scales and assumed to measure complex concepts is laughable.

More than that, the NSS serves no purpose other than to rank programmes and institutions, which it was supposed not to do. The ranking is also meaningless, as the vast majority of institutions fall within a narrow range that is covered by sampling error. Yet there are ridiculous claims of top ranking on this or that interpretation of the tables, either for whole institutions or on a subject basis. The rankings are so silly that they deserve no further analysis.
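Some rough arithmetic (with illustrative figures, not actual NSS data) shows how easily sampling error swamps the differences between institutions:

```python
import math

# Illustrative numbers only: suppose a course has 100 NSS respondents and
# 80 per cent of them tick "agree" on a question.
p, n = 0.80, 100
margin = 1.96 * math.sqrt(p * (1 - p) / n)  # 95% confidence half-width
print(f"80% plus or minus {margin * 100:.1f} points")  # about 7.8 points
```

A gap of a few percentage points between two institutions, the sort of gap that separates long stretches of the table, sits comfortably inside that margin.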

What the NSS should be doing is acting as an improvement tool, but it is too generalised and too focused on rankings for this to be the case. It takes no account of the specific context and institutional setting, which any genuine improvement tool must. The broad scale scores are no help to an institution in making targeted improvements. Indeed, some institutions have abandoned excellent improvement tools that pinpointed problems using student-generated questions and introduced diluted versions of the NSS instead. These are usually aimed at pre-final-year students with the express aim of identifying areas where the ranking on the limited NSS scales is low and addressing them with a view to improving ranking rather than improving the student experience; the two are by no means the same thing.

This NSS white elephant is, unfortunately, extremely expensive, and the money would be much better spent on real improvement. As so many bloggers have noted, "government league and performance tables are suspect" because "organisational performance has been changed in order to meet scoring criteria but with no improvement to the quality or efficiency of the service".

What we have is an illusion of a survey of student views. However, it is so superficial and so open to abuse as to be useless. A much better exercise would be to explore student engagement to find out what students really seek from their higher education experience, rather than imposing a set of categories that have no resonance for most students and don't address their real priorities.

Worse, the NSS scores are presented on the Unistats website as though they were a definitive indicator of student perspectives. Gaps occur because some institutions or courses do not reach the magic minimum of 23 respondents. The real crime of the NSS, though, is that some unsuspecting potential students might be taking the results seriously.

Lee Harvey was director of research and evaluation at the Higher Education Academy until he left his post last month by mutual agreement.
