What's good for the customer

May 15, 1998

Since its inception, The Times' Good University Guide has been eagerly read and heavily criticised. On the one hand, potential undergraduate students and their advisers have welcomed a simple guide to the quality of United Kingdom higher education; on the other, critics (largely from within the sector) have questioned the validity of the guide and in particular its centrepiece league table.

One of us first became involved in the guide as one of those critics. Subsequently, we were asked to generate the statistical material, and this we have done for the past two years. We aim to bridge the gulf between those who seek objective measures of comparative quality and those who claim they are not available. In so doing, we hope to provide potential entrants to the sector and their advisers with objective, robust and relevant measures to take into consideration.

Our stance is unashamedly client-centred, with the reader and not the institutions having primacy of place.

We are well aware that the very word "good" is subjective. Although we have constructed a single league table for The Times, the intention is to provide a sufficient range of measures to enable prospective students to distil those of interest and discard the others. We also hope that the guide will encourage individuals and groups to question their preconceptions and undertake further research when deciding which university is "good" for them.

The guide has undoubtedly promoted the return of better data to the Higher Education Statistics Agency by the universities, although there remains considerable room for improvement.

However, we are well aware of the complexities and shortcomings, and these were thoroughly aired and debated within a review group set up in 1996 comprising interested parties across the sector. We are committed to continuing refinement so as to provide the reader with as fair and accurate a picture as possible.

All the raw data for the measures used this year were either provided by HESA from data obtained from the institutions themselves or published by the funding councils. The individual universities received a complete set of their own HESA data in case they wished to question their own return. This all takes time and it is a source of regret to us that we were only able to use data of 1995-96 vintage in a publication that sees the light of day in May 1998.

One difficulty is that some definitions change over time. This is perhaps best exemplified by the two series of assessments of teaching quality in England, which have results on different scales. Another difficulty is that universities housing unique national resources, such as the Bodleian Library at Oxford or Bath with its information and data centres, show inflated spending because of these resources. It could, of course, be argued that these national resources are where they are as a direct result of the unique expertise available at these places. On the other hand, many of their users are not members of the university where they are located and so crediting them to that single university might be regarded as unfair. Yet another difficulty is that all employment is included in the graduate destinations measure, though it has been suggested that short-term employment should be excluded.

This year we have reconsidered all the data provided to HESA that might be usable and useful for comparative purposes. As a result we have introduced two new spending measures, namely academic computing and student facilities. We have also replaced entry requirements with actual entry standards and have taken the opportunity to leave out student accommodation. This is not because accommodation is unimportant - it often matters a great deal - but because there is widespread concern about the quality of the available data.

There are also issues to be faced in the use made of the data in constructing the tables. For example, how should the measures be scaled and should they be given equal weight?
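One illustrative possibility - not the guide's published method, and using purely invented figures - is to standardise each measure as a z-score and then sum the standardised values with chosen weights; equal weighting is simply the special case in which every weight is the same. A minimal Python sketch:

    from statistics import mean, stdev

    # Hypothetical sketch: one (invented) value per university for each measure.
    measures = {
        "teaching_score": [22.1, 20.5, 19.8, 23.0],
        "library_spend":  [540.0, 480.0, 610.0, 500.0],
        "entry_points":   [24.0, 18.5, 21.0, 27.5],
    }
    weights = {"teaching_score": 1.0, "library_spend": 1.0, "entry_points": 1.0}

    def z_scores(values):
        # Standardise a measure so that different units become comparable.
        m, s = mean(values), stdev(values)
        return [(v - m) / s for v in values]

    standardised = {name: z_scores(vals) for name, vals in measures.items()}
    n_universities = len(measures["entry_points"])

    # Composite score: weighted sum of the standardised measures.
    composite = [
        sum(weights[name] * standardised[name][i] for name in measures)
        for i in range(n_universities)
    ]
    # Rank universities by composite score, best first.
    ranking = sorted(range(n_universities), key=lambda i: composite[i], reverse=True)
    print(ranking)

Changing the weights (or the scaling rule) can reorder the table even when the underlying data are unchanged, which is precisely why these choices matter.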

What of the future? The editor might like to return to an element such as completion rates if robust data become available. We also recognise that a measure based on A-level grades is less appropriate for universities that admit a large proportion of students with non-traditional entry qualifications. Similarly, there has been much clamour for - and much criticism of - the inclusion of a measure of added value. But it is difficult to see how this can be done in a fair and defensible way. For example, universities that come near the top of the only readily available input measure (A-level grades, where the maximum score is 30) would find it virtually impossible to demonstrate any added value regardless of whether or not they do in fact add value.
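A hypothetical arithmetic sketch of that ceiling effect (the prediction rule below is invented for illustration, not drawn from the guide): suppose an "expected" outcome were predicted from average A-level entry points out of the 30-point maximum, and added value were the gap between actual and expected outcomes.

    # Hypothetical ceiling effect: institutions recruiting at the 30-point
    # maximum have almost no headroom in which to show "added value".
    MAX_POINTS = 30.0

    def expected_outcome(avg_entry_points):
        # Invented linear prediction on a 0-100 outcome scale:
        # stronger intake -> higher expected outcome.
        return 60.0 + 40.0 * avg_entry_points / MAX_POINTS

    for intake in (18.0, 24.0, 30.0):
        headroom = 100.0 - expected_outcome(intake)
        print(intake, expected_outcome(intake), headroom)
    # An intake averaging 30 points is already "expected" to score 100,
    # leaving zero headroom, however much value is actually added.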

There is a case for attempting to capture more effectively the student experience and student satisfaction. This would need to be subject-specific but would probably have to be specially commissioned research, along the lines of the Australian Course Experience Questionnaire.

Finally, international comparisons might be made. This would no doubt be of interest to prospective students overseas who, assailed by organisations and institutions from Australia, Canada, the UK and the United States, could be forgiven for being bewildered by the choice.
