Do student feedback surveys offer real customer care or are they just a marketing ploy? Lee Harvey explains the University of Central England's pioneering approach to satisfaction surveys
Getting students to fill in "satisfaction" questionnaires about their lectures is now standard practice throughout higher education and reflects growing pressure from the quality movement and from paying students. But are student surveys really an active demonstration of customer care, or are they little more than a marketing ploy that does more harm than good?
While everyone is doing it, too often the emphasis is placed on intrusive and unhelpful questionnaires that reflect the concerns of teachers, not students. The results rarely do more than scratch the surface, and over-use of questionnaires leads to poll fatigue, indifference and scepticism among students about the chances of improvements ever materialising.
The University of Central England has run what it calls the Student Satisfaction approach for some years now. It relies on focus groups, linked to regular surveys, to explore student likes and dislikes, and has attracted observers from as far afield as New Zealand, Poland and Hong Kong.
The results are reported without confusing statistics and outline clear areas for improvement. All of these feed into a well-established system for action led by the vice-chancellor. For example, at the height of student expansion a few years ago, we received very poor student ratings for some aspects of our library service.
There was just not enough material to go round. As a direct consequence, our governors agreed to spend £1 million to improve the library stock. Library ratings have steadily gone up and are now satisfactory.
Most other improvements tend to be more parochial. For example, students wanted changes to our centralised word processing and computer facility.
They wanted better print quality, lower prices and changes to the layout of the room. All of that has now happened.
As far as teaching and learning are concerned, we have reacted to comments about the amount and type of lectures, the number of placements and so on.
When students raised concerns about contact times, all our social science staff were asked to post availability times on their office doors, and that has had a dramatic impact.
It is often small changes like this that can have the biggest effect on students' lives. Of course, we cannot just say yes to everything they want.
Every year students complain about the lack of car parking, and the vice-chancellor simply tells them to use public transport. And some want more personal tutorial time, which just is not available because of resource constraints.
There are three types of student feedback: institutional social surveys; student questionnaires on individual teacher performance; and qualitative surveys of course effectiveness.
Surveys of students' views on individual teachers' performance have a short shelf-life. A couple of complaints can help to highlight problematic teaching or inadequate assessment procedures but are not the whole story.
Such surveys can be a blunt instrument and should never be used alone.
They lack discrimination, tend to reinforce "safe" teaching at the expense of innovative learning facilitation and are a poor medium for monitoring subsequent action, should any result.
The focus on the individual teacher can also become too personal, which works against candid student feedback.
Institutional surveys not only give a wider view but are also easier to link into an accountable management structure. If they are to be successful, they must combine the "bottom up" collection of views from students with a "top down" response.
Despite the "customer rhetoric", students are not repeat purchasers of products but participants in a learning process designed to improve their life chances. Feedback will only play a significant role in empowering students if it leads to action.
There are three essential elements to this. First, students must be able to raise issues that are important to them.
Too often questionnaires are based on what managers or teachers think is important to students.
Second, there must be an assessment of what is important to students as well as what is satisfactory. For instance, students may be dissatisfied with both the availability of books in the library and the decoration in the student union building but are unlikely to see them as equally important.
Third, there must be an explicit action cycle with clear structures for delegating responsibility for change and for providing feedback on action to students.
Simply producing a report of findings is not the end of the process. Action must be seen to result from students' feedback; otherwise they will become sceptical of the process and less willing to take part.
There should also be a clear idea of why the information is being collected, what sort of decisions it will affect and how the information will help to implement change.
The whole action cycle is rare. In many cases, student feedback is collected but not properly analysed. Where it is analysed, it is often consigned to reports littered with statistics and tables that give little guidance for action, and have limited circulation.
Furthermore, the "feedback loop" is usually not closed: students are unlikely to be informed about the actions that follow their responses. Even rarer is an approach that plots changes and action on an annual basis.
We believe the Student Satisfaction approach pioneered at UCE remains unique in combining these three elements in a flexible methodology, readily adaptable to different cultural contexts and stakeholder groups.
It is important to get it right. Higher education can no longer get away with the idea that teaching students is an activity that goes on behind closed doors.
Lee Harvey is head of the Centre for Research into Quality at the University of Central England in Birmingham.
Full details of the Student Satisfaction approach are available at www.uce.ac.uk.