The standards ball Labor set rolling in 1992 failed to reach its target but rocked university complacency on its way, claims Don Aitkin.
The Australian government is for the first time considering making substantial cuts to the operating grants of universities. One cut is already plain, since it can be inferred from the government's election policy document: the funding for quality assurance in higher education.
Australia went down the quality track in 1992, when Peter Baldwin, then Labor minister for higher education, secured Aus$70 million (£35 million) for each of the next three years from his somewhat reluctant cabinet colleagues to be spent on quality enhancement in higher education.
This was not what the universities wanted; they had been seeking higher funding per student, on the grounds that Commonwealth support for higher education had fallen. Peter Baldwin's unexpected coup caught vice chancellors in particular by surprise.
How would the "quality money", as it was ever afterwards called, be distributed? It did not take the Australian Vice-Chancellors' Committee (AVCC) long to decide that the allocation should be in the hands of the universities rather than the government, even if it meant university independence was somehow compromised; better that than a tribe of government inspectors poking around.
The result was the Committee for Quality Assurance in Higher Education (CQAHE). As the acronym could not be pronounced felicitously, the committee was usually referred to as the "quality committee" or sometimes, after its chair, "the Wilson committee". Brian Wilson was vice chancellor of the University of Queensland, a former president of the AVCC and regarded as straightforward and honourable.
It was not a task that he accepted with enthusiasm. His committee colleagues were academics with some claim to knowledge in the field of quality assurance plus people from outside higher education with comparable experience. The committee had a tiny staff from the Department of Employment, Education and Training, supplemented from the universities themselves during university visits.
The work of the committee began before there had been any real discussion of its role. Early in 1993, Peter Baldwin was replaced by Kim Beazley, who had rather different expectations of the outcome.
Baldwin had not wanted an explicit ranking of universities; Beazley did. It can safely be said that no vice chancellor wanted a ranking outcome if the result were to place his or her own university in other than the first rank. Beyond that there was not much agreement.
So the committee began its first round in 1993 without any clear sense on the part of the universities about what would be wanted.
By the time the committee reported there had been another ministerial change, with Kim Beazley replaced by Simon Crean. The new minister was happy to accept the committee's findings, whatever they were, and had no fixed views about quality in universities.
The result was a bit of a muddle. All universities were pronounced to be excellent, but there were six ranks of excellence, perhaps from "really and truly excellent" to "decently excellent, all things considered". The older universities were mostly to be found in the top ranks but not all were, and there were a few surprising judgements of relative order. The money was to go mostly to the excellent and all of it was to support excellence, not to repair deficiencies.
Round two was to focus on teaching, an area in which the newer universities expected to do rather better. The committee made a few changes, while the new minister made clear that he saw no need for the ranking system to be the same as that for round one.
By now universities were learning the game. The more successful participants in round one had their submissions studied in the spirit of true scholarship, people who had been attached to the committee found themselves unexpectedly popular as invited visitors to other universities, and those who were responsible for university submissions tried their hands at such theatrical activities as casting, script conferences and dress rehearsals.
The committee, for its part, tried to even things out by letting it be known that it was awake to ploys of this kind and would discount them. The outcome made it abundantly plain that the committee was no better able to decide what good teaching was than anyone else.
Inevitably, perhaps, it concentrated on quality assurance policies, which produced results pleasing to the older institutions but hard to swallow as a statement of excellence in practice.
Many of the newer universities, which had concentrated on teaching because they were not funded for research, once again found themselves towards the bottom of the pile because teaching excellence was integral and implicit in what they did, rather than additional. But the ranking system was new and the allocation of money much more even. There was less furious complaint.
Round three focused on research and "community links". By now those universities which saw their standing related to the annual quality exercise put even more effort into their submission, data and performance on the day.
The committee's findings were once again a mixture of the predictable and the unexpected, and the allocation of funds was even closer to one based simply on the size of the operating grant. Professor Wilson went off, no doubt thankfully, to retirement.
The Australian approach to quality auditing was unique in its evaluation of institutions as a whole, rather than by discipline, and in its coupling of review with reward. The provision of significant funding undoubtedly influenced the commitment to and pace of change.
Peter Williams, who heads the audit group in the Higher Education Quality Council in the United Kingdom, believes that the process introduced greater cultural change in Australian universities than elsewhere.
The concept of quality and the introduction of processes to evaluate it are now a conscious component of the planning in all Australian universities.
The election of a new government seems to have brought to an end the notion of an annual quality exercise, and even put into doubt the universities' receipt of the proceeds of round three.
After three years universities had dropped much of their opposition to the inquisition on quality. There is little doubt that one excellent outcome was the much greater attention to quality issues, especially in teaching. A couple more rounds, one devoted to management and one to information technology, might have been a good thing and would have maintained the momentum for explicit quality assurance.
As for rankings, no one can now remember where any university other than their own stood in rounds one and two, and round three's pecking order is fast being forgotten. The universities have survived being ranked.
A commonsense appraisal might be that Australian universities are pretty good in comparison with any other national set of universities, that each has some areas that are outstanding, and that each has areas that could do with improvement.
The committee's work has probably improved quality and for a much longer time span than the three years of its existence. It would be hard not to grade its own work "effective".
Don Aitkin is vice chancellor of the University of Canberra.