DIY assessment for beginners

February 3, 1995

John Stoddart, chairman of the Higher Education Quality Council, made an eloquent case in The THES (December 9, 1994) for a "supportive, collective self-regulatory system" of quality assurance for British higher education.

What he omitted to acknowledge fully was the contribution to this desirable outcome made by the assessment of the quality of teaching and learning, as established by the Higher Education Funding Council for England under statutory guidance, but following consultation with and guidance from the institutions in the sector.

Assessment of the quality of the student learning experience and of student achievement, carried out by external peer review, is a vital element of the "self-regulatory" framework and should be preserved in whatever framework for quality assurance emerges from the current discussions.

It is an unpalatable fact for those who wish to convert the ongoing debate about the quality assurance systems set up after the 1991 White Paper into a battle for supremacy between "audit" (and the HEQC - owned by the institutions) and "assessment" (carried out by the funding councils) that the HEFCE has largely succeeded in delivering to the sector the system it said it wanted.

More recently the council has agreed significant changes to that system following the overwhelmingly expressed preferences of the institutions. Institutional advice has resulted in the central role for self-assessment and professional peers, and the respect for the right of autonomous institutions to set, and be judged against, their own aims and objectives.

In such an ambitious undertaking there have, of course, been problems and difficulties, but there has also been a mass of solid achievement.

At the time of writing, about 1,000 self-assessments have been received and analysed by assessors engaged by the quality assessment division of the HEFCE, about 500 assessment visits to institutions have been completed or arranged in the first 15 subjects assessed, and about 300 reports have been published, together with overview reports on the first four subjects.

More than 800 specialist assessors have been trained and put in the field and close to 500 have been recruited for the 1995/96 programme.

As a result of assessment there is strong evidence of more serious and systematic scrutiny of teaching and learning performance by institutions, of greater attention to the professional development of lecturers and other learning support staff and of consideration of how the infrastructure of universities and colleges can meet the needs of students.

The experience of being an assessor has also been valued and has in some ways made up for the loss of the subject-based professional networks which were a feature of the regime of the former Council for National Academic Awards.

Further, institutional managers have been quick to react to the marketing potential of high ratings.

Perhaps more importantly, action has been prompted in those few cases where peer assessment has concluded that students were getting a demonstrably unsatisfactory deal.

The chief frustration for institutions has been the continued duality of audit and assessment, particularly when there is "felt" to be overlap; as when auditors undertake "trails" through departments and subjects, and assessors seek evidence about systems. The issue is compounded by uncertainty about the pace and timing of the future programme, which inhibits synchronisation of internal processes (including linkage with professional bodies).

From April, the quality assessment method will be modified, again as a result of substantial external consultation and review. Consultation with the sector during 1994 focused on the possibilities of developing a "core set of aspects of provision" to structure the self-assessments and improve consistency of judgement, on the proportion of providers to be visited, the number of points on the assessment scale and on steps to improve "assessment within mission and aims and objectives". It also included a summary of the council's response to the independent report it had commissioned from Ronald Barnett and his collaborators at the Centre for Higher Education Studies.

The results of the consultation were reported in October 1994. Within the 118 responses from institutions there was strong support for universal visiting (79 per cent) and for development of a core set of aspects of provision (94 per cent, although 35 per cent suggested improvements or modifications to the profile proposed). Another decisive majority (83 per cent) made choices involving a graded profile and 65 per cent wished the overall summative judgement to be cast as a simple "threshold" statement.

Each of these firmly expressed preferences is reflected in the circular, which describes the council's intentions from April. A core set of aspects of provision is defined and non-prescriptive guidance is given on how they can be used to structure a self-assessment.

A commitment is made to visit all providers and criteria are established identifying characteristics of a successful visit. A description is also given of how judgements will be reached and recorded. The expectation is that greater understanding, on all sides, of the importance and scope of the self-assessments, in particular in testing achievement of departmental or subject aims and objectives, will play a major part in improving the process.

In this way the council has demonstrated its willingness, and its capacity, to respond to sectoral concerns and to advice. Worryingly, however, it has not succeeded in reducing or deflecting the critique from certain quarters.

A number of public statements about the process have remained belligerent and inconsistent. The THES, for example, began by describing the CHES recommendations as a formula for uniting the system and as a "lifeline". Now that many of the CHES proposals, including universal visiting, have been delivered, they are apparently "humiliating" and "baroque".

Self-regulation is difficult, as a number of precedents confirm. However effective such systems may feel to "insiders", "outside" stakeholders are difficult to convince in hard cases.

In the United Kingdom system of higher education a balance has been established between institutional responsibility and external scrutiny by professional and academic peers, leading to enhancement of quality as well as discovery and rectification of that which is unsatisfactory. This outcome has been hard-won and is to the credit of all concerned. There is no sound evidence that it offends institutional autonomy and choice of mission while it also serves to reassure those concerned about accountability for public investment.

In responding to the Secretary of State's invitation to suggest improvements, the funding council and the representative bodies should have more audiences in mind than their natural constituencies within the institutions.

The system can undoubtedly be streamlined. For example, as institutional systems of self-assessment develop, making use of external peers, the present intensity of visiting could well be moderated in the medium term. Similarly, a single agency, in which all parties can have confidence, is now an achievable goal.

But much of the hyped criticism of the current system can appear tactical and evasive and could come to threaten its achievements in the eyes of legitimate stakeholders.

In the spread of systematic external peer review, the argument for key elements of self-regulation, including assessment, has already been won. "Self-regulation" must now be maintained and developed in a way which preserves these gains and maintains public confidence in our enterprise.

David Watson is the director of the University of Brighton and the chairman of the Higher Education Funding Council for England quality assessment committee.