Alison Wolf

June 29, 2007

Opinion: US-style SAT testing may prove a better predictor of suitability for university than A levels

Summertime - great, even with all those master's dissertations stretching away to the horizon. At least, for most of us, the admissions process is pretty well sorted by now. There is clearing to come, but the offers are made and the rest is reasonably automatic. The students get the results they need, are accepted and arrive - or, sadly, they don't.

This could change if the Government forces some sort of "post-qualification" admissions on us. We would have to re-evaluate, in the tiny window between A-level results and the start of the academic year, students who have done better than expected. Many people in the sector have objected to this - surely correctly - on grounds of both practicality and fairness. However, the idea has emerged in response to a genuine issue: that we decide whom to admit using very imperfect information.

A recent report from the National Foundation for Educational Research should, therefore, have attracted far more interest than it has. Use of an Aptitude Test in University Entrance - A Validity Study is the first major report from a government-funded trial in which more than 8,000 A-level students also took the American SAT Reasoning Test. The trial will show how far scores on this test predict performance at university, and thus how useful they may be in identifying students whose potential is greater than their GCSE and predicted A-level grades imply.

The study grew out of a report in 2004 from the Admissions to Higher Education Steering Group, chaired by Steven Schwartz. Since then, additional testing has mushroomed, but mainly for highly oversubscribed undergraduate courses, such as medicine or law, to help differentiate between applicants with very high grades.

The Government, however, is interested in aptitude tests for the same reason it is pushing post-qualification entry: it worries about disadvantaged students. If something like the American SAT makes it easier to spot potential, it could help such students.

Hence the trial, which everyone in higher education should welcome, because more information can only improve our selection procedures. This would be helpful not only in the case of the disadvantaged, but for students whose performance at university does not match up with their school records, whatever their background. We are also all aware that A levels measure only a limited set of skills. As for the current system of giving points to an ever wider range of qualifications, it is no use at all when it comes to making hard choices between candidates.

The current report covers only the relationship between the SAT Reasoning Test (which tests maths, reading and writing) and exam results, including GCSEs and A levels. University progress will come later. SAT scores, it turns out, correlate pretty highly with A levels, and even more strongly with average GCSE scores. This makes sense, because GCSEs cover a wider and more general range of skills than the often highly specific subjects chosen for A level.

But do the scores add anything important? The researchers examined which students had better-than-expected SAT scores given their A-level and GCSE performance. The measure of disadvantage was eligibility for free school meals. Students who were eligible did no better or worse on the tests, controlling for other factors, than would be expected from either their A-level results or A levels plus GCSEs. Those learning English as a second language actually did rather worse on the SAT than their academic grades predicted; and the same was true of some non-white groups, but not others.

Most striking, to me, were the gender effects - especially in the wake of another recent report, from the Joseph Rowntree Foundation, showing working-class boys doing really badly in school. Because it turns out that males are significantly more likely than females to have higher SAT test scores than their GCSE and A-level grades predict, even after controlling for ethnicity, disadvantage, school type and first language. Put the other way round, for a given SAT score you are significantly more likely to get three A grades if you are female.

I am not sure whether this is what the Government was expecting. Nor do I know why it happens: A-level content? Exam question style? Study habits? Hormones? What it does confirm is that A-level results alone cannot be a perfect selection mechanism. No single measure can be; which is the best argument for something like a SAT, irrespective of who, in the short term, seems most likely to benefit.

Alison Wolf is Sir Roy Griffiths professor of public sector management at King's College London.
