Academics wouldn’t drink to the handover of marking to AI

Would someone who found effective software for their department never have to buy a beer again? Not according to my survey, says Paul Breen

27 January 2024

Generative AI has already left its mark on higher education, with many university teachers encouraging their students to use it to help with their research and idea organisation. But could the technology also be trusted to take care of student assessment? My own survey of more than 30 university lecturers suggests that the jury is still out.

There is certainly a realisation that, for better or worse, AI is here to stay. Presently, much of the focus is on ChatGPT, but that’s just the first wave of the new electronic tide sweeping through higher education. We’re already starting to see software that’s more tailored to specific needs, such as the marking of assessments.

But is there any substance behind the sales pitches?

Across the board, teachers see benefits to generative AI when it comes to “automating routine tasks like multiple-choice questions” and “providing quick feedback on objective assessments”. One respondent, for instance, suggested that for questions to which there are straightforwardly correct and incorrect answers, such as defining vapour pressure, “AI can do a great job”. Similarly, the technology might be effective in giving an indication of a non-native speaker’s English-language ability.

However, AI is perceived to be unable to assess less quantifiable attributes, such as “expert knowledge”, “ethical awareness”, “compassion”, “critical thinking” and “creativity”. One respondent remarked that generative AI displays “little of the contextual understanding that is essential in academic assignments”. Overwhelmingly, teachers see the need for human judgement, especially in quality assurance procedures, and several cited students’ own expectations that their work would be marked by a human being.

There was also strong pushback against the common perception that teachers do not enjoy the marking process. Most recently, I heard someone remark jokingly that, if they could find effective marking software for their department, they’d never again need to buy a beer. But it’s not marking that teachers don’t like – it’s heavy workloads and unreasonable turnaround times that offer no scope for thorough feedback. Marking assignments, particularly formative ones, is an essential part of getting to know students. More than that, they “provide a learning experience for the assessor themselves, giving insight into the efficacy of their teaching”, one respondent added.

There were also concerns about the ethics of AI, in terms of both privacy and the perception that such tools “reproduce already existing practices and exclusionary ideologies that exist in society”.

But there’s also a strong awareness that, however much they might feel like doing so, teachers can’t find “a small sandhill to bury ourselves in” when students and wider society are already using AI so freely. As one respondent put it, we have to confront and “grasp the opportunities now so that we lead AI, not the other way round”.

And respondents recognise that AI could help make learning more “tailored”, “individuated” and “personalised”, allowing for better “guidance and mentoring”. It could also be used at an aggregative level to provide feedback to whole cohorts and to “analyse student performance trends and identify patterns” that the human eye might miss. That, in turn, could feed back into individuated learning, support and targeted interventions.

None of that, however, precludes the need to have the teacher in overall charge of the assessment and analysis process. Ultimately, the goal of integrating AI assessment into teaching could be to make “the use of AI as (ir)relevant as using the internet”.

In that way, teaching will again trump technologies. That, though, is only going to happen if both teachers’ and students’ voices are heeded. There’s a very real danger that the adoption of AI marking systems could be driven by university business models, rather than teaching and learning frameworks. This mustn’t become another exercise in cost-cutting disguised as freeing teachers from the burden of marking. Rather, it’s got to be seen as an investment for an ever more high-tech future – ideally, an investment that also includes “reducing class sizes, employing more teachers and investing in assessment development”.

Either way, effective AI adoption will require investment in “rigorous testing and validation studies”, and time will need to be devoted to rethinking and “retraining [in] how we assess and how we deliver”, rather than expecting instant adaptation and integration.

For now, generative AI is best used as “a supplementary tool”, assisted by “human oversight”, as one respondent put it. Without that human element, there might be issues around student well-being, since education nowadays is seen as being primarily about the social aspect of learning, rather than mechanised processes of knowledge acquisition.

One further thought: with AI’s most distinctive skill being that of automation and organisation, perhaps higher education ultimately has more need for AI managers than for AI markers. But which human manager is going to sign off on that?!

Paul Breen is a senior EAP digital learning developer and lecturer (teaching) at UCL.


Reader's comments (2)

If using multiple choice in HE were a sensible idea [a huge IF], you would not need AI to mark it: simply a suitably formatted answer book and a suitably calibrated optical scanner. However, as your correspondents note, for most sane academic disciplines context is king, and identifying the critical analysis in written work is vital to both student and lecturer. Comparisons between AI and the internet are premature; on the latter, the jury has not yet retired to consider its verdict.
“There’s a very real danger that the adoption of AI marking systems could be driven by university business models, rather than teaching and learning frameworks.” I’d say it’s not a very real danger; it’s a foregone conclusion that will happen.