Student AI cheating cases soar at UK universities

Figures reveal dramatic rise in AI-related misconduct at Russell Group universities, with further questions raised by sector’s ‘patchy record-keeping’ and inconsistent approach to detection

November 1, 2024

Academic misconduct offences involving generative artificial intelligence (AI) have soared at many leading UK universities, with some institutions recording up to a fifteenfold increase in suspected cases of cheating.

New figures obtained by Times Higher Education indicate that suspected cases of students illicitly using ChatGPT and other AI-assisted technologies in assessments skyrocketed in the past academic year while the number of penalties – from written warnings and grade reductions to the refusal of credits and failure of entire modules – has also increased dramatically.

At the University of Sheffield, there were 92 cases of suspected AI-related misconduct in 2023-24, for which 79 students were issued penalties, compared with just six suspected cases and six penalties in 2022-23, the year in which ChatGPT was launched. At Queen Mary University of London, there were 89 suspected cases of AI cheating in 2023-24 – all of which led to penalties – compared with 10 suspected cases and nine penalties in the prior 12 months.

At the University of Glasgow, there were 130 suspected cases of AI cheating in 2023-24, with 78 penalties imposed so far and further investigations pending, compared with 36 suspected cases and 26 penalties in 2022-23.

THE’s data, obtained via Freedom of Information requests to all 24 Russell Group members, could also raise questions about the inconsistent approach of UK universities to implementing and enforcing AI-related misconduct rules, with some universities reporting only a handful of misconduct cases or claiming to have seen no suspected cheating at all.

The London School of Economics said it had recorded 20 suspected cases of AI-related misconduct in 2023-24, and did not yet have data for penalties, compared with fewer than five suspected cases in 2022-23. Meanwhile, Queen’s University Belfast said there were “zero cases of suspected misconduct involving generative AI reported by university staff in both 2022-23 and 2023-24”.

Other institutions, such as the University of Southampton, said they did not record cases of suspected misconduct and, where misconduct was proven, did not identify specific cases involving AI. The universities of Birmingham and Exeter, as well as Imperial College London, took similar approaches, while the universities of Cardiff and Warwick said misconduct cases were handled at departmental or school level, so it would be too onerous to collate the data centrally.

Thomas Lancaster, an academic integrity expert based at Imperial, where he is a senior teaching fellow in computing, said the sector’s “patchy record-keeping relating to academic misconduct is nothing new” but “it is disappointing that more universities are not tracking this information [given] the ease [with] which GenAI access is now available to students”.

“University policies regarding GenAI use are so varied and many universities have changed their approach during the past year or two,” continued Dr Lancaster, adding that “defining and detecting misuse of GenAI is also difficult”.


“I am concerned where universities have no records of cases at all. That does not mean there are no academic integrity breaches,” he added.

Michael Veale, associate professor in digital rights and regulation at UCL, said it was understandable that there was not a consistent approach, given the difficulty of identifying AI offences.

“If everything did go centrally to be resolved, and processes were overly centralised and homogenised, you’d also probably find it’d be even harder to report academic misconduct and have it dealt with. For example, it’s very hard to find colleagues with the time to sit on panels or adjudicate on complex cases, particularly when they may need area expertise to judge appropriately,” said Dr Veale.

Des Fitzgerald, professor of medical humanities and social sciences at University College Cork, who has spoken out about the growing use of generative AI by students, said he was also sympathetic to institutions and staff grappling with generative AI misconduct because it is “generally not at all provable, even where somewhat detectable”.

“You might expect to see more strategic thinking and data-gathering happening around AI in assessment, but the reality is that this has all happened super quickly in terms of university rhythms of curriculum development and assessment,” said Professor Fitzgerald.

Instead, responsibility for the rise of AI-assisted cheating should lie with “the totally irresponsible and heedless ways these technologies have been released by the tech sector”, he said.

“The reality is that this is a whole different scale of problem than what typical plagiarism policies or procedures were meant to deal with – imagining you’re going to solve this via a plagiarism route, rather than a whole-scale rethinking of assessment, and certainly without a total rethinking of the usual take-home essay, is misguided,” said Professor Fitzgerald.

Noting that Ireland was now developing a national policy for AI use in higher education, Professor Fitzgerald said there was a need for “strong regulatory and legislative attention from national governments”.

“It's just not reasonable to expect universities, and especially totally overburdened teaching and policy support staff, to resolve this alone, institution by institution, department by department.”

jack.grove@timeshighereducation.com


Reader's comments (6)

Queen Mary "there were 89 suspected cases of AI cheating in 2023-24 – all of which led to penalties – compared with 10 suspected cases and nine penalties in the prior 12 months." Seems to be no one cares, so long as they pay their fees.
Requires a change to the method of assessment. Time and resources would be better invested in this than in trying to identify potential cheats. AI is here to stay and will only become more sophisticated.
A reluctance of many universities to return to on-campus exams (which are not the greatest form of assessment anyway and needed an overhaul) and a general reduction in the number of assessments have meant coursework in most of its forms is exposed to a huge risk of AI use. Combine that with the sheer effort an academic has to go to to show any evidence of misconduct, knowing that 99% of the time nothing will happen so as not to impact progression rates too much, and it’s no wonder AI use is winning in some areas.
A return to formative in-course assessment and final assessments done under controlled conditions is needed if degrees are to retain their value. I would also like to see all graduating students do national literacy and oracy tests, with a separate graded certificate. These would be of particular value to international students, crediting the whole of their experience while in the UK.
Surprised that this article didn’t mention the fact that UUK rejected as a bloc the proposed Turnitin tool for AI writing detection. They had good reasons, but UK academics are left in a hard position trying to robustly detect AI writing.
A return to exam-based assessments is the best solution. I sympathise with universities in terms of the difficulty of detection: there are presently few effective means of definitively detecting AI-generated text, and false positives are common. I’ve run work written years before the proliferation of generative AI through various commercial AI detection solutions and have had much of it falsely flagged as AI-generated, especially more technical essays. This makes me find the claims Turnitin makes about the effectiveness of its AI detection algorithms somewhat dubious.