Nine in 10 UK undergraduates now using AI in assessments – survey

One in four students tell Hepi they use text generated by tools such as ChatGPT in submitted work

February 26, 2025
A woman browses the OpenAI website on her laptop; the screen reads "Ask ChatGPT anything", illustrating students' increased use of AI in their assessments.
Source: Serene Lee/SOPA/Getty Images

Nine out of 10 UK undergraduates who responded to a survey have admitted using generative artificial intelligence in assessments, suggesting use of tools such as ChatGPT has become routine among students.

In a poll of 1,041 undergraduates for the Higher Education Policy Institute, 88 per cent of respondents said they had used generative AI in assessments over the past year – up from 53 per cent in 2024.

Report author Josh Freeman said that this “extraordinary growth” indicated that AI was becoming “an integral aspect in the process of doing exams”.

One in four students said that they had used AI-generated text to help them draft assessments, but students more commonly used ChatGPT and similar tools to explain concepts, summarise relevant articles or suggest ideas.


Students using AI in assessments were not necessarily breaking the rules, with 59 per cent agreeing that their universities had changed how they conducted assessments in response to the rise of generative AI and three-quarters confident that their institution could spot AI use in assessed work.

However, only three in 10 agreed that their institution encouraged them to use AI, with only four in 10 agreeing that AI-generated content would get a good mark.


Students were most likely to say that they used AI tools to save time or improve the quality of their work, and were most likely to say that they would be put off from doing so by the fear of being accused of cheating, or the risk of getting false results – so-called hallucinations.

The report recommends that universities should “continually review” assessments to keep up with the growing power of AI tools.

“Exams which are easy for someone to achieve a high grade in if they have good AI skills, but without engaging deeply with the course, should be immediately rewritten,” writes Freeman, Hepi’s policy manager.

“Closed-book examinations where the questions are predictable (and therefore can be prepared in advance using AI tools) are not immune.”

The report further recommends that universities should “adopt a nuanced policy which reflects the fact that student use of AI is inevitable and often beneficial”, and that staff involved in setting exams “should have a deep working understanding of AI tools”.


“If their assessments are robust, institutions need not fear educating students on the responsible use of AI, where it can genuinely aid learning and productivity,” the report says.

The survey found that the proportion of students reporting using an AI tool for any task had risen from 66 per cent to 92 per cent over the course of a year, and Freeman told Times Higher Education that he expected usage to be near-universal by next year.

However, while 67 per cent of students agreed that it was essential nowadays to understand how to use AI effectively, only 36 per cent said that they had received support from their university with their AI skills.


Only 42 per cent said staff were well equipped to support them in working with generative AI, although this was a significant increase from 18 per cent last year.

Students told Hepi they felt that their university provided them with “mixed messages” over AI, with one student complaining: “It’s still all very vague and up in the air if [and] when it can be used and why. It seems to be discouraged without the recognition that it will form an integral part of our working lives.”

Other students were more sceptical of AI use: one said "it feels like cheating", another that "it will completely ruin what actual work will look like", and a third, "I feel like AI is good but just gives me the answers".

The report also described how a “digital divide” identified in 2024 appears to have widened, with those from wealthier backgrounds more likely to use AI tools.


juliette.rowsell@timeshighereducation.com



Reader's comments (1)

As far as I have been able to tell, students may be able to pass a class by using AI in essay writing. But it seems to produce C-level work: papers that look as though they could be written by inserting information from Google, with vague claims that fall short of the level of detail that comes from actual reading and thinking (those are the B and A papers). We also have more students failing than ever before, and making up citations (always more obvious than they think). This indicates a growing lack of interest in the origins of ideas or ownership of them, which is partly due to the current idea that everybody owns all ideas, so whatever.

As a teacher, I cannot encourage the emphasis on vague, unsupported claims or the notion that nobody ever has original ideas. My job is to help students learn to think. The hype about AI has made actual teaching so much more difficult. I am tired of stories suggesting that I need to help students use this tech. They are using it whether I forbid it or not. I cannot make it operate for them at the level that their own original thinking and creativity, from reading and processing, can. I cannot support extracting them from the actual thinking process. Frankly, the hype about AI should be reconsidered.
