Is assessment integrity possible in the age of AI?
The existence of generative AI does not mean that genuine assessment has become impossible. Reframing universities’ approaches and engaging students can enhance integrity
Concerns about academic integrity in assessment have grown since the advent of generative AI tools such as ChatGPT. Ishan Kolhatkar, global client evangelist at Inspera, pointed out in his keynote at the 2024 THE Digital Universities Asia event that integrity has always been an issue for universities and will continue to be as technology evolves. “It is important for our students to be able to leave with qualifications saying they came from a reputable institution that ran its assessments properly,” Kolhatkar said. “But technology is neither the source of the problem nor will it be the solution by itself.”
Rather than retreating from the challenges of AI by returning to paper-based or other traditional modes of assessment, universities need to prepare students for an AI-fuelled world, said Kolhatkar. “Students today – the future leaders of tomorrow – need to understand how they should be using AI tools and learn from a generative AI model that works in the way we want AI to work,” he added. Instead of allowing technology to drive pedagogy, academics should encourage students to learn alongside AI in a way that enables their knowledge to be assessed properly.
Through his own children, Kolhatkar has seen how AI is transforming the world of education. “I see a difference in their work if they have engagement with what they’ve been asked to do,” he said. “If something interests them, I see amazing work. But when it’s less interesting, it’s often regurgitated. We can provide flexible assessments with integrity to our students, but this requires us to redesign, re-engage and re-energise the assessments we’ve already got.”
One of the challenges is that as digital education has evolved at speed in recent years, traditional assessments have strayed further from where they should be, Kolhatkar said. Too many assessments expect students to “just go away and type”, he added. He cited a US study which found that, before generative AI, 70 per cent of students would consider cheating; since its introduction, the figure has remained exactly the same. “All that had changed was the method. The main reason they wanted to cheat was because they were not interested in the task or the assessment,” he said.
Kolhatkar said that a more flexible and realistic approach would involve reframing assessments to allow students to gather information using AI and then measuring how effectively they use it. “Once upon a time, teachers told us we had to learn our times tables because we would not have a calculator in our pocket. Now we have a device with all the world’s accumulated knowledge in our pocket. That does not mean we are all the same. It’s our ability to use that information that differentiates us,” he concluded.
The speaker:
- Ishan Kolhatkar, global client evangelist, Inspera
Find out more about Inspera.