Assessment and curriculum design can’t ignore how students use AI

Instead of viewing GenAI only as a threat, we should embrace it as an opportunity to reform our outdated approaches, says Dorottya Sallai

July 23, 2024

Like it or not, the vast majority of students are already using generative artificial intelligence (GenAI) tools in their learning and assessments. Yet most courses continue to ban the technology, and educators remain locked in fierce debate about how to detect its use as traditional plagiarism detection methods prove increasingly inadequate, rendering misconduct regulations unenforceable.

This is a ludicrous situation to be in. Ignoring the changes that GenAI has brought to our education system only lets our students down. They use these tools in their everyday lives and want to use them in the classroom as well, yet they are often just as lost as we are. We need to teach them how to use these tools responsibly, with integrity, to enhance their learning.

With a team of colleagues, I recently surveyed and studied the GenAI habits of 220 students on four undergraduate and three postgraduate courses – both quantitative and qualitative – at the London School of Economics. Among other things, we asked students to create a chat log of all their course-related GenAI conversations and share this, and their brief reflections, with us each week. On some courses, we also allocated in-class time for students to work independently on challenging tasks using ChatGPT, while limiting free web browsing and peer interactions.

The widespread reliance on GenAI among students presents a double-edged sword. On the one hand, these tools offer unprecedented assistance. For instance, we found that students often used them to summarise assigned course texts, reducing the effort required to get through long reading lists.

On the other hand, relying on chatbots without proper guidance can lead students to miss developing essential critical thinking and problem-solving skills. Our study reveals that students often use GenAI tools merely to manage workloads, rather than to deepen their understanding of subjects.

This is not merely a matter of convenience or laziness. It is often a coping mechanism for the pressures of contemporary academic life. Labelling the use of GenAI as simple academic misconduct misses that broader issue. Still, the pedagogical implications of GenAI reliance are potentially severe.

Students frequently accept chatbots’ responses unquestioningly, mistaking authoritative tones for factual accuracy and failing to use their own common sense. This misplaced trust can result in lower-quality work. In our study, for example, some students submitted AI-generated copy that, while functional, did not meet the assignment’s objectives.

To address these challenges, we must rethink our approach to curriculum and assessment design, especially for non-invigilated tasks such as essays, open-book assignments, problem-solving questions and case studies.

We should assume that students are using GenAI, even if we don't use these tools on our own courses, and we should teach them to be critical of its outputs. That might require developing teaching faculty's own GenAI literacy by providing training, access to data-protected GenAI platforms and technical support. Experimentation with GenAI tools can inspire educators to rethink traditional assessment and teaching methods.

On assessment, we believe that we should separate the learning process from the assessed “product”. Defining the steps students should follow as they develop their assignments and requiring them to continuously document their progress – with or without GenAI’s help – provides opportunities for feedback even if the documentation isn’t graded. Students can also be asked to submit their assessments in stages or to give regular short live pitches, presentations or online video updates.

It is also wise to delay the adoption of GenAI when introducing a new topic or skill. Our preliminary analysis suggests that students benefit most from using these tools when they clearly understand a task’s purpose and have already grasped the basic underlying concepts needed to complete it.

When students in our study received feedback and guidance on using GenAI, some chose to abandon the tools, recognising their negative impact on the quality of their work. Others used them to deepen their interest in the subject, and the quality of their work improved significantly.

This is the kind of best-case scenario we should be aiming for. The reality of GenAI’s widespread use and near-undetectability cannot be ignored or fought against. Instead of viewing GenAI only as a threat, we can and should embrace it as an opportunity to reform some of our outdated approaches.

Progressive leadership needs to recognise, however, that this task cannot be resolved by enthusiastic and hard-working faculty alone. We need a whole-institution approach to transforming curricula and assessment based on innovation and forward thinking.

Dorottya Sallai is associate professor (education) of management at the London School of Economics and Political Science and chair of the LSE’s AI working group. She received the LSE Student Union’s Outstanding Teaching Award last year and was highly commended in the same category this year. The study has been written up in a paper, “Approach Generative AI Tools Proactively or Risk Bypassing the Learning Process in Higher Education”, published on SSRN.
