
We have to rethink academic integrity in a ‘post-plagiarism era’

What is the future of plagiarism as a concept in the AI age, and what are the implications for assessment? This resource seeks to answer these questions, among others.

Karen Kenny
15 Jan 2025
[Image: students in an exam hall being assessed. Credit: iStock/Drazen Zigic]

Created in partnership with the University of Exeter

Plagiarism as a concept is relatively recent, and its definitions have evolved over time. It is widely known that Shakespeare, for example, took whole passages from earlier works and included them in his own writing. However, we do not generally accuse him of academic misconduct. 

AI tools are here to stay and will only become more sophisticated; it is futile to try to ban their use. We already accept some AI-assisted work in student submissions, but find it increasingly difficult to define what is acceptable and what is not. The concept of “post-plagiarism” first emerged in Sarah Eaton’s book Plagiarism in Higher Education and offers a way to move beyond defining plagiarism, focusing instead on ethical learning. As educators, we must therefore learn how to teach and assess in a world where descriptions of academic conduct have changed.

Eaton’s tenets of post-plagiarism 

Hybrid human and AI writing will become normal: little, if any, writing created now is entirely free of AI input. Every time I begin to type, an AI tool is there to “help” me, whether through auto-correct, spelling and grammar checking, or some other feature. Most written content produced today is generated, at least in part, with AI tools.

Human creativity is enhanced: AI can be a stimulus to expand our thinking further. There are issues here, though. AI is “trained” using the creativity of others and its content often lacks attribution. Creative artists are concerned that their work is being hijacked, or that the tools can produce output that renders many creative roles obsolete. There is also a very real fear that the convenience of AI may make us lazy. 

Language barriers disappear: AI is the new Babel fish, a fictional species in Douglas Adams’ The Hitchhiker’s Guide to the Galaxy that is inserted into the ear and translates any language into the first language of the wearer. I delivered a session to international students recently as part of their induction week. One table had a tablet that was translating everything I said, as I said it. Last year those students would likely not have understood me, requiring repetition and delaying delivery; this time they were immediately up to speed with what I said and the tasks at hand. 

But who is checking the accuracy of the translations? With any translation there is potentially inherent bias, whether intentional or not. Much has been written about bias in AI, and we could be at risk of further entrenching it by relying on these tools.

Humans can relinquish control but not responsibility: when we allow AI to produce content for us, we relinquish control and the ideas cannot be called our own. However, when that piece of work is disseminated, in whatever way, we are still responsible for its production. We, as human “authors”, must be responsible for what is published in our name. 

Historical definitions of plagiarism no longer apply: as discussed above, plagiarism was a concept for its time, but that time is past. We need to develop a new way to assess our students’ work. The use of AI tools can be seen on a continuum, from “no AI input” through to “entirely AI generated”. While as a community we try to establish where ethical use sits on that continuum, the world moves on, and AI is everywhere. 

Attribution remains important: in the post-plagiarism era, we need to support students to both develop and demonstrate their academic integrity. Attribution is a key element of that process. GenAI is another source, and as such must be properly attributed. A declaration form can help with this.

Implications for assessment

Students will not simply wait for us to come up with an “acceptable” way to integrate AI into their work. Like some of our colleagues, and many people in other industries, they are already using AI tools.

When we attempt to “police” AI use and punish students when it is detected, we punish the very students we are committed to supporting: underrepresented students, those with less disposable income to subscribe to the best tools, and those whose language skills do not allow them to edit AI output to make it less recognisable. A punitive strategy is not the way forward.

How can we assess in a post-plagiarism era?

David Carless suggests that there are three purposes of assessment:

To support student learning

  • By working with students as partners to develop assessment rubrics and to self- and peer-assess, we are supporting learning in both the topic and assessment process.
  • Use AI-generated responses as a learning tool. Academics can input an assessment task into a GenAI tool during a session and then share the output with students for critique.
  • Assess the process, not the product. When we come to accept that hybrid work will be the norm, it becomes important to find a way to assess a student’s thought process: the steps they have taken to direct the tool they have used.

To judge the quality of student learning

Develop assessment that supports academic integrity. This can be done by:

  • Requiring drafts of work with reflections throughout the learning process.
  • Using portfolios where students demonstrate their learning journeys.
  • Setting performance-based assessments such as presentations, hand-crafted posters, artwork, musical performances and prototype building.

To satisfy the demands of accountability

We need to be able to demonstrate that the assessment we deploy meets the intended learning outcomes of our module or programme, and that we have incorporated measures to ensure academic rigour. 

Karen Kenny is a senior educator developer at the University of Exeter. 

If you would like advice and insight from academics and university staff delivered direct to your inbox each week, sign up for the Campus newsletter.
