Should we trust students in the age of generative AI?
A look at how institutions can shape their policies on generative AI tools such as ChatGPT to build trust among students and guide them in responsible use
With the arrival of generative artificial intelligence (AI) such as ChatGPT, should we trust or distrust students?
This is not the right question, because there is no black-or-white answer. Posing it might lead institutions to rely on AI detection tools whose effectiveness is, at best, limited, and that can result in unfair processes and decisions. It might also push students into distress, as the story of Emily, described in Times Higher Education in July, shows. A first-year undergraduate and aspiring lawyer, she was falsely accused of using ChatGPT to write an essay. The decision was based on results from the Turnitin detector, which claimed to identify 97 per cent of ChatGPT-authored writing; however, the details behind the accusation of misconduct were never made available to her. Emily was shocked. She had not used ChatGPT or any other AI tool, and she had to invest time and energy in a deeply anxiety-inducing process before she could prove she was in the right.
This story provides important lessons.
1. Detecting generative AI content is neither simple nor easy. Whatever their claims, AI detectors are not reliable enough, and there are questions over whether they ever will be, especially when asked to distinguish between text created entirely by AI and text that has simply been refined or improved by it.
2. Relying solely on the figures provided by AI detectors is not an appropriate way to check work and might lead to poorly informed decisions. It is necessary to engage in dialogue with the student before making any decision.
3. Detecting AI-authored content might not, in itself, indicate misconduct. For instance, students might have written a first draft, then asked ChatGPT to copy-edit the text to improve the style, grammar and vocabulary.
Designing a policy based on trust and safeguards
For these reasons, at IÉSEG School of Management we have adopted a pragmatic approach to generative AI. Several professors worked with students and administrators to develop a plagiarism policy adapted to the age of AI. This policy explains what is and is not allowed when it comes to using AI tools such as ChatGPT. Its main points are:
- Using AI tools to create content that is submitted verbatim, essentially a copy-and-paste, without any citation or declaration that the work is not the student’s own, is strictly forbidden in all assignments submitted for academic credit. If there is any doubt, the professor or the school’s administration will engage in a discussion with the student. If misconduct is suspected, this could lead to a disciplinary hearing.
- Professors can ask students to use AI tools for assignments. In such cases, students must expressly declare this use, naming the tool and its version (for instance, ChatGPT 3.5 or 4, Midjourney 5 and so on) and indicating where it has been used in the document. Professors can also request the prompts used.
- Students may use AI outputs as primary source material for certain assignments – for instance, if they are studying the capabilities of large language models and want to analyse the models’ responses as part of their work. In this case, they must cite the AI source in the same way as any other piece of evidence, such as an interview or survey, and provide their prompts.
- Students may use AI tools as collaborators or assistants to help them develop initial sketches of assignments or to finalise them. Examples include using AI to find ideas, refine research questions, or check grammar and spelling. It can also be used for translation, helping students who are not native speakers of the language of their studies to focus on thinking and developing arguments rather than on making sure their wording and grammar are fluent. In such cases, students must keep earlier versions of their work so they can prove their authorship at any point. They must also keep the history of their prompts to demonstrate how the AI tool has been used.
Guiding students to ethical and constructive use of generative AI
Any university policy on the use of these tools should continue to evolve in line with technological developments and with how students and faculty adopt the tools. But we also wish to avoid any such policy document becoming so long that no one will read it. Listing every situation in which generative AI should or should not be allowed would thus be difficult, if not undesirable. And, truthfully, no one yet knows where all this will go. ChatGPT was released on 30 November 2022, less than a year ago, and generative AI keeps evolving daily.
That said, it is our mission as university educators to inform and guide students in the effective and ethical use of these tools. We must therefore look beyond the context of formal assessments, as covered by our policy, and give students direction on how to get the most out of these tools to improve their learning and, we hope, their academic success. This means investigating:
- How students can use generative AI to create personalised learning plans adapted to their own pace and way of learning.
- How students with specific learning issues such as dyslexia or attention deficit disorder can benefit from generative AI to learn and perform better.
- How students who struggle in specific courses can rely on generative AI to get a different perspective or presentation of the content that would help them learn.
These questions must be explored jointly and transparently with students, so that we understand how they use these tools and how they think they can benefit from them without crossing the red line. Indeed, our goal is to help students learn better, so that they are not tempted to use these tools inappropriately.
So, back to the initial question: should we trust or distrust our students when it comes to using generative AI? Trust must come with legitimate safeguards, which we have tried to implement while allowing them to evolve. But trust must also be built jointly with students. That is why it is essential to look beyond the simple use or misuse of AI in formal assignments and to guide students in the many varied, complex and ethical applications of generative AI; doing so will itself contribute to building trust.
Loïc Plé is director of pedagogy and head of the Centre for Educational and Technological Innovation at IÉSEG School of Management.