
Open dialogue about AI in HE is the way forward

The assumption that instructors hold negative views on the use of any generative AI means that important conversations aren’t being had

Florian Stoeckel
15 Oct 2024
Image: a student and teacher speaking. Credit: Valeriy_G/iStock.

Created in partnership with

University of Exeter


The higher education sector seems to be catching up with the reality that many, if not most, students are using generative AI tools. One trend at some universities and departments feels like a rather radical change: AI-supported assessments are becoming the norm, while assessments that prohibit the use of AI are becoming rarer and now require justification. I find this shift encouraging because AI use among students was already widespread; at the same time, many of my students have told me they felt uncomfortable even discussing it with instructors. One reason for this is the assumption that instructors hold negative views on any use of generative AI. The result is that neither the ethical nor the unethical use of AI gets discussed, and, in my experience, using AI effectively is not a common classroom topic.

We are at a point where we can think about how to leverage AI most effectively in higher education, and the focus could now shift more often to how that can be achieved. In this context, we should discuss ethical versus unethical use and situations where using AI might harm learning outcomes. Faculty opinions on AI vary widely, just as they do in the population at large: some embrace it fully, while others worry about its potential negative impact on learning. I believe this diversity of perspectives is valuable. Why shouldn’t students be exposed to different viewpoints and encouraged to form their own opinions? What I find important, however, is that students get to make that choice themselves and get plenty of practice to prepare for a job market that will require them to be proficient users of AI.

New institutional policies coming into effect this academic year, which allow much more AI use in many cases, are a big step forward, but they still often seem disconnected from how students are actually using AI and how they could benefit from it. For example, at Exeter, getting AI feedback on English language usage is permitted, but using AI to improve a text and then copying the result verbatim (without putting it in quotation marks) is not. As a non-native English speaker myself, I find that particular use incredibly helpful.

Also, in many situations, prompts and outputs have to be documented in great detail. While there are good reasons for this expectation, producing this documentation is a significant burden for students, and processing it is a significant burden for instructors. I am also concerned that instructors’ views on AI might unconsciously affect how student work is assessed, so that students who rely heavily on AI, even when they do so ethically and appropriately, end up being marked down.

Parts of the conversation also still focus heavily on detecting GenAI use that was prohibited or not declared according to university rules. This is a tricky issue, with views ranging from the position that it is impossible to reliably detect what was written by GenAI to the belief that some software might be able to do so. Most simple solutions do not give a (legally) reliable assessment of whether something was written by GenAI and risk a rather high rate of false positives (ie, work being flagged as AI-generated when it was not).

More importantly, faculty time spent on policing is time not spent discussing how to leverage AI most effectively and ethically, or reflecting critically on the problems of AI use, such as biases, energy consumption, property rights and so on. While I understand that fairness considerations require policing policies to be in place, we might benefit from a more lenient approach that puts students at the helm: we open the discussion on those key problems, but largely let students decide how they want to approach using AI. This includes asking, for instance: what is the issue with relying on GenAI, which does not reference the people whose ideas it offers? How do we risk reproducing stereotypes, and what are the consequences for (global) energy consumption and sustainability goals?

A major concern is the accuracy, or rather, inaccuracy, of GenAI output. While this is a very important issue, inaccuracy tends to be a problem mainly when GenAI is used for applications it is not (yet!) good at. For refining grammar and sentence structure, AI tools are highly effective. In these cases, accuracy isn’t a significant concern, as AI excels at language rules; since it is improving clarity rather than generating new content, there is little room for “hallucinations”. When AI is used to brainstorm or develop a framework for a topic, accuracy takes a back seat as well: AI usually offers valuable starting points that, while not always complete, are rarely factually incorrect.

AI can also help with data analysis, for example by assisting with syntax, and analyses can be run using ChatGPT’s built-in data analysis module. While caution is warranted when users know little about data analysis, accuracy is not much of a problem here either. The main concern arises when AI is tasked with providing expert-level knowledge, citations (eg, for a literature review) or any other kind of specific information. AI still struggles in this area. While newer models (such as OpenAI’s o1-preview) and tools such as Perplexity provide citations and are a step forward, they often still lack the breadth and depth we might want to see in advanced student work. The tools might, of course, catch up quickly.

Lastly, I’m deeply concerned about the potential inequality that a shift to allowing or even encouraging GenAI might create. Some students aren’t using AI or aren’t comfortable trying it. Without proper training, we risk widening the gap between these students and their peers who are more adept with AI tools. In fact, allowing students to use AI could create pressure for everyone to rely on it in order to avoid falling behind. Additionally, paid AI options offer more features than free ones. If universities are going to encourage the use of AI, they should consider providing access to premium tools for all students, much like they do with MS Office or data analysis software licences.

Florian Stoeckel is an associate professor in political science at the University of Exeter.

If you would like advice and insight from academics and university staff delivered direct to your inbox each week, sign up for the Campus newsletter.
