AI poses threats to education, ethics and eureka moments

The sudden rise of generative AI offers an opportunity for reflection and renewal of our scholarly values, say Ella McPherson and Matei Candea

19 March 2024
Illustration: Archimedes unveils a circuit board from behind a curtain
Source: Getty Images/iStock montage

Generative artificial intelligence is increasingly pervasive in higher education, embedded in everyday applications and already forming part of staff and student workflows.

Yet doubts remain manifold. On the teaching side, colleagues are concerned about the technology’s potential to enable plagiarism while also being excited about the prospect of its handling lower-level work, such as basic computer coding, making space for more advanced thinking. On the research side, various techno-solutions are being pushed that are meant to speed up crucial processes, such as summarising reading, writing literature reviews, conducting thematic analysis, visualising data, doing referencing and even peer reviewing.

Saving time is all well and good. But efficiency is rarely the core norm driving scholarship. The only way to know if and how to adopt generative AI into our teaching and research is through openly deliberating about its impact on our main values with colleagues and students. Those values must lead and shape the technology we use, not the other way around.

Academic excellence is often posited as the core value of scholarship. The use of generative AI, where it facilitates knowledge generation, can be in line with this core value, but only if it doesn’t jeopardise our other scholarly values. For students and scholars alike, understanding how scholarship is produced is just as important as knowing what has been produced, and the adoption of generative AI takes away that understanding. Learning-by-doing is a pedagogical approach that applies just as much to the student as to the established scholar. It is often slow, discombobulating, full of mistakes and inefficiencies, yet it is imperative for creating new scholarship and new generations of scholars.

To live in a world full of AI, our students also need to learn to do without it. We need to ensure everyone understands the key skills underpinning scholarship. This means that zero-AI assessments (such as invigilated exams) are likely to remain a core part of student assessment.

The initial enchantment of generative AI has also distracted us from the complex ethical considerations around its use. We are ever more aware, for instance, that many large language models have been trained, without permission or credit, on the works of many knowledge sectors, including the academy. Given our cultural norm of citation – acknowledging the ideas of others, showing how ideas are connected and elaborating the context of our writing – it is uncomfortably close to hypocritical to rely on research and writing tools that do not reference the works on which they are built.

Then there is the sustainability issue. A typical conversation with ChatGPT, with 10 to 50 exchanges, requires a half-litre of water to cool the servers, while asking a large generative AI model to create an image requires as much energy as fully charging your smartphone’s battery. Such environmental consequences should give us pause when we could do the same tasks ourselves.

Research ethics are about representing the world well, with empathy, intellectual honesty and transparency. Generative AI complicates all of these.

Empathy is often created through proximity to our subjects and stakeholders. Generative AI, as the machine in the middle, disrupts that process. Moreover, its black-box nature means we cannot know exactly how it gets to the patterns it identifies in data or to the claims it makes in writing – not to mention that these might be hallucinations. Generative AI may be trained on elite datasets and thus exclude minoritised ideas and reproduce hierarchies of knowledge, as well as biases inherent in this data.

Research integrity means honesty about how and when we use generative AI, and scholarly institutions are developing model statements and rubrics for AI acknowledgements. Still, a fuller consideration of research ethics raises questions about how harms may be perpetuated by its use.

Nor should we neglect the effect of AI use on the pleasure we get from research. As academics, we don’t talk enough about this, but our feelings animate much of what we do; they are the reward of the job. Of course, research can be deeply frustrating. But think of the moment when a beautiful mess of qualitative data swirls into a theory, or the instant in the lab when it becomes clear the data is confirming the hypothesis, or when a prototype built to solve a problem works. These data eurekas are followed by writing eurekas: the satisfaction of working out an argument through writing it out, the thrill of a sentence that describes the empirical world just so, the nerdy pride of wordplay. Generative AI use risks depriving us of these emotions, confining our contributions to the narrower, more automatic work of checking and editing.

Where the line is drawn on AI’s involvement in teaching and research will no doubt depend on different disciplinary traditions, professional cultures and modes of teaching and learning, so departments and faculties need autonomy to decide which uses of AI are acceptable to them and which are not. To that end, all of them must take up the opportunities created by generative AI’s emergence to reflect on and renew our academic values – including education, ethics and eurekas. Those are the best measure for making these vital decisions.

Ella McPherson is associate professor of sociology at the University of Cambridge and deputy head and director of education at the university’s School of the Humanities and Social Sciences (SHSS). Matei Candea is professor of social anthropology at Cambridge and academic project director for technology and teaching at SHSS. This article is based on their Manifesto and Principles on AI and Scholarship, prepared with support from Megan Capon. 

Reader's comments (3)

Generally a good article: the points on the value of scholarship and eureka moments are well made, but the opening gambit that academia is "excited" about AI is unwarranted. The problem with reducing "lower-level" work is that it is often unclear where low-level work stops and more critical work begins. You mention visualising data, yet the choice of visualisation depends critically on understanding the data and your reason for visualising it. Note the word "understanding", the key part AI lacks. The ethics of where AI gets its data is raised, but you miss the elephant in the room: trust. It is not just where the data came from but how and why the AI joined specific bits of data together. LLMs are black boxes that cannot provide an audit trail or explanation of what was done and why, so I see no reason to trust the output of the box. Indeed, it is not clear that the result is reproducible in any sense. I see no reason to get excited about an unreliable, untrustworthy tool, use of which could cost more time than it saves and cause reputational damage. Academia is supposed to be concerned with knowledge and thought; when it comes to AI there is a distinct lack of either.
Generative AI (gAI) is a tool. We need to learn (and teach) how to use it correctly and when it is appropriate to use it at all. This includes the intelligent selection of prompts for the gAI and, even more vital, critical analysis of what it produces, then reasoned choices as to what, if any, of the output we want to include in our work. Students need to 'show their working' by including the prompts used and their analysis of the output in any piece of work where they want to use gAI. A final-year undergraduate computer science student asked me just today about using gAI to assist with code. I suggested comparing the gAI code with what they'd written (which already works), deciding which was better, and talking about it in the report that accompanies the code they were working on. We don't complain about students using spell check or even Grammarly; I don't think gAI should be chucked out with the bathwater either.
AI and any or all of its components (which must be distinguished from each other), used well, has absolutely NO RELATIONSHIP to ethics and, come on now, "eureka moments". It is 2024, not 1924. Education will benefit from proper adoption, which must be both exemplified AND taught. The same reaction greeted cave painting, early alphabets, printing, the telegraph, the typewriter, radio, TV, computers... Please!