Generative artificial intelligence is increasingly pervasive in higher education, embedded in everyday applications and already forming part of staff and student workflows.
Yet doubts abound. On the teaching side, colleagues are concerned about the technology’s potential to enable plagiarism, while also being excited at the prospect of its taking on lower-level work, such as expediting basic computer coding, to make space for more advanced thinking. On the research side, various techno-solutions are being pushed that promise to speed up crucial processes, such as summarising reading, writing literature reviews, conducting thematic analysis, visualising data, doing referencing and even peer reviewing.
Saving time is all well and good. But efficiency is rarely the core norm driving scholarship. The only way to know if and how to adopt generative AI into our teaching and research is through openly deliberating about its impact on our main values with colleagues and students. Those values must lead and shape the technology we use, not the other way around.
Academic excellence is often posited as the core value of scholarship. The use of generative AI, where it facilitates knowledge generation, can be in line with this core value, but only if it doesn’t jeopardise our other scholarly values. For students and scholars alike, understanding how scholarship is produced is just as important as knowing what has been produced, but the adoption of generative AI takes that understanding away. Learning-by-doing is a pedagogical approach that applies just as much to the student as to the established scholar. It is often slow, discombobulating, full of mistakes and inefficiencies, yet it is imperative for creating new scholarship and new generations of scholars.
AI transformers like ChatGPT are here, so what next?
To live in a world full of AI, our students also need to learn to do without it. We need to ensure everyone understands the key skills underpinning scholarship. This means that zero-AI assessments (such as invigilated exams) are likely to remain a core part of student assessment.
The initial enchantment of generative AI has also distracted us from the complex ethical considerations around its use. We are ever more aware, for instance, that many large language models have been trained, without permission or credit, on the works of numerous knowledge sectors, including the academy. Given our cultural norm of citation – acknowledging the ideas of others, showing how ideas are connected and elaborating the context of our writing – it is uncomfortably close to hypocritical to rely on research and writing tools that do not reference the works on which they are built.
Then there is the sustainability issue. A typical conversation with ChatGPT, with 10 to 50 exchanges, requires a half-litre of water to cool the servers, while asking a large generative AI model to create an image requires as much energy as fully charging your smartphone’s battery. Such environmental consequences should give us pause when we could do the same tasks ourselves.
Research ethics are about representing the world well, with empathy, intellectual honesty and transparency. Generative AI complicates all of these.
Empathy is often created through proximity to our subjects and stakeholders. Generative AI, as the machine in the middle, disrupts that process. Moreover, its black-box nature means we cannot know exactly how it gets to the patterns it identifies in data or to the claims it makes in writing – not to mention that these might be hallucinations. Generative AI may be trained on elite datasets and thus exclude minoritised ideas and reproduce hierarchies of knowledge, as well as biases inherent in this data.
Research integrity means honesty about how and when we use generative AI, and scholarly institutions are developing model statements and rubrics for AI acknowledgements. Still, a fuller consideration of research ethics raises questions about how harms may be perpetuated by its use.
Nor should we neglect the effect of AI use on the pleasure we get from research. As academics, we don’t talk enough about this, but our feelings animate much of what we do; they are the reward of the job. Of course, research can be deeply frustrating. But think of the moment when a beautiful mess of qualitative data swirls into a theory, or the instant in the lab when it becomes clear the data is confirming the hypothesis, or when a prototype built to solve a problem works. These data eurekas are followed by writing eurekas: the satisfaction of working out an argument through writing it out, the thrill of a sentence that describes the empirical world just so, the nerdy pride of wordplay. Generative AI use risks depriving us of these emotions, confining our contributions to the narrower, more automatic work of checking and editing.
Where the line is drawn on AI’s involvement in teaching and research will no doubt depend on different disciplinary traditions, professional cultures and modes of teaching and learning, so departments and faculties need autonomy to decide which uses of AI are acceptable to them and which are not. To that end, all of them must take up the opportunities created by generative AI’s emergence to reflect on and renew our academic values – including education, ethics and eurekas. Those are the best measure for making these vital decisions.
Ella McPherson is associate professor of sociology at the University of Cambridge and deputy head and director of education at the university’s School of the Humanities and Social Sciences (SHSS). Matei Candea is professor of social anthropology at Cambridge and academic project director for technology and teaching at SHSS. This article is based on their Manifesto and Principles on AI and Scholarship, prepared with support from Megan Capon.