Universities must compel students to detail how they use AI in assignments

Institution-wide policies must affirm the necessity of engaging critically with AI tools in ways that foster human agency, writes Joseph Moxley

January 13, 2025
[Illustration: a man lifts a cage from a robot, with a university building in the background. Source: iStock montage]

Colleges and universities’ failure to create institution-wide policies that define acceptable and unacceptable uses of AI puts students at risk on two fronts: they may be accused of academic dishonesty, and they are likely to be left underprepared for the workplace.

A recent study led by Hui Wang at the University of Arizona found that, of the top 100 US universities, more than a third had unclear or undecided policies on AI use and more than half left decisions to individual instructors.

Leaving it to faculty makes some sense, not least because it could be seen to be demanded by academic freedom – which the American Association of University Professors (AAUP) defines as “the right of the faculty to select the materials, determine the approach to the subject, make the assignments, and assess student academic performance in teaching activities for which faculty members are individually responsible”.

Yet faculty are deeply divided over whether use of AI constitutes academic dishonesty. Lance Eaton, director of faculty development at College Unbound, has collected 164 faculty members’ AI-policy statements from institutions across the globe. Eaton’s corpus demonstrates widespread disagreement about whether AI tools should be prohibited, permitted or encouraged. Some faculty – particularly in STEM and business – permit AI unconditionally; others allow its use only for specific tasks, such as research or editing. Humanities faculty tend to prohibit AI-assisted writing entirely, considering it unethical and contrary to academic integrity policies.

Facing multiple and often conflicting guidelines, students may be unsure of when and how they can ethically incorporate AI tools into coursework. In turn, faculty may be unsure of when and how AI can be introduced into the curriculum, or how they can use it for grading, teaching and scholarship.

Nor do the guidelines for citing AI use produced by the American Psychological Association (APA) and the Modern Language Association (MLA) offer much assistance. They don’t account for the extent of human labour involved or the varying degrees of AI assistance, making it challenging to accurately attribute authorship or assess the integrity of the work.

Moreover, by requiring writers to cite every phrase generated by AI, both sets of guidelines fail to address the realities of how writers use AI today. For instance, a single phrase – a metaphor, a hypothesis, the gist of an argument – may emerge from interactions with multiple AI tools, such as Elicit, Consensus, Perplexity, Inciteful or LitMaps, which assist in searching, visualising relationships between academic papers and identifying key texts and influences. Writers may develop the phrase further by using NotebookLM to generate podcasts on foundational readings. Expecting writers to cite what might be a dozen AI tools is impractical. And the MLA’s additional requirement to list the prompts used to generate the phrase could result in a two-page article being followed by 20 pages of prompts.

In its recently published position statement, “Building a Culture for Generative AI Literacy in College Language, Literature, and Writing”, the MLA’s joint task force with the Conference on College Composition and Communication (CCCC) argues that first-year writing courses “have a special responsibility to teach students how to use AI critically and effectively in academic situations and across their literate lives”. But by assigning this responsibility primarily to first-year writing courses, the guidelines inadvertently marginalise AI literacy.

[Illustration: a student holds a map with arrows pointing in various directions, against a background of a lecture theatre and a circuit board. Source: iStock montage]

These failures by universities and professional associations undermine a core mission of modern research universities: to prepare students with the literacy competencies they need to prosper in a workplace that is being transformed by AI. According to Microsoft’s 2024 Work Trend Index, based on a survey of 31,000 workers across 31 countries, 75 per cent of knowledge workers now use AI at work – a share that nearly doubled in the six months before the survey.

Moreover, students themselves have quickly embraced AI. In a 2024 survey by the Digital Education Council of nearly 4,000 students across 16 countries, 86 per cent reported using AI for academic purposes. But a remarkable 80 per cent felt their universities’ AI integration into the curriculum didn’t meet their expectations, and 72 per cent felt their universities should provide more AI training.

I share my colleagues’ concerns about AI. It seems unethical to me that OpenAI and other companies have absorbed vast amounts of internet content – including copyrighted material – to train their AI models without permission or compensation to the original creators. As an author of articles at Writing Commons – an open-education project and encyclopedia for writers – I’m upset that my work was scraped without my consent. It took me decades to write those articles. Likewise, I don’t believe it’s ethical for academic publishers like Taylor & Francis to sell faculty members’ scholarship without obtaining our permission.

I’m fearful about the environmental impact of AI systems, particularly their contributions to global warming and water consumption. I worry about the nuclear power plants that Google, Amazon and other megatechs are investing in to run mammoth data centres.

I worry that AI will limit human agency. It troubles me that most technology experts surveyed by researchers from Elon University believe AI will erode critical thinking; reading and decision-making abilities; and healthy, in-person connectedness, leading to more mental health problems.

And yet here we are.

As of this year, GPT-4 can write as well as a smart high school student, scoring in the 93rd percentile in the SAT Evidence-Based Reading and Writing test. Recently, OpenAI o1, a new AI model the company claims can reason its way through complex tasks, scored a 124 on the Norway Mensa IQ test, which places it securely in the “above average or bright” category of human intelligence.

We cannot ignore these dramatic changes to meaning-making and literacy practices. Teaching AI literacy today is akin to teaching reading and writing in the era following the invention of the printing press.

It’s understandable, though, that teachers worry that AI-assisted writing may undermine students’ writing and critical thinking competencies. In fact, these outcomes seem likely if students interact with AI systems as passive consumers, simply offering up regurgitated content for assignments without genuine engagement.

To address these concerns, university-wide AI policies must affirm the necessity of creating an environment where students and faculty engage critically with AI tools in ways that foster human agency. These policies must affirm that writers can best develop their reasoning and improve their communications by engaging in internal dialogues with themselves about what they want to say and how they need to say it – dialogues enriched by internalising feedback from AI tools, alongside feedback from teachers, peers, clients or others. Viewed from this perspective, AI is a tool, not a replacement for human writing.

To preserve academic freedom, university AI policies should permit faculty to reject AI-assisted writing. Just as some photographers still prefer analogue film to digital files, some teachers may never want to engage with AI-assisted writing. But while universities should not require faculty to teach critical AI literacy, they should encourage faculty and students to experiment and research ways AI tools can be used to facilitate critical thinking, composition and human agency.

[Illustration: a student and a robot hold a brain in a classroom. Source: iStock montage]

To preserve academic integrity and accurately measure student effort, university AI policies should require students to attach a footnote to their coursework that elaborates on how they used AI: as a research assistant for gathering and synthesising sources, for instance; as a composition assistant for prewriting, drafting or organising; or as an editor for polishing prose, conforming to standard written English or ensuring proper referencing.
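Such a footnote need not be elaborate. A hypothetical disclosure – the wording, tools and course details here are illustrative, not prescribed by any existing policy – might read:

    AI-use statement: I used Elicit to locate six of the sources cited in my literature review, and ChatGPT to outline the second section and to copy-edit the final draft for grammar and APA formatting. All claims, analysis and final wording are my own, and I verified every AI-suggested reference against the original text.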

Additionally, university AI policies should require students to archive the chat logs associated with their coursework. If they wished, teachers could then review these logs to assess whether students engaged critically and thoughtfully with AI tools. Credit should be awarded only for AI-assisted submissions whose AI footnote or chat logs demonstrate that students thoroughly reviewed and refined the AI-generated content, showing meaningful interaction with the tools and oversight of every word. Strict penalties, including course failure, should be enforced for submissions that show no evidence of human engagement, such as uncritically accepted hallucinated references or formulaic prose.
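What counts as an archive can be kept simple. One possible record per conversation – the fields and course details below are invented for illustration – would let an instructor scan a student’s process at a glance:

    Assignment: ENG 201 literature review
    Tool and date: ChatGPT, 10 January 2025
    Purpose: editing – tightening topic sentences
    Link to conversation: [shared chat URL]
    Student note: rejected the model’s suggested thesis; kept two of five sentence-level edits

Many chat tools already offer shareable conversation links or data exports, so the burden on students would be modest.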

The bottom line is that if we keep acting as if old-school conceptions of authorship, composing and academic integrity still apply, we risk surrendering our agency and creativity to machines. It’s time to look up and fight for human agency and creativity. Writing has changed, and so must we.

Joseph M. Moxley is professor of English at the University of South Florida and a specialist in rhetoric and technology.
