
Voice, agency and style: what goes missing when AI chats back
We need to teach that imperfect but authentic writing is more valuable than sentences that are polished on the surface, argue three US academics. Here, they share surprise findings from STEM and beyond

Engineering students spend most of their time using highly structured mathematics to reach a numbered answer. So, for those like Patrick (an author of this article) – who learned English as a second language and then learned engineering to communicate through mathematics – it’s easy to think that following the rules of spelling, grammar and paragraph construction will result in good writing, just as following mathematical formulas will lead you to a technical solution. It’s also easy to feel that it would be wise to hide your shaky personal voice behind the more polished voice of artificial intelligence.
So, as teachers, we have a new and interesting job: showing students exactly what goes missing when AI chats back. And this lesson is relevant far beyond the classroom.
An exercise with engineering students asks them to revise a rough sentence: “By having car suspension that changes the camber of the wheel during use, higher performance is possible.” Next, the instructor asks the students: “What is automotive ‘high performance’?” Typically, students respond by talking about speed, handling and fuel economy. They can also see that the sentence needs fixing. Certainly, ZotGPT – University of California, Irvine’s (UCI) wrapper around ChatGPT – should be able to help students revise it. In this case, it gives us: “Improved performance can be achieved by using car suspension that adjusts the wheel’s camber during operation.” The subject of the sentence comes first, which helps. But what is “improved performance”? The generative AI has juggled the words syntactically but clarified nothing important. For that, we need a human, who revises the sentence: “More road traction is generated when car suspension changes the camber as the vehicle turns.”
Students get it. They might be smarter than GenAI, after all.
So, what’s the big difference? It’s not better grammar or even clarifying the agent of an action. Our human can specify that a recognisable but ambiguous driving desire – high performance – must, in this case, mean better road traction. That’s what the camber adjustment delivers. GenAI will never be able to read the author’s mind, figure out what they care about in a specific instance or decide accurately what an audience needs to know – “high performance” can mean several things depending on the context.
We can put this another way: voice, agency and style appear differently in the age of AI.
Sure, there are promising applications of GenAI in the classroom and for all sorts of professional writing jobs. Particularly promising is GenAI as a “thinking partner” that can, for instance, play the sceptic when a writer is sharpening an argument. Also promising is GenAI as a trainable machine that can reveal patterns in a natural language corpus that might not otherwise be legible, or not as quickly. At the same time, GenAI capabilities will help us understand how real human intelligence works, decoupling it from the mere appearance of polished prose.
Human intelligence, meanwhile, will show up more prominently in terms of common sense and in a capacity to navigate the world, where writing is one form of intelligence. “We are used to the idea that people or entities that can express themselves, or manipulate language, are smart – but that’s not true,” says Yann LeCun, a New York University professor, senior Meta AI researcher and a 2018 Turing Award winner. “You can manipulate language and not be smart, and that’s basically what LLMs are demonstrating.”
But this lesson in human intelligence can be an especially hard sell to people with low confidence in their academic English.
Perhaps they have a point. Shouldn’t we, as writing instructors, be more open about English language learners using AI to level the playing field linguistically? Our interview with engineering student Mariana Jimenez suggests otherwise.
Mariana doesn’t think her English is perfect. Yet she compares AI output to her middle-school writing, when her goal was to “sound smart”. As she puts it: “In a sense, when you do more complex writing, your writing gets ‘worse’.” Less conventional writing reflects the idiosyncrasies of a person. To her, the “artificially simplified voice” of GenAI lacks real-world knowledge of the subject matter.
So “voice” isn’t about asserting the grammatical first person, and “style” isn’t about puffing up word strings. (In fact, these linguistic functions can also be prompted easily.) Situation sensitivity, including things like audience and exigency, is just as important for STEM writing as it is for any other type of writing.
Mariana has exceptional awareness of her English language use, writing and learning. Many multilingual students, however, struggle to move beyond the attitude that the strength of their academic writing in English depends most on using “the right words” and structuring sentences acceptably. The pressure to showcase advanced linguistic proficiency often leads multilingual students to favour AI-produced writing over their own, believing that ideas not presented in “perfect” academic English lack value.
We need to teach that imperfect but authentic writing is more valuable than sentences that are polished on the surface.
The lingo of writing studies might put it this way: from product, to process, to person. But now GenAI appears to do much of the writing process: with minimal human prompting, it can brainstorm, draft, revise and edit. So, new pedagogical insights and practices may emphasise how the writerly process is tied to the person in all their idiosyncrasies.
We are discovering that these lessons from engineering and academic English classrooms resonate all the way across campus. The era of formalised rubrics for writing assessment will probably give way as GenAI takes hold. At UCI, for instance, our upper-division rubric for writing assessment traditionally has four equally weighted categories: critical thinking and analysis, use of evidence/research, development and structure, and language and style convention. We will probably drop the word “convention” from the style category, then add a fifth category, “voice”, as the marker of distinct human intelligence.
And then beyond campus. In this article, we are making the case most immediately for the importance of the personal, the variable – even the erroneous – when it comes to technical and other sorts of writing pedagogy, which surprises most students and some instructors. But GenAI also highlights how the personal plays a crucial role in expert-level science writing and discourse; Stanford computer science PhD student Weixin Liang and co-authors make the case for the personal idiosyncrasies of peer review in contrast to the homogenising effects of GenAI (their study analyses a fourfold 2024 uptick in adjectives such as “commendable” and “innovative” in a large set of professional conference peer reviews).
Finally, we remember one of those massively attended happy-hour instructor meetings where, amid the GenAI panic of 2023, Patrick calmly raised his hand and said he wasn’t worried. Sure, there were plenty of neat things on the horizon, but since he doesn’t teach his engineering students product, or even process as conventionally understood, he didn’t see how GenAI would get in the way.
Patrick said he teaches “confidence”, which is the dispositional home of somebody’s voice, agency and style. It takes root in students before their shared experience in a class, and it lasts long after. Now, in the age of AI, we understand like never before the distinctly personal arc of writing, which is the only place confidence will ever appear, even in STEM.
Qian Du is an associate professor of teaching and former associate director of the programme in global languages and cultures; Daniel M. Gross is professor of English and campus writing and communication coordinator; and Patrick Hong is a lecturer in the Henry Samueli School of Engineering and a former executive editor for Road & Track magazine; all are at the University of California, Irvine.