I thought I could harness AI for the good, but I was wrong. Now what?

Encouraged to use ChatGPT to help them with the hard stuff, my students let it do all their thinking for them. Maybe I should give up, says Dan Sarofian-Butin

November 21, 2024

“I feel like it could be cheating,” one of my college students wrote on the anonymous mid-semester survey when I asked about their AI use. “We aren’t really using our own knowledge, we are using the help of a tool to help us think in a way we are unable to [by ourselves].”

Thinking is hard. It’s especially hard with the complex and contested topics I teach – issues of poverty, race, gender and ethics. But I remind my students that if these things were easy to think through, we’d have solved everything and the world would be all rainbows and butterflies.


My students aren’t that good at thinking carefully. I, of course, don’t blame them; as a recent meta-analysis put it, “mental effort is inherently aversive” – in other words, people don’t like to do it. I simply try to push my students beyond their deeply internalised complacency to realise that critical and reflective thinking helps them understand their world better.

The key to this learning process, though, has very little to do with me. Until my students put in the work of thinking, nothing I do – give the best lecture in the world; assign the most profound reading; bring in the coolest guest speaker – will sink in. This, by the way, is why teachers have given tests and assigned papers for the past hundred years: we assumed that students’ work indicated their level of thinking and, therefore, their level of learning.

Which brings me, of course, to ChatGPT.

The reality today is that this student’s response is an outlier in its unease about AI use. Half of my students use some form of AI in most of their classes – and 80 per cent tell me that their professors have no clue they’re doing this. The evidence – in anecdotal musings, universities’ own data and peer-reviewed research – is pretty clear that we are in the midst of a cheating tsunami.

For a little while, I thought I could ride the wave. As I wrote in Times Higher Education and elsewhere, I embraced the use of AI in my classroom and urged my colleagues to do the same. I saw how powerful the technology could be as a personalised, real-time and adaptive tutor and mentor, helping students by “cognitively offloading” the hard stuff so they could make incremental progress.

But I am here to tell you, dear reader, that I have come to accept that it’s a losing battle.

What began during the pandemic as lowered academic standards has been turbo-charged by AI into what I think of as full-scale “cognitive outsourcing”. Students can and do simply press a button and instantaneously receive a finished paper on just about anything. And the reality is that I can’t “AI-proof” my assignments. If you don’t believe me, just play with Grammarly Pro or the “canvas” feature in ChatGPT for a few minutes. These tools instantly transform students’ jumbled writing into clear, articulate and accurate prose.

I still stand in my college classroom and fight the good fight. I show my students how to use AI correctly and demand that their thinking be mirrored in their writing, no matter how imprecise or unfinished it is. Writing is thinking, I tell them. Using AI doesn’t just short-circuit the thinking process; it shuts off the entire master switch.

I really don’t want to sound like Chicken Little. But “cognitive automation”, says one study published last year, “exacerbates the erosion of human skill and expertise”. Another study published in June is even grimmer: “While technology enables streamlining of some cognitive tasks, reliance upon it appears to be actively eroding key markers of complex human cognition over time.” The sky really is falling.

Maybe I should give up completely. Why, I wonder, should I be the last old-school professor standing? Why should I hold the line, demanding my students improve their thinking when it’s clear by now that they will be able to use ever-more-powerful versions of generative AI in all their other classes and in their future workplace to make up for their lack of knowledge and critical capacities? Maybe thinking is overrated?

Let me be brutally honest. I thought I knew what my job was. But I’m kind of unable to think it through by myself any more.

Dan Sarofian-Butin is a professor of education at Merrimack College, Massachusetts.


Reader's comments (1)

I’ve heard it said many times that people avoid doing hard things. Henry Ford is supposed to have claimed that thinking was the hardest work and this explained why so few people could be persuaded to do it. For me, Adam Smith, writing in 1776, said it best: “The man whose whole life is spent in performing a few simple operations, of which the effects are perhaps always the same, or very nearly the same, has no occasion to exert his understanding or to exercise his invention in finding out expedients for removing difficulties which never occur. He naturally loses, therefore, the habit of such exertion, and generally becomes as stupid and ignorant as it is possible for a human creature to become.” Higher education has to be challenging because it necessarily involves the mental effort to wrestle with ideas that are at the edge of (or just beyond) our understanding. If generative AI tools remove the need for such effort, we should not expect rational individuals to forgo them.
