I thought I could harness AI for the good, but I was wrong. Now what?

Encouraged to use ChatGPT to help them with the hard stuff, my students let it do all their thinking for them. Maybe I should give up, says Dan Sarofian-Butin

November 21, 2024

“I feel like it could be cheating,” one of my college students wrote on the anonymous mid-semester survey when I asked about their AI use. “We aren’t really using our own knowledge, we are using the help of a tool to help us think in a way we are unable to [by ourselves].”

Thinking is hard. It’s especially hard when it comes to the complex and contested topics I teach about – issues of poverty, race, gender and ethics. But I remind my students that if these things were easy to think through, we’d have solved everything and the world would be all rainbows and butterflies.


My students aren’t that good at thinking carefully. I, of course, don’t blame them; as a recent meta-analysis put it, “mental effort is inherently aversive”: in other words, people don’t like to do it. I simply try to push my students beyond their deeply internalised complacency to realise that critical and reflective thinking helps them understand their world better.

The key to this learning process, though, has very little to do with me. Until my students put in the work of thinking, nothing I do – give the best lecture in the world; assign the most profound reading; bring in the coolest guest speaker – will sink in. This, by the way, is why teachers have given tests and assigned papers for the past hundred years: we assumed that students’ work indicated their level of thinking and, therefore, their level of learning.

Which brings me, of course, to ChatGPT.

The reality today is that my student’s response is an outlier in its unease about AI use. Half of my students use some form of AI in most of their classes – and 80 per cent tell me that their professors have no clue they’re doing this. The evidence – in anecdotal musings, universities’ own data, and peer-reviewed research – is pretty clear that we are in the midst of a cheating tsunami.

For a little while, I thought I could ride the wave. As I wrote in Times Higher Education and elsewhere, I embraced the use of AI in my classroom and urged my colleagues to do the same. I saw how powerful the technology could be as a personalised, real-time and adaptive tutor and mentor, helping students “cognitively offload” the hard stuff so they could make incremental progress.

But I am here to tell you, dear reader, that I have come to accept that it’s a losing battle.

What began during the pandemic as lowered academic standards has been turbo-charged by AI into what I think of as full-scale “cognitive outsourcing”. Students can and do just press a button and instantaneously receive a finished paper on just about anything. And the reality is that I can’t “AI-proof” my assignments. If you don’t believe me, just play with Grammarly Pro or the “canvas” feature in ChatGPT for a few minutes. These tools instantly transform students’ jumbled writing into clear, articulate and accurate prose.

I still stand in my college classroom and fight the good fight. I show my students how to use AI correctly and demand that their thinking be mirrored in their writing, no matter how imprecise or unfinished it is. Writing is thinking, I tell them. Using AI doesn’t just short-circuit the thinking process; it shuts off the entire master switch.

I really don’t want to sound like Chicken Little. But “cognitive automation”, says one study published last year, “exacerbates the erosion of human skill and expertise”. Another study published in June is even grimmer: “While technology enables streamlining of some cognitive tasks, reliance upon it appears to be actively eroding key markers of complex human cognition over time.” The sky really is falling.

Maybe I should give up completely. Why, I wonder, should I be the last old-school professor standing? Why should I hold the line, demanding my students improve their thinking when it’s clear by now that they will be able to use ever-more-powerful versions of generative AI in all their other classes and in their future workplace to make up for their lack of knowledge and critical capacities? Maybe thinking is overrated?

Let me be brutally honest. I thought I knew what my job was. But I’m kind of unable to think it through by myself any more.

Dan Sarofian-Butin is a professor of education at Merrimack College, Massachusetts.

Reader's comments (5)

I’ve heard it said many times that people avoid doing hard things. Henry Ford is supposed to have claimed that thinking was the hardest work, and that this explained why so few people could be persuaded to do it. For me, Adam Smith, writing in 1776, said it best: ‘The man whose whole life is spent in performing a few simple operations, of which the effects are perhaps always the same, or very nearly the same, has no occasion to exert his understanding or to exercise his invention in finding out expedients for removing difficulties which never occur. He naturally loses, therefore, the habit of such exertion, and generally becomes as stupid and ignorant as it is possible for a human creature to become’. Higher education has to be challenging because it necessarily involves the mental effort to wrestle with ideas that are at the edge of (or just beyond) our understanding. If generative AI tools remove the need for such effort, we should not expect rational individuals to forgo them.
A very insightful perspective. The last sentence about rationality in particular rings true, unless / until students grasp that the real benefits of HE reside in learning rather than accreditation. The need to do this is existential for our sector, though it's difficult to see this being achieved while academic incentives are so heavily oriented towards research rather than teaching.
Here is a radical thought. The focus of education should not be on assessments. This is the old way. You enable people to think, do independent research, evaluate evidence without bias, and equip them with the skills to solve problems as relevant to the various disciplines. The job market or whatever comes after a university education can assess whether someone can perform. Universities should focus less on assessing and focus more on actual educating. You should use anything and everything, including AI if it helps with your education.
@acerpacer I think most would agree that we should move away from universities as accreditation factories. But that doesn't mean that assessment has no place. I'm a runner, and I train much harder if I have a race coming up than if I don't. I'm also a dancer, and again, I practise more when I have a big dance weekend away coming up. Just so with education. Having some kind of specific goal to aim towards can help students. Otherwise we can make education about all the things you say, as much as we like, but without something to motivate students, most would choose to play Xbox or go drinking rather than do even the work they are interested in. I know that I would have. Much as I love my subject, if at 19 you'd said "read this article or put it off to tomorrow and go to the pub", I can tell you that unless I had the motivator of deadlines, I'd never have left the pub. AI can help you with your education. But it can also do your education for you, leaving you with no benefit from having been there.
@acerpacer, that's impossible. If we think removing hard thinking (in the form of building skills for one's own original writing or equation-solving) from higher education is going to cause havoc, imagine removing ALL need for students to demonstrate proficiency. That is the purpose of the assessment: so they can show you what they can do/know. Getting rid of that won't allow us ever to know if students are ready for the workforce. Would you hire an engineer to build a bridge who has never demonstrated they can measure properly? Of course not. So, back to the drawing board.
