ChatGPT: tool or terminator?

OpenAI’s chatbot has wowed the world by producing astonishingly well-formed written responses to questions. Is it about to turn academia upside down?

January 19, 2023

What is your hot take on ChatGPT, the chatbot that has stunned users with its ability to instantly answer questions with original and highly plausible AI-generated content?

And can it still be considered hot when every possible take has been reheated numerous times in the past few weeks?

Perhaps a more pertinent question is whether you need a take at all, when a coherently argued 500 words to suit almost any angle can be generated at the click of a button by ChatGPT itself.

Want to know how ChatGPT can be implemented in current academic practice? Just ask ChatGPT.


If this sounds facetious, it is not intended to be – the AI tool is, at least superficially, astonishingly good at what it does. The corresponding question, then, is what it doesn’t do – and how good it is at going beyond the superficial.

The temptation with all super-hyped technological developments (think Moocs in 2012) is to leap straight to the conclusion that it means imminent redundancy for existing ways of doing things.


A colleague and his nine-year-old daughter asked ChatGPT to write them a poem about cowpats, for example, and found its offering excellent: no need, then, for fathers and daughters writing fun poems together – there’s now a chatbot for that. Except it doesn’t take much intelligence, artificial or otherwise, to see that this is not a very convincing conclusion.

I saw another example of an AI tool being tested as an automated respondent to people contacting a mental health crisis support line. The ethics of this test notwithstanding, the reported finding was that people in distress found the AI-generated responses to be as helpful as human ones, right up until the moment it was revealed that they were talking to an algorithm – at which point, assurances such as “I’m listening” and “I understand how you’re feeling” fell understandably flat.

A more run-of-the-mill example of ChatGPT’s utility is to be found when it is asked to do simple research-and-write tasks, such as “Tell me more about the trends expected to shape higher education in 2023” (an example posted on LinkedIn).

Its response, in 500 words or so, was a perfectly adequate, if high-level, round-up of what you would expect any reasonably informed person to come up with – a run through such trends as “increased use of online and blended learning” and “emphasis on work-based learning”.

What it definitively was not was a genuinely insightful or analytical piece of writing – it offered up “greater emphasis on internationalisation”, for example, without any nuance or caveat, no inkling that perhaps the state of international higher education today is rather more fraught than it was five years ago, or any exploration of the likelihood or implications of worsening geopolitical tensions over the course of the year.


What it was able to do, though, was offer up a good overview in seconds – which, as the person posting on LinkedIn noted, was far quicker than any individual could have produced a comparable piece, and as such demonstrated its utility as an extension of internet search or virtual assistants in enhancing productivity.

This adds to the sense that once you cut through the “wow” factor of a machine churning out such plausible simulations of human writing, you are left with something that is more tool than terminator, for now at least.

Michael Webb, Jisc’s director of technology and analytics, addressed this point in a piece discussing whether ChatGPT spelled the end for the essay, in which he argued that far from banning AI tools, universities and assessors “should really regard them as simply the next step up from spelling or grammar checkers: technology that can make everyone’s life easier”.


Others, though, do see a greater threat than that, and in our opinion pages this week, we hear from computer science scholar Andy Farnell, who argues that the question academics should really be asking themselves is: “If you cannot tell a machine from a genuine student, what makes you think a student cares whether they’re taught by you or a machine?”

Rather than getting bogged down in conservative v progressive debates about precisely how the use of ChatGPT should be sanctioned in academic settings, he argues, academics should treat this as a moment to “reclaim the ground freed up by machines” – namely, moving on from delivering a homogenised intellectual diet to students who, if they want middle-of-the-road, will be able to get it from an app.

A similar point, and perhaps the pithiest hot take I have seen on ChatGPT, was offered on Twitter:

“I think we have seriously misconstrued the point of education, and seriously misaligned our methods of pursuing it, if a bot that writes essays can even seem like a problem.”


john.gill@timeshighereducation.com


Reader's comments (1)

The emergence of AI and ChatGPT is inevitable evolution. It’s only a threat to educational institutions if they don’t evolve with it. This is an opportunity to finally rid ourselves of traditional assessment formats, which disadvantaged many anyway (by punishing those who are slow to develop the skills to write academically) and were increasingly open to misconduct, with many lazily relying on Turnitin. Unfortunately, I strongly suspect that institutions will be very slow to react, and even to advise staff on how to deal with it. If history tells us anything, it’s that focusing on policy and punitive measures for academic misconduct was not an adequate solution to essay mills. I hope HEIs don’t make the same mistake with AI. The difference here is that AI is going to be incredibly useful for studying and for employment, so hopefully that is recognised – and quickly.
