Banning ChatGPT in unsupervised exams ‘pointless’

Generative AI is ‘a bit like teenage snogging – everyone’s doing it although no one really knows how’

July 1, 2024

Australia’s oldest university is contemplating banning academics from prohibiting the use of artificial intelligence in most assessed work, a forum has heard.

Danny Liu, professor in educational innovation at the University of Sydney, said trying to keep AI out of unsupervised assessment tasks was pointless because its use was ubiquitous.

“It’s this kind of secret thing that happens behind the scenes,” he said. “It’s a bit like teenage snogging. Everyone’s doing it [although] no one really knows how.”

That includes academics, Professor Liu said. “We’re ashamed to admit that sometimes we use AI in our work. We need to break down that barrier. This is not a shameful thing. This is just a tool. We all need to kind of get on with it. We need to bring this teenage story out in the open and just embrace it.”

Addressing the New South Wales Higher Education Summit at UNSW Sydney, he said decisions around permissible AI use needed “to happen from the top down”. From next year, he said, Sydney will “most likely disallow academics to prohibit AI in assessments which are not secured”.

“We feel that’s the only way forward,” he said. “In unsecured assessment, there’s no point saying you can…use AI for x and y but not z. [It] will hopefully…encourage a lot of academics to come forward and say, ‘OK, I need to deal with this now’.”

Professor Liu said that while generative AI was new technology, the questions it raised around assessment were longstanding. “For centuries, we’ve just relied on product as a proxy for learning. The fact that somebody hands in a beautifully written essay does not itself prove that they wrote it.”

He said assessment had long relied on the “trust” academics developed in their students by spending time with them. The “supervision process” remained key in an age of AI, helping academics determine whether they could take their students’ work at face value.

“That requires universities to really rethink where the resources are going. What are we privileging in terms of the cost of assessment versus cost of learning?”

Alex Steele, UNSW’s director of AI strategy for education, said universities generally could not expect to “be at the cutting edge” of the use of technologies such as generative AI. “But we can set our students up for life by teaching them to use it responsibly,” he told the forum.

Professor Steele said UNSW was developing a generic ethical framework for AI use by students and staff. “But it means different things in different disciplines,” he said. “It’s not just a conversation in [each] school – it’s a national conversation in that discipline.”

Kelly Matthews, of the University of Queensland’s Institute for Teaching and Learning Innovation, said generative AI warranted “honest conversations” about its use. But she warned that trust in universities was already low, and that tolerance of AI could put their social licence at further risk.

“You can imagine…the view that higher education is being lazy [and] just letting students do everything,” Professor Matthews said. “Medical doctors are being taught by the chatbots, and they’re going to operate on your child.

“There’s something higher about higher education. It is not just so easy that we can be replaced by generative AI. We need our senior leaders [to show that] we’re not being irresponsible about it, either.”

john.ross@timeshighereducation.com
