‘Academic solipsism’ risk from chatbot use, warns university head

Growing use of chatbots within higher education is likely to foster unoriginal and ‘solipsistic’ thought, says IE president

March 15, 2024

Students who turn to chatbots for advice and inspiration while learning are at greater risk of producing bland and unimaginative arguments in their work, a university leader has warned.

Warning about the rise of “academic solipsism” from the use of generative artificial intelligence in higher education, Santiago Iñiguez, president of Spain’s IE University, said answers generated by large language models often seemed to parrot consensus views or even reinforce the user’s own belief system, rather than challenge conventional wisdom.

“They usually try to please you with their answers,” explained Professor Iñiguez, who was speaking to Times Higher Education at IE’s Reinventing Higher Education conference, held at the University of Miami. “And if the user doesn’t get the answer they want, the AI will adapt to the user.

“I don’t deny that AI, with its access to infinite sources of information, will bring advantages to the learning process – to do so would be stupid. However, there is something still missing from generative AI, namely an ability to think outside of the box or provide that unexpected insight,” said Professor Iñiguez de Onzoño, a University of Oxford-educated philosopher and strategic management expert.

His comments come as universities across the world adapt their teaching, learning and assessment guidelines in response to the emergence of ChatGPT and other generative AI tools. Guidelines issued by the UK’s Russell Group of research-intensive universities last year state that “staff should be equipped to support students to use AI tools effectively and appropriately in their learning experience”, recommending their “ethical use”.

Many universities, including the University of Cambridge, have said students may use AI in their learning as long as it does not feature in coursework or exams, while others go further, permitting its use in assessed work provided it is properly credited.

Anecdotally, many students have told lecturers that they use ChatGPT for first drafts or when they “get stuck” in an essay, Professor Iñiguez explained, but this reliance on chatbots means undergraduates often lapse into hackneyed, well-rehearsed lines of thought rather than attempting more original arguments.

“This is why the classroom remains so important – AI can generate ideas, but you’ll never truly know the impact of these ideas until you encounter a human being. Did they change your mind? Did they influence the course of a conversation? Did they make you laugh? That is something that only the social element of in-person teaching can provide,” said Professor Iñiguez.

While he said he believed AI could “complement” teaching in higher education, Professor Iñiguez doubted whether it could satisfactorily replace teachers, despite claims that AI-guided peer-to-peer learning could offer an affordable way to educate millions of students.

“Only the university teacher can see whether a class has moved the students – that is perhaps the most amazing thing that we do as educators,” he said.

jack.grove@timeshighereducation.com

Reader's comments (1)

It would be impossible to locate and demarcate any more "academic solipsism"--whatever that might be--than has existed for decades. How could we tell?
