Five ways AI has already changed higher education

Powerful new technologies are transforming the way universities operate, even before the impact of ChatGPT is truly felt

May 15, 2023
Image: classical statue overlaid with purple network lines, illustrating AI transforming the way universities operate (source: Getty)

Since the emergence of ChatGPT – and its successor GPT-4 – higher education has gone into overdrive in analysing the potential opportunities and risks of such powerful new technologies. 

While much of the global conversation has thus far focused on the future impact of AI, many universities are already grappling with these issues in the here and now.

Here, THE outlines five ways AI has already had a transformative effect on the sector – from chatbots providing student support to AI-generated reading lists. 

Forcing education companies to re-evaluate their business models

The chief executive of education technology company Chegg, Dan Rosensweig, has blamed ChatGPT for a decline in the number of new sign-ups for its textbook and coursework help services, saying that – as midterms and finals approached – many would-be customers had instead turned to AI for help.

This was said to be a “harbinger” for how the emergence of generative AI will disrupt education businesses, with companies rushing to future-proof their offerings.

Turnitin has expedited the launch of an AI detector, while Duolingo has utilised GPT-4 to help learners test their language skills.

At the same time, a raft of new start-ups has launched, offering everything from personal tutoring chatbots to their own AI detectors, with varying degrees of accuracy.

Mike Sharples, emeritus professor at the Open University’s Institute of Educational Technology, said the bigger companies incorporating AI into existing, well-established products were having the most success.

Others, he warned, risked ending up like the “Kodak of the late 1990s” – unable to adapt quickly or effectively enough to cope in a competitive market.

“I think there will be many more companies in the education sphere, and institutions too – particularly distance-learning ones – that are going to find it really difficult to survive,” Professor Sharples said.

“That’s because students might think that AI can do what they do better – whether it can or not remains to be seen.”

The likes of ChatGPT can easily produce a textbook or course material, said Rose Luckin, professor of learner-centred design at the UCL Knowledge Lab.

“Of course, it needs quality control because there will be errors; but quality control is a lot cheaper than producing the stuff in the first place.

“There are lots of parts of the publishing and edtech sector that are affected. Companies need to recognise that and look at how the demands of students and the sector are changing, then consider what ChatGPT can’t do as well and work to fill those gaps.”

Bots that help with teaching and student support

A key feature of ChatGPT is its ability to converse with a user, responding to questions and prompts in a way that is far more sophisticated than anything that has come before it.

Chatbots were already being used in student support, said Professor Luckin, but ChatGPT had brought the possibility of much better-quality interactions.

“They can provide personal support, well-being support, counselling; it gives students access to something 24/7 that they might not have otherwise. Obviously, it is not the full solution, but it can be a useful first port of call,” Professor Luckin said.

Professor Sharples agreed that the use of bots was likely to be extended after ChatGPT’s rise, although they had had a “mixed response” so far.

Some might baulk at the idea of turning to a computer when they have issues, but others “really like being able to chat to an AI bot in their own time, particularly if it gives them useful advice rather than just general reassurance”, he said.

On the teaching side, bots have also proven to be useful tutors. Stephen Coetzee and Astrid Schmulian, professors in the accounting department at the University of Pretoria, have just launched AE-Bot, which uses GPT-4 to help their students better understand accounting concepts.

Such “personalised and accessible” learning assistance was not possible otherwise in a class of more than 500, Professor Coetzee explained. He said he saw it as a way of “essentially giving each of our students a tutor in their pocket” in a sector where scarce resources and difficulties appointing more professors had been an issue for many years.
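
AE-Bot’s internals have not been published, but the general pattern of a GPT-4-backed subject tutor can be sketched. The snippet below is a minimal, hypothetical example built on OpenAI’s chat completions API; the system prompt, model name and sample question are illustrative assumptions, not AE-Bot’s actual configuration.

```python
# Hypothetical sketch of a GPT-4-backed subject tutor (not AE-Bot's actual code).
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a patient accounting tutor for undergraduates. "
    "Explain concepts step by step, use simple worked examples, "
    "and ask a short follow-up question to check understanding."
)

def ask_tutor(question: str, history: list[dict] | None = None) -> str:
    """Send a student question (plus any prior turns) to the model and return its reply."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages += history or []
    messages.append({"role": "user", "content": question})

    response = client.chat.completions.create(
        model="gpt-4",          # illustrative; any capable chat model would do
        messages=messages,
        temperature=0.3,        # keep explanations consistent rather than creative
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_tutor("Why does depreciation appear on both the income statement and the balance sheet?"))
```

In a class of 500, the value lies less in the model call itself than in the course-specific prompting and guardrails wrapped around it.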

Relieving pressure on admissions tutors

Just as the rise of AI has prompted a rethink of assessments because of the risks of cheating, the fact that students could, for example, use ChatGPT to write a college application essay is forcing innovation.

Ahead of the next cycle, institutions are looking at incorporating audio and video into applications, or at whether they could encourage students to submit social media posts instead of the traditional essay, according to Rick Clark, director of undergraduate admission at the Georgia Institute of Technology.

On the other side of the process, AI is helping to make life easier for staff who have to deal with upwards of 50,000 applications a year, he added. Slate – an enrolment platform used by some of the most competitive US universities – can pull together data from previous cycles to help predict the types of students most likely to be successful at the institution, Mr Clark said.

He said he was also talking to colleagues in the university’s school of computing about using AI to screen out applicants who were very unlikely to be admitted, in order to free up staff time to concentrate on the more viable candidates.

While at first humans will still need to judge all applications alongside AI to ensure it is not biased against certain groups, eventually this could help to relieve some of the burnout among admissions tutors, Mr Clark said.
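
Slate’s models are proprietary, so the sketch below only illustrates the general approach Mr Clark describes: fitting a simple classifier to outcomes from previous cycles and using its scores to triage new files. Every file, column name and feature here is hypothetical, and, as he notes, any such scores would need auditing for bias before humans step back.

```python
# Hypothetical sketch of admissions triage from past-cycle data (not Slate's actual model).
# Assumes pandas and scikit-learn; the CSV files and column names are invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Past cycles: one row per applicant, with a label for how they fared at the institution.
past = pd.read_csv("previous_cycles.csv")           # hypothetical file
features = ["gpa", "course_rigor_score", "essay_rating", "extracurricular_count"]
X, y = past[features], past["enrolled_and_good_standing"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# New cycle: score applications so staff can prioritise the most viable files first.
current = pd.read_csv("current_cycle.csv")          # hypothetical file
current["priority_score"] = model.predict_proba(current[features])[:, 1]
for_review = current.sort_values("priority_score", ascending=False)

# As the article notes, scores like these should be audited for bias across applicant
# groups before they are allowed to remove any file from human review.
```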

Georgia Tech was also looking into following others by investing in chatbots to replace humans in answering the basic questions potential applicants might have.

“Within the next year, I think, we will not have any humans doing that front-line work,” Mr Clark said. “If the bot can be accurate and helpful and accessible at all times of day, it is going to have a big impact on the work of admissions.”

Finding journal papers and analysing citations

ChatGPT’s tendency to “hallucinate” when asked to list significant scholarly papers in a discipline – inventing entirely fictitious but plausible-sounding papers, authors and institutions – has led to much mirth in academia. But AI tools have been guiding literature searches for several years, with new products looking likely to disrupt the field even further.

The days of relying simply on the university library catalogue or Google Scholar for scanning the forest of research papers are long gone. In around 2020, a number of new tools – Connected Papers, Inciteful and LitMaps – hit the market, each of which provides far more user-friendly visualisations of papers related to search terms.

The free-to-use ResearchRabbit is brilliant not just for identifying widely cited papers, but also for providing helpful graphs to show who is citing them, explained Mushtaq Bilal, a postdoctoral researcher at the University of Southern Denmark who tweets about how to use new AI tools available to academia. “Google Scholar will give a list of titles and some data. This [ResearchRabbit] will go through the bibliography and citations, then visualise who is citing who,” said Dr Bilal, who said he believed it was useful for exposing citation cliques in addition to landmark papers.
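
ResearchRabbit generates its graphs inside its own web app, but the underlying idea of visualising “who is citing who” can be sketched with a small directed graph. The example below uses networkx and matplotlib on an invented set of citation pairs.

```python
# Minimal sketch of a "who cites whom" graph, in the spirit of tools such as
# ResearchRabbit or Connected Papers. The papers and citations here are invented.
import matplotlib.pyplot as plt
import networkx as nx

# (citing paper, cited paper) pairs, e.g. pulled from reference lists.
citations = [
    ("Smith 2021", "Lee 2015"),
    ("Smith 2021", "Khan 2018"),
    ("Patel 2022", "Lee 2015"),
    ("Patel 2022", "Smith 2021"),
    ("Ortiz 2023", "Lee 2015"),
]

G = nx.DiGraph()
G.add_edges_from(citations)

# In-degree = how often a paper is cited within this set; size nodes accordingly.
sizes = [300 + 600 * G.in_degree(node) for node in G.nodes]

nx.draw_networkx(G, pos=nx.spring_layout(G, seed=42), node_size=sizes,
                 node_color="lightsteelblue", arrows=True, font_size=8)
plt.axis("off")
plt.title("Who is citing whom (hypothetical data)")
plt.show()
```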

New products on the market include scite – a Brooklyn-based start-up billed as “ChatGPT for science” – which promises to give “reliable answers straight from the full text of research articles” in addition to identifying useful papers, analysing citations and offering AI-led fact-checking of references.

Even ChatGPT’s flights of fancy regarding academic references can be fixed by limiting its searches to Google Scholar-listed items, researchers claim. “People have ended up chasing down these hallucinations, but I don’t think it’ll be a problem for long,” said Dr Bilal. “If OpenAI doesn’t fix this, somebody else is going to take care of this issue.”
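
Google Scholar offers no official public API, so one practical way to catch hallucinated citations today is to check each one against an open bibliographic index instead. The sketch below is a hypothetical example, not a workflow described by the researchers quoted here: it queries Crossref’s public REST API for a cited title and reports the closest real record.

```python
# Hypothetical sketch: checking whether a ChatGPT-supplied reference actually exists,
# using Crossref's public REST API (a stand-in for Google Scholar, which has no official API).
# Assumes the requests package.
import requests

def closest_crossref_match(title: str) -> dict | None:
    """Return Crossref's best match for a cited title, or None if nothing comes back."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 1},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return items[0] if items else None

cited_title = "A Longitudinal Study of Chatbot Tutors in Accounting Education"  # invented reference
match = closest_crossref_match(cited_title)
if match:
    print("Closest real record:", (match.get("title") or ["(no title)"])[0])
    print("DOI:", match.get("DOI"))
else:
    print("No plausible match found - the reference may be hallucinated.")
```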

Scanning papers for quality

With an estimated three million papers across 30,000 journals published each year, it’s easy for researchers to miss the diamond in the rough. But some of science’s greatest breakthroughs have come from studies overlooked in their time, with some so-called “Sleeping Beauty” papers waiting as long as 100 years for due credit. Can AI fix this problem by scanning papers quickly and reliably to judge their quality?

In AI terms, this field is fairly ancient. Back in 2015, Semantic Scholar was launched by the Allen Institute for AI, with the natural language-processing tool providing one-line summaries of scientific literature. As of 2022, its corpus covers more than 200 million publications across all fields of science.
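
That corpus is exposed through Semantic Scholar’s public Graph API, including its machine-generated one-line summaries (the tldr field). The sketch below searches for a topic and prints each hit’s title and summary; the query string is illustrative and the summary field is not populated for every record.

```python
# Sketch: pulling one-line AI summaries from the Semantic Scholar Graph API.
# Assumes the requests package; the query is illustrative and the tldr field
# is not available for every record.
import requests

resp = requests.get(
    "https://api.semanticscholar.org/graph/v1/paper/search",
    params={"query": "chatbot tutoring higher education",
            "fields": "title,year,tldr", "limit": 5},
    timeout=30,
)
resp.raise_for_status()

for paper in resp.json().get("data", []):
    tldr = (paper.get("tldr") or {}).get("text", "no summary available")
    print(f"{paper.get('year')}: {paper['title']}\n  {tldr}\n")
```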

Recent advances are arguably more exciting. Asad Naveed, a postdoctoral fellow in trauma care at the University of Toronto, uses the AI-updated version of Bing to scan and grade published scientific studies. On Twitter, he explains how he instructed the Microsoft-owned search engine to assign scores to different papers based on criteria such as study design, helping to flag robust studies for further reading.

Bing also captured useful descriptive information and made inferences from information within tables, Dr Naveed said. “Bing AI reduced my screening time on this paper by 50 per cent – this technology is only going to grow,” he added of systematic review tools.
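
Dr Naveed’s workflow runs through Bing’s chat interface, which is not easily scripted, but the same rubric-scoring idea can be sketched against a generic chat-model API: ask the model to score an abstract on explicit criteria and return structured JSON. The rubric, model name and abstract below are illustrative assumptions, not his actual prompts.

```python
# Hypothetical sketch of rubric-based paper screening with a chat model
# (a scriptable stand-in for the Bing-chat workflow described above).
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY in the environment.
import json
from openai import OpenAI

client = OpenAI()

RUBRIC = """Score the study described in the abstract from 1-5 on each criterion and
return JSON only, e.g. {"study_design": 4, "sample_size": 3, "outcome_measures": 5}.
Criteria: study_design, sample_size, outcome_measures."""

abstract = "We randomised 420 trauma patients across three centres to ..."  # illustrative stub

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "system", "content": RUBRIC},
              {"role": "user", "content": abstract}],
    temperature=0,
)
# Assumes the model follows the instruction to return JSON only.
scores = json.loads(response.choices[0].message.content)
print(scores)   # e.g. flag papers scoring 4+ on study_design for close reading
```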

While close reading of studies will remain important, such technologies were useful, said the University of Southern Denmark’s Dr Bilal. They helped to ease a problem that was particularly acute for those, like him, from a humanities background, where “you can sometimes spend weeks or months reading a single book or paper”.

The technology was “very useful for reading lots of papers to help you get the lie of the land”, he said. In many cases, such AI-generated summaries were more helpful to non-experts than abstracts, Dr Bilal added. “Abstracts make sense for those in the field, but they can use jargon that non-specialists don’t understand. With AI summaries, that’s often rewritten or links to jargon are provided.”

tom.williams@timeshighereducation.com

jack.grove@timeshighereducation.com

