Internet pioneer Vinton Cerf has urged higher education leaders to modify not just their assessment methods but their overall teaching and research approaches as artificial intelligence advances.
Addressing the Times Higher Education Digital Universities Week US conference, Dr Cerf suggested that academia had a particular duty to respond as both leading experts and the general public express rising levels of anxiety about the future direction of AI.
“Higher education has an obligation to explain what those problems are, how they arise, and what we can do to ameliorate the potential hazardous effects,” said Dr Cerf, now the chief internet evangelist at Google.
Dr Cerf spoke to the THE conference just days after another prominent Google scientist, AI pioneer Geoffrey Hinton, left his job overseeing the Google Research team in Toronto, warning that AI-based machines will become more intelligent than humans and expressing alarm at the rate at which that is happening. Dr Hinton, an emeritus professor of computer science at the University of Toronto, has said that he left Google out of a desire to speak freely about the danger he sees ahead.
Having discussed the matter with Dr Hinton and understood his concerns, Dr Cerf told the conference that he did not feel that Dr Hinton had any assessment to make that he could not have voiced publicly as a Google executive. The dangers of intentional or unintentional abuses involving AI are so great, however, that universities must do more to teach about them, Dr Cerf said.
“Higher ed has a responsibility, in my view, to articulate exactly this, and to argue that we need to teach people how to use these tools in a way that’s safe, for themselves and others,” he told the event, hosted by the Illinois Institute of Technology.
Dr Cerf, as a doctoral student at the University of California, Los Angeles and as an assistant professor at Stanford University in the 1970s, was part of the team that connected the initial nodes of what became the internet.
Along with the humanity-scale dangers that Dr Hinton has been highlighting, Dr Cerf and other experts at the Digital Universities conference outlined a number of major academic-specific concerns stemming from AI, including students overtly cheating or otherwise failing to learn as well as they have in the past.
One speaker was Rohit Prasad, a senior vice-president and head scientist for Amazon Alexa. He had earned a master’s degree in electrical engineering at Illinois Tech, and he recalled the negative effects on learning that he witnessed there after the introduction of handheld calculators.
“It was quite appalling,” Mr Prasad said. Universities need to remember the importance of teaching students – when they pursue computer-aided solutions – to also understand which outcomes are plausible, he said, so that they do not “lose that muscle of estimation”.
The threat of actual cheating is also demanding a great deal of attention across higher education. In one sign of its magnitude, the US company Chegg, which offers students online academic assistance – including with homework – lost nearly half its market value in one day this past week after admitting that the AI tool ChatGPT was hurting its business.
The dean of law at Illinois Tech, Anita Krug, told the THE conference that her school had just revised its student code of conduct to make clear that AI-type tools should not be used without the express permission of the instructor. But Laura Pedrick, the executive director of UWM Online, at the University of Wisconsin-Milwaukee, said her team had found no reason to make changes relating to specific technologies. “Plagiarism is plagiarism,” she said.