Intel v-p: ‘research needed to open machine learning black box’

Universities must address ethical questions and bias in AI and ensure students are ‘better than ChatGPT’, says Wei Li

April 20, 2023

Academic research is needed to give an “ethical foundation” to machine learning, according to the head of the AI engineering team at Intel, one of the world’s largest semiconductor chip manufacturers.

Speaking at the Digital Universities UK event, held by Times Higher Education in partnership with the University of Leeds, Wei Li, vice-president and general manager of artificial intelligence and analytics at Intel, said: “Machine learning today is a black box. You get what you get, and you don’t really know why.”

He added: “In some applications [such as healthcare], you will want to know why that system gave you that answer.”

Academic research can help to expose fundamental ethical issues in AI, such as in-built bias around gender, Dr Li told the event.
“There are a lot of unknowns and open questions in machine learning today, and it really demands fundamental research that universities can do and industry can’t,” he said.

Dr Li said his team at Intel was working to develop technology that would help to make fair and inclusive AI systems.
But speaking with THE, he warned that industry was more focused on building AI systems than on addressing the ethical questions they provoke. “These problems have roots in how the overall machine learning works,” he said. “These things – I hope people in academia can do something deeper than what we’re doing.”

Despite a recent open letter signed by AI experts and industry executives, including Elon Musk, calling for a pause in the development of AI systems until “we are confident that their effects will be positive and their risks will be manageable”, Dr Li does not expect AI advancements to slow.

“It’s not realistic,” he said. “It’s a risk in terms of commercialisation, and it’s a race to be the fastest and the first in the industry. That’s enough motivation for people to go for these things.”

So with the rapid advancement in AI systems, will academic research into machine learning models quickly become obsolete? No, said Dr Li, who argued that universities could influence the way that future systems are built. “I don’t expect them [researchers] to dig into ChatGPT and explain ChatGPT – that’s impossible to do given the state of the art we have today. But if people have a better foundation for machine learning, then maybe the next generation can be a safer and less biased model.”
When it comes to university teaching, Dr Li said, “higher education has a challenge and an opportunity to better train the next generation of students in the new AI environment”. Institutions should teach students to be “more than just simply a messenger for ChatGPT”, he added.

“The products of a university are the students you’re producing. If they’re not better than ChatGPT, then why do we bother to send them to university?” Dr Li asked.

sara.custer@timeshighereducation.com
