Artificial intelligence will force academics to think harder about the skills they want to reward, with ethics necessarily a bigger part of the mix, according to a Massachusetts Institute of Technology expert.
Shigeru Miyagawa, an MIT professor of linguistics, told Times Higher Education’s Teaching Excellence Summit that grading and assessment were among the elements of academia most susceptible to AI-driven overhaul.
Professor Miyagawa was part of the team that created OpenCourseWare, an online repository of virtually all MIT’s course content. Now, as MIT’s senior associate dean for open learning, he is working to integrate AI into learning systems.
In a keynote address to the conference at Western University, and in a separate interview, Professor Miyagawa repeatedly emphasised the ethical implications of his work with AI.
Professor Miyagawa joined others at the conference in predicting that traditional liberal arts skills, such as generating ideas and communicating them, will only grow more valuable in a heavily computer-aided world.
He also reported encouraging progress in developing automated systems that could support such training. He described AI systems that make online courses more efficient by helping students identify areas where they struggle and by repeating and reinforcing sections as needed.
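The mechanics he described amount to a mastery loop: test, measure, and re-queue the topics a student struggles with. The Python sketch below is a minimal illustration only; the question interface, mastery threshold, and repeat cap are assumptions made for the sketch, not details of any MIT system.

```python
from collections import deque

MASTERY_THRESHOLD = 0.8  # assumed share of correct answers needed to retire a topic
MAX_REPEATS = 3          # assumed cap so a persistently weak topic is eventually escalated

def run_review_session(topics, ask_question):
    """Cycle through topics, repeating any the student has not yet mastered.

    `topics` maps a topic name to a non-empty list of questions;
    `ask_question(topic, question)` returns True on a correct answer
    (a hypothetical interface to the course platform).
    """
    queue = deque((name, 0) for name in topics)
    history = {name: [] for name in topics}
    while queue:
        topic, attempts = queue.popleft()
        results = [ask_question(topic, q) for q in topics[topic]]
        history[topic].append(results)
        # Reinforce: put a below-threshold topic back into the rotation.
        if sum(results) / len(results) < MASTERY_THRESHOLD and attempts + 1 < MAX_REPEATS:
            queue.append((topic, attempts + 1))
    return history

if __name__ == "__main__":
    import random
    demo_topics = {"syntax": ["q1", "q2"], "semantics": ["q1", "q2", "q3"]}
    # Simulate a student who answers correctly about 70% of the time.
    print(run_review_session(demo_topics, lambda topic, q: random.random() < 0.7))
```

The design point is simply that repetition is driven by measured performance rather than a fixed syllabus order, which is what lets such a system scale to many students at once.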
Professor Miyagawa said AI-assisted systems for assessing the quality of written essays – a critical component of teaching communication – were already proving so capable that they were challenging human teachers to recognise their own grading biases and to define more precisely the skills they most want to reward.
“It’s not too far-fetched to think that, down the road, through this type of application, you could have students learn some communications skills on a massive scale,” he said.
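One way an automated scorer can challenge a grader's biases is by rating essays against explicit, measurable criteria and flagging papers where the human mark diverges sharply from the machine's. The sketch below is a deliberately crude illustration: the two surface features, their weights, and the 0–1 grade scale are assumptions, and real essay-scoring systems use far richer models.

```python
def feature_score(essay: str) -> float:
    """Score an essay 0-1 from crude surface features (illustrative only)."""
    words = essay.split()
    if not words:
        return 0.0
    vocab_diversity = len({w.lower() for w in words}) / len(words)
    length_credit = min(len(words) / 500, 1.0)  # assume ~500 words earns full length credit
    return 0.5 * vocab_diversity + 0.5 * length_credit

def flag_disagreements(essays, human_grades, tolerance=0.25):
    """Return essays where the human grade (assumed 0-1) and the
    feature-based score diverge by more than `tolerance`."""
    flagged = []
    for essay, grade in zip(essays, human_grades):
        auto = feature_score(essay)
        if abs(auto - grade) > tolerance:
            flagged.append((essay[:40], grade, round(auto, 2)))
    return flagged
```

A large gap does not prove bias, but it tells a teacher exactly which papers, and which measurable qualities, to re-examine against the criteria they actually meant to reward.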
In particular, Professor Miyagawa said ethics would need to be taught alongside technology. In defining those ethical principles, a key question would be “can we teach enough of [ethics] so that we don’t have a doomsday with AI and AI will always be used to serve us instead of it?” he said.
Professor Miyagawa also acknowledged the broader public fear that computers have the potential to make things worse, for education and beyond. Humans have largely managed, over time, to identify and correct their own mistakes, he said. But a future in which computers vastly multiply the speed and effect of those mistakes could prove overwhelming, he admitted.
“That’s the scary part,” Professor Miyagawa said, because most scientists, even at MIT, do not yet fully understand how advanced computers learn from their environments.
But if scientists could fully guide AI mechanisms, Professor Miyagawa said, that might in essence put programmers in the position of trying to orchestrate society. “I don’t think we can do it, and yet this is what we’re faced with,” he said.
Even areas where AI’s benefits seem clearest, such as medical applications that predict diseases in individuals, raise the risk of asking researchers to decide which human conditions should be fixed, he said.
“My plan is to educate a new generation of young people who will have intuition behind computational thinking, so they’ll have some notion of what’s going on when something happens,” Professor Miyagawa said. “But at the same time, these young people will be taught to understand that nothing is going to be perfect, because human nature is not perfect.”