
Universities, AI and the common good

Higher education must find paths for meaningful engagement with artificial intelligence, to leverage its potential, explain the problems and mitigate the hazards, writes Rajani Naidoo

6 Sep 2023

Created in partnership with the University of Bath

How should higher education respond to the rise of artificial intelligence (AI)? Rather than focusing on the impact of AI on university education and research, my concern is broader. What are the wider responsibilities that emerge for universities in relation to the spread of AI into the fabric of society?

I believe that universities must embrace and leverage AI for the common good. This means tapping into its potential, educating on its dangers and co-constructing solutions to accelerate the benefits for society while ameliorating potential harms. 

How do we, as universities, do this?

First, we need to embrace the potential of AI in all areas of our activity. For example, the ability to personalise learning while maintaining community has huge potential for equity. The use of AI in research for the common good can also be profound. 

One example: scientists at McMaster University and MIT have used AI to discover, in a remarkably short time, a new antibiotic to treat a deadly superbug. Universities can also draw on AI to develop innovations for people with disabilities. The Swiss start-up Biped, for example, has developed an AI co-pilot for people who are visually impaired; it films the surroundings with 3D cameras and warns the wearer with immersive sounds transmitted through bone-conduction earphones.

Second, we need to dispel narrow claims of inevitability – which, at their extreme, hold that AI will automatically benefit (or destroy) humanity. Universities can work together to dispel the myth of AI as a disembodied entity, floating above us, engaging in techno-computations and orchestrating grand designs across our planet. Instead, we need to situate AI in its social, political and economic context and foster the understanding that AI is shaped by relations of power. We need to disseminate research on who owns, controls and gains from AI – from those engaged in platform cooperativism for shared learning and democratic enterprise to powerful global corporations where the profit motive reigns.

At the same time, we need to disseminate an understanding of how AI can itself reshape the social, political and economic contexts from which it originates. Part of the issue here is that some systems have become so complex, multilayered and non-linear that even their designers do not understand how a given output was arrived at. This lack of transparency makes it even harder to evaluate potential harms and, indeed, to exert human control.

We can call on corporations and governments to be more transparent about their algorithms rather than hiding behind claims of competitive advantage.

AI and algorithmic literacy 

While only a small proportion of students (and citizens) might wish to become AI developers, universities can support all people in achieving AI and algorithmic literacy: an understanding of the technological principles on which AI works and of its impact on human beings, particularly how it might be steered towards the common good. This will help challenge the perception of AI as an all-powerful oracle whose decisions we accept with blind faith in technology and big data. Instead, universities need to strengthen research and influence policymakers on the dangers of decisions based on algorithms that reproduce existing discrimination in society, reinforcing the exclusion of people who are already disadvantaged.

The ethics of AI need to be embedded in the education of all students – not merely those taking courses on AI. Such courses need to be taught in ways that lessen the distance between coders and the impact of their work on real people, both near and far. Universities also need to address algorithmic discrimination by investing in, and ensuring the success of, a diverse student body in AI fields of study. This would bring together a variety of life experiences to recognise and correct for discrimination.

Universities’ role in safeguarding democracy

There is a concern that AI can negatively affect democracy and reinforce political divisions because algorithms steer people into echo chambers of repeated political content. There is also the risk of fabricated information and the erosion of democratic rights by certain types of surveillance systems. This suggests an urgent need to coordinate action to safeguard democracy and democratic values, and to prepare citizens to live productively with AI.

To do this, universities need to resist contemporary pressures to function purely as employability machines or as levers for the economy. These are important functions of the university, but they must sit side by side with its role as critic and conscience of society, with responsibility for speaking truth to power.

What does it mean to be human? 

Finally, AI raises important questions about what it is to be human. The dominance of AI reduces the totality of human beings – living, thinking, feeling, complicated individuals – to data points that become inputs for the machines.

At the same time, AI is being humanised. When AI systems get things wrong, we are not told that there is a computational problem or a glitch in the system, but that the machine is “hallucinating”. This gives us the sense of a quirky, edgy, trippy being.

More worrying for me is backpropagation – the process by which a deep neural network propagates the error in its output backwards through its layers, adjusting its parameters so that its predictions become more accurate. There are emerging claims that such a system has “learned” or even that it has consciousness. We are thus left with big questions for universities to grapple with: what is thinking? What is learning? What is consciousness? And what does it mean to be human?
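To see how modest this “learning” is in practice, consider a minimal sketch in Python – a hypothetical toy example, not drawn from the article or any particular system – in which a one-neuron model fits a straight line by repeatedly nudging two parameters to reduce its prediction error.

```python
# Minimal illustrative sketch (hypothetical example): gradient-based
# "learning" is iterative parameter adjustment, nothing more mysterious.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy samples of the line y = 2x + 1.
x = rng.uniform(-1, 1, size=(100, 1))
y = 2 * x + 1 + 0.05 * rng.standard_normal((100, 1))

w, b = 0.0, 0.0   # the model's parameters: weight and bias
lr = 0.1          # learning rate: the size of each corrective step

for step in range(500):
    y_hat = w * x + b                      # forward pass: predict
    error = y_hat - y                      # how wrong each prediction is
    # Backward pass: gradients of the mean squared error w.r.t. w and b.
    grad_w = float(np.mean(2 * error * x))
    grad_b = float(np.mean(2 * error))
    # Adjust the parameters (not the inputs) to reduce the error.
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned parameters: w = {w:.2f}, b = {b:.2f}")  # approach 2 and 1
```

Scaled up to billions of parameters and many layers, the arithmetic is the same in kind; whether that deserves the words “learned” or “conscious” is exactly the question posed above.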

In conclusion, universities, as sites for interdisciplinary collaboration with relative freedom from direct political influence, have the potential to lead on such questions and to contribute to driving AI for the common good. This means bringing together multidisciplinary teams of machine-learning scientists, social scientists, philosophers and legal scholars. It also means resisting efforts to downgrade research and teaching in the humanities and social sciences – we cannot undertake this work without these disciplines.

Most importantly, we need to find ways for more meaningful engagement with the public, working with journalists, artists and actors to explain what the problems are, how they arise and what we can do to mitigate potentially hazardous effects. In these ways, universities can intervene and help to shape the future of AI.

Rajani Naidoo is vice-president (community and inclusion), Unesco chair in higher education management and co-director of the International Centre for Higher Education Management (ICHEM) at the University of Bath.

This article is an edited version of a keynote address given at the 2023 Biannual Conference at the Center for International Higher Education at Boston College and the 2023 ICHEM conference in June.
