The robot vice-chancellor: an argument for wisdom and compassion

While advancements in artificial intelligence could streamline the daily responsibilities of a university leader, we should leave the big decisions to a human being, says Vijaya Nath 

June 27, 2018

The recent debut of IBM’s Project Debater demonstrated the ability of a robot to successfully debate with a human for the first time. The pioneering robot draws on knowledge gained from hundreds of millions of journal articles to make logical arguments as well as analyse and rebut the responses of its rivals.

“Project Debater could be the ultimate fact-based sounding board without the bias that often comes from humans,” said Arvind Krishna, director of IBM Research.

There is no doubting the advantage of being able to process and analyse vast quantities of data and information at speed when quick decisions are needed. The logical conclusion is that this technology could be applied to the workplace, particularly in large organisations such as universities, where senior leaders face increasing demands on their time. A good example of this in practice is Deakin University in Australia, which has partnered with IBM to use AI to answer students’ questions about life on campus. How this technology could be applied to senior leadership is another question.
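To give a flavour of the kind of system involved, the sketch below matches an incoming student question against a small FAQ list by lexical similarity. It is a minimal toy in Python, not the IBM Watson-based assistant that Deakin actually uses; the questions, answers and matching threshold are invented for the example.

    # Illustrative toy only: not the IBM Watson-based assistant that Deakin uses.
    from difflib import SequenceMatcher

    # A hypothetical, tiny FAQ; a real campus assistant would draw on a far larger knowledge base.
    FAQ = {
        "where is the library?": "The library is in Building B and is open 8am-10pm.",
        "how do i enrol in a unit?": "Enrolment is handled through the student portal under 'My Units'.",
        "when do exams start?": "The examination period begins in week 13; check your personal timetable.",
    }

    def answer(question: str, threshold: float = 0.6) -> str:
        """Return the answer whose stored question best matches the input, or a fallback."""
        best_q, best_score = None, 0.0
        for known_q in FAQ:
            score = SequenceMatcher(None, question.lower(), known_q).ratio()
            if score > best_score:
                best_q, best_score = known_q, score
        if best_q is not None and best_score >= threshold:
            return FAQ[best_q]
        return "Sorry, I don't know that one yet -- a human adviser can help."

    print(answer("Where is the library"))

The fallback line matters: when the match is weak, the system hands the query back to a person rather than guessing.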

As highlighted by Krishna, a robot also has the advantage of making informed judgements without the distraction of human emotion or experience. But does a decision made without the influence of emotion or compassion lead to the best outcomes for an organisation, its staff and, in the case of universities, its students?

Emotional intelligence is a crucial component of successful leadership. Compassionate leaders have been shown to increase the motivation and productivity of a workforce by creating a positive environment. Furthermore, showing genuine concern as a leader can increase trust and contentment in the workplace.

One aspect of emotional intelligence is empathy: the ability to relate to the thoughts, emotions and experiences of others by putting yourself in their shoes. This capacity to understand and support others has thus far eluded even the most sophisticated humanoid robots, and it is a quality that separates great leaders from good ones. In the 1990s, Daniel Goleman popularised the idea that emotional intelligence is a better determinant of success in life (and in leadership) than IQ, arguing that “without emotional intelligence a person can have first-class training, an incisive mind, and an endless supply of good ideas, but they still won’t be a great leader”.

As AI is used more in the workplace, greater attention needs to be paid to emotional intelligence and self-awareness in order to counterbalance the objectivity of robotic intervention. In practical terms, this means recognising that certain tasks and decisions cannot be carried out without drawing on past human experience and interaction, and a genuine empathy for others. Ultimately, you can’t have wisdom in leadership without compassion.

AI is also limited when it comes to creativity and imagination. While a robot can look objectively at facts, it does not have the ability to “think outside the box” unassisted, a vital skill that enables the creativity from which innovation flows.

Despite these limitations, there are many tasks robots could efficiently take off the hands of vice-chancellors and other senior leaders, including managing email and diaries, curating social media streams and designing more effective systems and processes using decision-tree AI. Using technology in this way frees humans to use their time more productively. AI also has the potential to increase team diversity by overcoming unconscious bias in recruitment and promotion processes (as long as the programme used to create the decision tree is itself set up to catch bias).
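To make that last point concrete, here is a minimal sketch, in Python with scikit-learn, of a decision-tree screening tool with a crude check on its own recommendations. The synthetic applicant data, the feature names and the four-fifths-rule threshold are illustrative assumptions, not a description of any real recruitment system.

    # Illustrative only: a toy decision-tree screen with a simple adverse-impact check.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    n = 1000

    # Synthetic applicants: years of experience, an interview score and a protected
    # attribute (0/1) that is deliberately excluded from the model's features.
    experience = rng.integers(0, 15, n)
    interview = rng.normal(60, 10, n)
    group = rng.integers(0, 2, n)
    hired = (experience + interview / 10 > 12).astype(int)  # past hiring decisions

    X = np.column_stack([experience, interview])  # protected attribute left out
    model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, hired)
    recommended = model.predict(X)

    # "Four-fifths" rule of thumb: each group's selection rate should be at least
    # 80% of the highest group's rate; otherwise flag the tree for human review.
    rates = [recommended[group == g].mean() for g in (0, 1)]
    ratio = min(rates) / max(rates) if max(rates) else 1.0
    print(f"selection rates: {rates}, ratio: {ratio:.2f}")
    if ratio < 0.8:
        print("Warning: possible adverse impact -- review the tree before use.")

The point of running the check on the model’s outputs, rather than simply dropping the protected attribute from the inputs, is that exclusion alone does not guarantee unbiased recommendations; correlated features can still smuggle bias back in.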

Interventions of this kind have proved beneficial in industry and, while many sectors have embraced recent advances in AI, the pace of change in higher education has been slow. This is, of course, ironic given that much of the technology is being developed in universities.

While this technology has been shown to increase the efficiency and success of an organisation, the limits of AI’s capabilities should not be forgotten: decisions should be made with the wealth of information accessible to robots alongside the experience, intuition and compassion of a human.

We should certainly embrace the exciting growth of AI and the opportunities it brings to higher education. While robots cannot yet replicate empathy, they can support and enhance human decision-making. We need both AI and emotional intelligence to flourish in the future but, for now, the robot vice-chancellor should not make the shortlist!

Vijaya Nath, programme director and associate at Advance HE, is presenting “AI-The Robot Vice-Chancellor” at Advance HE’s Leadership Summit: Wisdom, Grit and Compassion in London on 29 June.

Reader's comments (2)

Yes, the IBM Research team behind Project Debater agrees with you completely. AI will not make the decision; as with Project Debater, it will provide humans with evidence in a pro/con format so that they can make fact-based, more informed decisions. You are still free to go with your "gut instinct", but at least you will have all the facts at your fingertips.
AI management in universities: "Done with the best of intentions, for all the right reasons, what could possibly go wrong?" I don't know, but I do wonder whether we would even notice such a change in the new administrative university; I certainly know of some middle managers who appear indistinguishable from unempathetic robots. More seriously, one way of thinking about it would be to gather up the evidence of where the interference of AI in things previously done by humans has led to a degradation of service. Professor Mark Griffiths and I have one example (and I know of other, very different examples). Ours concerns how Google's RankBrain deep-learning AI has greatly diminished the usefulness of Google Books as an academic research resource. Reference: Sutton, M. and Griffiths, M. D. (2018) "Using Date Specific Searches on Google Books to Disconfirm Prior Origination Knowledge Claims for Particular Terms, Words, and Names", Soc. Sci., 7(4), 66.
