Making machines smarter

20 Sep 2024
NTU Research

From voice-activated home assistants to personalised recommendations on streaming services, artificial intelligence (AI) has become a real, ever-present force powering our everyday experiences.

By analysing colossal datasets to identify patterns and predict outcomes with lightning speed, AI platforms can help us save money, work smarter, make better decisions, reduce the toll of human error and more. 

Researchers at NTU are exploring and expanding the bleeding edge of these transformative technologies and finding ways that AI can serve us better.

DEMYSTIFYING DIGITAL DIALOGUES

Given the sheer volume of information available on the Internet, it is impossible for human minds to distil this deluge of data into meaningful, actionable insights. This creates a challenge for sentiment analysis, or the art of decoding the collective human mood from the data that people post online. 

AI’s ability to “read” data and gauge sentiments on a topic, whether positive, negative or neutral, is highly valuable. For instance, automated sentiment analysis platforms continuously assess and categorise content from social media and news outlets to weigh up public opinions on certain issues so that political candidates can tailor their campaigns to potential voters. Similarly, AI can help companies gain insights on customer preferences to improve their products and services.
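
To see what a sentiment classifier does in the simplest possible terms, consider a toy lexicon-based polarity scorer like the sketch below. The word lists and the counting rule are illustrative assumptions, not how any NTU platform works; real systems, as described next, rely on machine learning.

```python
# Toy lexicon-based sentiment scorer (illustrative only; the word lists and
# the simple counting rule are assumptions for this sketch).
POSITIVE = {"great", "love", "excellent", "happy", "reliable"}
NEGATIVE = {"bad", "hate", "terrible", "slow", "broken"}

def polarity(text: str) -> str:
    """Label text positive, negative or neutral by counting cue words."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(polarity("I love this phone and the camera is excellent"))     # positive
print(polarity("the delivery was slow and the box arrived broken"))  # negative
```

Counting cue words like this collapses on negation, irony and slang, which is precisely the gap that research systems aim to close.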

Still, AI continues to struggle with natural language processing, as it involves an understanding of the world and social norms, which humans learn through experience.

SenticNet is a platform that uses a type of AI called machine learning to recognise emotional undertones from text. Developed by Assoc Prof Erik Cambria from NTU’s School of Computer Science and Engineering (SCSE), SenticNet addresses some of the challenges faced by AI when making sense of human languages by integrating human modes of learning with traditional learning approaches used by machines.

When tested, SenticNet outperformed other models. And unlike conventional sentiment analysis models, which are often black boxes, the processes by which the platform derives its results are transparent, making those results reproducible and reliable.

Related research by Assoc Prof Lee Chei Sian from NTU’s Wee Kim Wee School of Communication and Information (WKWSCI) is user-focused, aiming to understand how human experts can improve the accuracy of AI-driven sentiment analysis, and to create frameworks that are more precise and take context into account.

Assoc Prof Lee says that AI can help to bridge the knowledge divide and enable more inclusive knowledge sharing within online communities. Using the example of AI-powered translation, Assoc Prof Lee says, “AI tools can break down language barriers, bring diverse communities closer together and foster greater understanding and connection.”

Meanwhile, a method developed by Assoc Prof Sun Aixin at SCSE makes video content searchable by matching keywords with on-screen images, allowing us to better engage with video content for surveillance, education, entertainment and more. Traditionally, such searching relies on computer vision techniques, which become markedly less effective on long videos.

To overcome this, Assoc Prof Sun and his colleagues developed a method that treats a video as a text passage so that specific moments in the clip can be searched, and a long video can be split into shorter clips for searching.
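
A back-of-the-envelope sketch of the video-as-text idea: if each second of footage carries a short caption (hand-written below; in a real pipeline a captioning or embedding model would supply them), then finding a moment reduces to matching words against a passage. Everything in this sketch is an illustrative assumption, not Assoc Prof Sun’s published method.

```python
# Sketch: treat a video as a "passage" of per-second captions and find the
# moment that best matches a text query. The captions are hard-coded
# stand-ins for what a vision model would generate (an assumption).
captions = [
    (0, "a car pulls into the driveway"),
    (1, "a person steps out of the car"),
    (2, "the person rings the doorbell"),
    (3, "a dog runs across the lawn"),
]

def find_moment(query: str, captions: list) -> tuple:
    """Return the (timestamp, caption) sharing the most words with the query."""
    q = set(query.lower().split())
    return max(captions, key=lambda tc: len(q & set(tc[1].split())))

print(find_moment("person rings the doorbell", captions))
# -> (2, 'the person rings the doorbell')
```

Because the “passage” can be cut wherever convenient, the same matching step works on a ten-second clip or on shorter segments carved out of an hours-long recording.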

DISCERNING THE TRUTH

Online content can be manipulated to fool or scam even the most astute audiences. For instance, facial manipulation technologies can create hyper-realistic faces that may be used nefariously to mislead people.

Work fronted by Asst Prof Liu Ziwei at SCSE has resulted in an algorithm called Detecting Sequential DeepFake Manipulation, or Seq-DeepFake, which flags doctored images by recognising digital fingerprints left by facial manipulation. Seq-DeepFake is a powerful tool that can help everyone from government organisations to individual users verify the authenticity of visual information in the digital age.
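
Conceptually, sequential deepfake detection asks not just ‘was this image edited?’ but ‘which edits were applied, and in what order?’. The sketch below mocks that question-and-answer interface with a stand-in function; the operation names and the lookup are assumptions for illustration, not the published Seq-DeepFake code, which uses a trained neural network.

```python
# Mock interface for a sequential-manipulation detector. A real detector is
# a trained neural network mapping pixels to an ordered list of edits; this
# hard-coded lookup is a stand-in assumption for illustration.
def predict_manipulation_sequence(image_name: str) -> list:
    """Return the ordered list of detected edits ([] means authentic)."""
    mock_results = {
        "portrait_original.jpg": [],
        "portrait_edited.jpg": ["swap_eyes", "smooth_skin", "reshape_nose"],
    }
    return mock_results.get(image_name, [])

for name in ("portrait_original.jpg", "portrait_edited.jpg"):
    steps = predict_manipulation_sequence(name)
    verdict = "authentic" if not steps else "manipulated: " + " -> ".join(steps)
    print(f"{name}: {verdict}")
```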

Similarly, Assoc Prof Tan Chee Wei at SCSE, in collaboration with Prof Josip Car from NTU’s Lee Kong Chian School of Medicine, is studying how AI can be used to discern truths in rapidly growing and changing information landscapes, such as those that emerge during pandemics.

According to Assoc Prof Tan, natural language processing algorithms can independently analyse large volumes of text and identify biases, inaccuracies and unsupported health information circulating online, helping to ensure a safe and focused online environment in which community members can interact.
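
One crude way to make that idea concrete: compare each incoming post against a small set of vetted statements and flag anything with too little overlap. The vetted statements, the Jaccard word-overlap measure and the threshold are all toy assumptions; production misinformation detectors use far richer language models.

```python
# Sketch: flag posts that find no support in a vetted knowledge base, using
# simple word overlap (Jaccard similarity). Statements and the 0.2 threshold
# are toy assumptions for illustration.
VETTED = [
    "vaccines are tested for safety in clinical trials",
    "washing hands reduces the spread of infection",
]

def jaccard(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def flag_if_unsupported(post: str, threshold: float = 0.2) -> bool:
    """Flag a post if no vetted statement resembles it closely enough."""
    return max(jaccard(post, v) for v in VETTED) < threshold

print(flag_if_unsupported("washing hands reduces the spread of infection"))  # False
print(flag_if_unsupported("drinking bleach cures all known viruses"))        # True
```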

MAKING CENTS OF AI IN BUSINESS

Another application of AI is to facilitate effective trading in the finance ecosystem. In recent years, the industry has seen a shift, with traders increasingly leaning on machine learning models that can assist with intraday trading by churning through large, multi-dimensional datasets – including those from current affairs, social media and historical investment behaviours – to predict future pricing dynamics.

New platforms such as those developed by SCSE’s Prof An Bo and other NTU researchers give investors an edge by expediting decision-making and minimising risk, which can translate to higher profits.

The platform by Prof An and his team uses reinforcement learning, a type of machine learning that makes decisions by dynamically interacting with a data environment and receiving rewards or penalties based on decision outcomes. In experiments using real-world market data, it outperformed current state-of-the-art machine learning platforms at making sound investment decisions.
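
The reward-and-penalty loop at the heart of reinforcement learning can be seen in a toy Q-learning trader on a synthetic price series. The state, actions, rewards and hyperparameters below are drastic simplifications chosen for illustration; none of this reflects the design of Prof An’s actual platform.

```python
import random

# Toy Q-learning trader (an illustrative assumption, not the NTU platform).
# State: whether the price rose (1) or fell (0) last step.
# Action: 0 = stay flat, 1 = hold the asset for the next step.
# Reward: the next price change, earned only while holding.
random.seed(0)
prices = [100.0]
for _ in range(500):  # synthetic price series with mild momentum
    drift = 0.05 if prices[-1] >= prices[0] else -0.05
    prices.append(prices[-1] + drift + random.uniform(-1, 1))

Q = [[0.0, 0.0], [0.0, 0.0]]           # Q[state][action] value estimates
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

state = 1
for t in range(1, len(prices) - 1):
    explore = random.random() < epsilon
    action = random.randrange(2) if explore else Q[state].index(max(Q[state]))
    reward = (prices[t + 1] - prices[t]) if action == 1 else 0.0
    next_state = 1 if prices[t + 1] > prices[t] else 0
    # Q-learning update: move the estimate towards reward + discounted future value
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
    state = next_state

print("Action values (rows: fell/rose; columns: flat/hold):", Q)
```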

SMART LEARNING WITH BRAINY BOTS

In 21st century classrooms, AI-based systems can enhance students’ learning through personalised education. Working at the forefront of educational technology, Assoc Prof Chen Wenli from the National Institute of Education (NIE) is confident that integrated AI learning tools will individualise teaching and learning.

“AI can capture information on students’ learning, especially in digital learning environments,” explains Assoc Prof Chen, adding that automated real-time analytics, assessments and feedback make learning more productive and engaging. In addition, she says, AI can improve resource accessibility for students with learning differences and help optimise schools’ resource allocations to better meet student needs.

There is, however, the potential for cheating by students using AI tools. 

Assoc Prof Hannah Yee-Fen Lim, an international legal expert on cutting-edge technology law, including AI, from NTU’s Nanyang Business School (NBS), says that when educators realise that generative AI functions by using iterative search, copy and paste commands, “it is very easy to set assessment tasks in most disciplines whereby generative AI programmes such as ChatGPT will perform badly, and thus would render them useless in helping students cheat in assignments”.

DIGITAL TWINS STEER TOWARDS A GREENER FUTURE

In environmental sustainability, AI is bolstering global efforts to go green. 

To minimise the energy usage of data centres – facilities that house computer systems – researchers led by SCSE’s Prof Wen Yonggang, who is also NTU’s Associate Vice President (Capability Building), developed an integrated industrial AI solution called DCWiz.

One of its technologies is Reducio, an AI tool that predicts temperatures inside data centres in real time to optimise cooling system operation, ultimately lowering energy needs.
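
A stripped-down version of that predict-then-optimise loop: assume a model of server inlet temperature as a function of IT load and cooling setpoint, then choose the warmest setpoint whose predicted temperature stays within a safety limit, since less aggressive cooling means less energy. The linear model, its coefficients and the 27°C limit are invented for this sketch and bear no relation to Reducio’s internals.

```python
# Predict-then-optimise cooling sketch (all numbers are assumptions, not
# Reducio). Toy model: inlet temperature rises with IT load and setpoint.
def predicted_inlet_temp(it_load_kw: float, setpoint_c: float) -> float:
    return 0.01 * it_load_kw + 0.8 * setpoint_c + 2.0

def best_setpoint(it_load_kw: float, limit_c: float = 27.0) -> float:
    """Warmest (cheapest) setpoint whose predicted temperature stays safe."""
    candidates = [18.0 + 0.5 * i for i in range(17)]   # 18.0 .. 26.0 C
    safe = [s for s in candidates if predicted_inlet_temp(it_load_kw, s) <= limit_c]
    return max(safe) if safe else min(candidates)      # fall back to max cooling

for load in (300, 500, 700):
    print(f"IT load {load} kW -> cooling setpoint {best_setpoint(load)} C")
```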

Another tool under DCWiz is DeepEE, which uses machine learning to optimise scheduled tasks such as temperature regulation in data centres, saving up to 15% of energy compared to existing energy-saving methods.

Such AI technologies can be used to improve the efficiency and accuracy of digital twins – virtual representations of physical systems that help predict how real-world facilities behave. Prof Adrian Law of NTU’s School of Civil and Environmental Engineering and colleagues at NTU’s Nanyang Environment and Water Research Institute have advanced digital-twin techniques to manage water treatment facilities.

The researchers developed a data processing framework with AI modelling that can learn from real-time data to better mimic dynamic conditions in the water treatment system. Simulations have shown that the framework can stabilise the quality of the treated wastewater despite large fluctuations in the influent water quality, while slashing operational costs and energy usage. The framework is currently being tested in pilot studies in prototype industrial wastewater facilities.
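
The “learn from real-time data” step can be pictured as an online model that keeps revising its estimate of a plant parameter as sensor readings stream in. In the sketch below, a drifting removal efficiency stands in for the plant’s dynamics; the noise levels, drift and learning rate are all assumptions for illustration, not the NTU framework.

```python
import random

# Online-updating "digital twin" sketch (illustrative assumptions only).
# The plant removes a fraction of pollutant each step; that fraction drifts,
# and the twin tracks it from noisy influent/effluent concentration readings.
random.seed(1)
true_removal = 0.90      # the plant's actual removal efficiency (drifts)
estimate = 0.80          # the twin's current belief
learning_rate = 0.05

for step in range(201):
    true_removal += random.uniform(-0.004, 0.004)      # slow process drift
    influent = random.uniform(50.0, 150.0)             # mg/L entering
    effluent = influent * (1 - true_removal) + random.gauss(0, 0.5)
    observed = 1 - effluent / influent                 # noisy efficiency reading
    estimate += learning_rate * (observed - estimate)  # online update
    if step % 50 == 0:
        print(f"step {step:3d}: twin estimate {estimate:.3f}, plant {true_removal:.3f}")
```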

MAKING AI HUMAN-CENTRED

While AI can help improve the way we do things, people’s acceptance of AI and its regulation are crucial for the technology to be integrated into society.

Researchers from France and Singapore, including from the French National Centre for Scientific Research (CNRS) and NTU, are working on a hybrid AI programme combining machine intelligence with human intelligence to develop solutions that support the creation of smart cities.

Beyond technical solutions, one of the thrusts of the S$56 million (US$41 million) DesCartes programme looks at AI that is centred on human needs.

The thrust seeks to understand the Singapore public’s expectations, understanding and perceptions of AI in smart cities. It also considers how organisations can be incentivised to develop trustworthy AI.

“For us to have the right technology and policies for AI, people need to be front and centre in the process of developing AI applications,” explains Prof Shirley Ho, NTU’s Associate Vice President (Humanities, Social Sciences and Research Communication), who is Singapore’s lead principal investigator for DesCartes’ thrust on human-centred AI research.

“The communication and philosophical questions we ask in DesCartes are vital to understanding public perceptions and acceptance of new hybrid AI technologies in Singapore,” she adds.

For instance, Prof Ho is investigating public perceptions, beliefs and acceptance of autonomous passenger drones used for transport.

This will help system developers factor in people’s concerns and views in the drones’ development, as well as guide how public messages on the drones can be conveyed to boost people’s confidence in the new technology.

NBS’s Assoc Prof Lim is analysing and proposing appropriate laws, regulations, best practices and ethics for the new AI technologies that scientists are building in DesCartes.

This ranges from taxi drones – which involve data protection, intellectual property and public liability laws – to AI innovations in sustainability involving environmental and intellectual property laws. She presented on some of these issues at France’s Sorbonne University in May 2023.

In a separate research project funded by Singapore’s National Research Foundation, called Future Health Technologies, Assoc Prof Lim is investigating data governance for healthcare AI, including AI robotics.

TRUST AND SHIFTING PERCEPTIONS

Despite AI’s benefits, experts such as WKWSCI’s Asst Prof Andrew Prahl point out that even the most robust commercial AI systems will not be perfect.

Asst Prof Prahl, who researches perceptions of AI in DesCartes, warns that organisations need to be prepared to respond appropriately should their AI platforms fail.

As part of their research, Asst Prof Prahl and his colleagues created a crisis response framework for organisations to help minimise the potential impact of AI slip-ups on company reputation and public perception.

Part of the work he is doing in DesCartes provides the audience research that will underpin any crisis response.

Experts have also raised concerns that since AI reflects human biases, it does not always operate objectively.

Dr Melvin Chen from NTU’s School of Humanities, a co-investigator in DesCartes’ human-centred AI research, says that these enmeshed biases contribute to people being wary of AI. “The encoding and reinforcement of systematic biases by AI systems make distrust the appropriate default attitude.”

According to Dr Chen, trustworthiness is more than a matter of competence, predictive accuracy and reliability. It also has to do with the ability to recognise and respond to the fact that one is being counted on by others. Technologists would do well to refrain from conflating reliability with trustworthiness, he says.

Due to the limitations of current language models, however, we cannot place our full trust in AI just yet. “Regardless of how successful programmes are at natural language tasks, they do not understand natural language in the way that human beings do,” explains Dr Chen.

UNDERSTANDING AI TO REGULATE IT

Even as perceptions shift, NBS’s Assoc Prof Lim cautions that while machines will never truly be smart, AI has reached a point where machines such as autonomous vehicles (AVs) can be passed off as being smart.

Lawmakers worldwide are still struggling to understand how AI functions, and hence to govern this space appropriately, says Assoc Prof Lim, who holds formal qualifications in both computer science and law and in 2018 published a pioneering interdisciplinary book on AVs. The book analyses how AVs ought to be regulated when both the science and the law are taken into consideration.

In the case of AVs, they are high risk and can cause great harm, including to innocent pedestrians, as the technology is not yet mature. However, Assoc Prof Lim explains that “when the limitations of AI are understood by regulators, they will then be cognisant of where regulation is most urgently needed and in what respects, even at the trial stages of AVs”.

The article appeared first in NTU’s research and innovation magazine Pushing Frontiers (issue #22, August 2023).