Addressing evolving challenges to research security in the age of AI
Clearer government regulation could support universities in addressing risks to research security and compliance presented by AI
As technology continues to reshape research practices, ensuring research security has become increasingly complex, and several high-profile security incidents have put the topic in the public eye. From data security and regulatory changes to ethical dilemmas and concerns about AI, universities must navigate new and fast-evolving challenges. During a THE webinar, held in partnership with Digital Science, panellists from industry and academia came together to discuss these growing challenges and related issues.
“We have a polluted information river,” said Leslie McIntosh, vice president of research integrity at Digital Science. “It’s difficult to understand what we should trust, what is real and what we shouldn’t trust. With the opening up of research, we have been challenged with understanding the scaffolding of trust that goes around it.”
The rise of AI is, of course, part of this information pollution. The technology’s ability to fabricate content is undermining institutions’ ability to distinguish fact from fiction.
“Over time, the risks associated with research security have become more dynamic and complex,” added Shannon O’Reilly, product sales team lead at Digital Science. “Universities have to increasingly take proactive measures to assess institutional risks, verify disclosures and review researcher networks. For many, it’s a new and challenging landscape to navigate and one that is rapidly changing. The complexity of these issues, combined with the vast networks of research activity, means institutions can’t operate on their own.”
This is where data and analytical solutions providers such as Digital Science come in. Digital Science has created Dimensions, a linked research database with a mission to provide users with the right data tools to empower them to make their own discoveries and analyses when it comes to research security.
“As an educational institution, how can we teach students to do secure research during their studies and beyond as computer scientists and technology developers?” asked Eduardo Alonso, director of the Artificial Intelligence Research Centre at City St George’s, University of London. “Norms and standards provide guidelines but they are not enough. There is no way around the developers themselves being ethically responsible and accountable.”
“When we talk about AI, we are not necessarily just talking about the use of AI as a tool,” said Chaitali Desai, head of research compliance at the University of Bristol. “We also talk about it in the context of the legislation that we manage. My team looks at the regulatory obligations arising from export control, the National Security and Investment Act and trusted research principles.”
Academics may not necessarily view their research as a risk but AI could be used to repurpose it in unforeseen ways. “There are hostile nations that governments and regulators will tell us are using AI and the work that we do as a springboard to develop their own work at greater speed,” said Desai.
“AI poses both opportunities and challenges for research,” said Martha Wallace, director of research security at the University of Calgary. “Beyond the potentially alarming military applications, there are also many potential benefits of AI. It’s an exciting time for AI research and the governments building policies around it.”
For universities to adopt clear standards for using AI sensibly, clearer government guidance may be necessary. “I don’t think we need more law,” said Desai. “The law simply needs to be clearer. Whether you’re an academic or a lawyer, interpreting legislation allows you to understand what governments are driving at and what they want us to do to be compliant with these regulations.”
The panel:
- Eduardo Alonso, director, Artificial Intelligence Research Centre, City St George’s, University of London
- Chaitali Desai, head of research compliance, University of Bristol
- Alistair Lawrence, head of branded content, Times Higher Education (chair)
- Leslie McIntosh, vice president of research integrity, Digital Science
- Shannon O’Reilly, product sales team lead, Digital Science
- Martha Wallace, director of research security, University of Calgary
Watch the webinar on demand above or on the THE Connect YouTube channel.
Find out more about Digital Science.