University ethics panels are ill-equipped to deal with the complex challenges arising from emerging fields of research such as artificial intelligence, a study has claimed.
With AI research now accounting for more than 4 per cent of all research published globally, the scope and nature of research involving data science have expanded massively in recent years. Ethical considerations, however, still largely centre on protecting the privacy and anonymity of research subjects, or obtaining consent from in-person participants – reflecting traditional “biomedical, human-subject research practices which operate under a researcher-subject relationship”, explains a new report by the Ada Lovelace Institute, the Alan Turing Institute and the University of Exeter.
Novel types of research undertaken by data scientists, however, should be considered in terms of a more nuanced “researcher-data subject relationship”, argues the report, Looking Before we Leap: Expanding ethics review processes for AI and data science research, which highlights how the harmful uses of research may only become clear once funded studies are well under way.
It cites several high-profile examples of controversial and unethical AI research being approved by research ethics committees, such as facial recognition algorithms that claim to identify homosexuality or criminality, and chatbots that can spread disinformation.
To prevent this type of AI research going ahead, the report recommends that institutions and researchers “engage more reflexively with the broader societal impacts of their research, such as the potential environmental impacts of their research, or how their research could be used to exacerbate racial or societal inequalities”.
Ethics committees should incentivise researchers to engage in these “reflexive exercises” throughout their research, and scholars might be asked to submit a statement of potential societal impact – both good and bad – before submitting an article for peer review or to a conference.
Universities should also seek to include more academics from different disciplines in ethics reviews, while training for ethics review boards should be supported and staff financially rewarded for their time on these bodies.
The report, which was funded by the Arts and Humanities Research Council, highlighted the need for the “proper consideration” of the potential risks of AI research, said Andrew Strait, associate director (research partnerships) at the Ada Lovelace Institute.
“Traditional oversight mechanisms, such as research ethics committees, are struggling to deal with the scope and nature of these AI and data science risks,” he said.
“Our research, however, concludes that with the right resources, expertise, incentives and frameworks in place, they can play an important role in supporting responsible AI and data science research.”
Niccolò Tempini, senior lecturer in data studies at the University of Exeter and a Turing fellow at the Alan Turing Institute, said that the report would help provide “direct, practice-oriented guidance on how to develop research ethics governance processes that are up to the challenge”.
“Our report aims to help satisfy some of the questions raised and guide local decision-makers in finding an ethically and organisationally sustainable research ethics strategy,” he said.