As I sat in a dark lecture theatre waiting for a conference session to begin, a colleague walked up to me and whispered: “Do you still think scientists are good people?”
She was reminding me, jokily, of a conversation in 2004 during which I, as a PhD student, had smugly bragged that UK higher education was morally superior to the pharmaceutical industry, for which she had just left academia.
In truth, I have spent my academic career thus far avowedly believing in the idea of the “moral academic”. But I’m becoming increasingly sceptical.
After all, what does scientific morality really amount to? Most universities in the UK have world-class “ethics in research” courses, which medical students, postgraduate research students and staff are obliged to attend. But these are narrowly focused on experimental and publishing rectitude, tackling issues such as consent, randomisation, blinding, reproducibility and bias. And despite the ubiquity of such training, many of us know of scientists who have faked results, cooked their statistical analyses, refused to share data or withheld results – not to mention bullied, sexually harassed, discriminated against minorities and committed all manner of other breaches of workplace ethics.
Then there are all the ethical implications of our research, which we rarely even consider. Silicon Valley is widely castigated for unleashing all manner of game-changing algorithms on the world without giving a thought to whether they will actually make it a better place. But, in my experience, university scientists give such considerations similarly short shrift.
Scientific methods, tools and technologies are becoming more sophisticated and powerful. And the scientific challenges facing us are increasingly threatening and controversial. Throw in the demand to publish high-profile papers and capture large grants, on top of the human desire for fame and fortune, and you have a tinderbox of moral dilemmas.
A further complication is that universities are becoming ever more ethnically, demographically and ideologically diverse. Many scientists have been raised in communities, in the UK or overseas, that have diverse systems of beliefs, religions and spirituality, and it is not controversial to say that such backgrounds often remain strong influences on them.
Many people from these backgrounds struggle to negotiate a wide array of issues in universities, ranging from identity, religious values and tolerance to genetic engineering, cloning, stem-cell research and AI. Yet these struggles are rarely acknowledged. It is widely assumed that the ethical values of all scientists in the UK are Anglo-European ones – and, therefore, that a one-size-fits-all, Western-centric approach to teaching ethics is all we need. As a London-born scientist of North African heritage, I dispute that.
A case in point is a PhD student I know, also of North African origin. He was well acquainted with standard experimental ethics, but he struggled with complex and multidimensional ethical questions relating to performing an experiment, demanded by reviewers, that involved knocking out a gene (some Islamic scholars question the legitimacy of genetic manipulation). “I did not sign up for this!” he exclaimed. “I ended up having a chat with the Imam at my local mosque.”
The worst part is that most scientists who teach research ethics have no moral education and limited training for the role. This is not a criticism of them: very few scientists have the time for that. But we can and must do better.
Universities are not short of experts in morality, after all. But apart from occasionally advising research ethics committees, they are usually confined to the humanities faculty. It is not a novel suggestion that they should become more involved in scientific education. As long ago as 1986, the late US philosopher and academic Michael L. Martin argued that “science education and moral education are mutually relevant and should be combined”.
We need humanities academics – including those from a range of non-Western backgrounds – to teach scientists moral values, such as concern for people, empathy, respect, tolerance and kindness, as well as addressing concerns about new technologies from cultural and religious perspectives. They should be enlisted to curate and teach additional courses in ethics that suit the needs of our increasingly cosmopolitan community by speaking to our common humanity.
Many scientists detest the idea of philosophers as the “guardians of ethics”. However, I am dog-tired of those who dismiss – usually as victims of impostor syndrome – any scientist who believes that scientists alone cannot be trusted with the moral education of tomorrow’s leaders in our field. Such responses are emblematic of a culture of arrogance. Even OpenAI, the maker of ChatGPT, assembled academically mixed “red teams” to ethically test and educate its latest machine-learning model, GPT-4.
The British public has immense trust in science, and scientists such as the government’s chief medical adviser Chris Whitty were held in high regard during the Covid-19 pandemic. But we should be concerned by rising negative perceptions and anti-science sentiment. Advances in fields such as genetic engineering, and morally loaded research in areas such as neurodiversity and virology, will unnerve people and risk sparking grassroots backlashes. If people are to continue to support science, they must not doubt our integrity.
No one expects scientists to be perfect, of course. But if we do not find ways to counter the narrative that science is broken, in crisis or, worse, corrupt, the public will think that academic scientists are no better than the self-serving pharmaceutical executives who instigated the opioid crisis in the US. This serves no one’s interests.
Aymen Idris is a researcher who chairs the Africa in Science (AiS) thinktank.