Unsurprisingly, the new assessment criteria announced this summer for the 2028 Research Excellence Framework have provoked discussion and controversy. Research outputs – in the familiar form of publications and other more or less countable items – have had to relinquish 10 per cent of the total weighting to the far less countable “people, environment and culture” category. Consultation will establish how these slippery elements are to be described and assessed.
Wellcome was one of the first funders to treat positive research culture seriously in assessment, deciding in 2019 to give it an equivalent weighting to scientific excellence in judging which PhD programmes in basic science to fund. The trust has also created opportunities for self-reflective discussions with those involved in the funding calls, now in their fourth year, including Wellcome staff, academic applicants to the initial call and, currently, all staff and students in programmes selected for funding. We have learned many lessons along the way.
In our discussions with principal investigators (PIs) who had submitted proposals, many were excited at the prospect of being empowered to make changes in response to the uprating of good research cultures – and what they most wanted to change were traditional “toxic” supervisory practices. But while funders and researchers alike were confident of their ability to articulate what constitutes research excellence, many researchers were frustrated by the thought of being formally assessed on the basis of so nebulous a concept as the health of their labs’ research culture. Hence, culture’s equalised weighting with scientific excellence evoked discomfort, sometimes anxiety and occasionally even anger.
Mental health is a case in point. It is the most frequently named example of something that falls under research culture, but PIs pointed out that they are not trained as mental health specialists and do not want to be; they feel uncomfortable at having this aspect of research training given such a prominent place in their supervisory role.
However, issues around mental health are inextricably linked with typical scenarios in which research excellence is judged. Review panels are a good example. Cultural questions about them include who the members of the panel are, who they represent, what kinds of biases they may be subject to, which research questions and methods they are likely to prioritise, how conservative they are in their interpretations of the evaluation criteria, and how they reach consensus. The list goes on.
While the people on review panels are not mental health experts (unless that’s the topic of the research), it’s not controversial to say that many aspects of normal human emotional life and mental health come into play in the subtleties of these scenarios: the stresses of power dynamics, the frustrations of not being heard, the pleasures of having one’s judgement affirmed, and so on.
Then there’s the massive impact of the panel’s decisions on those whose research is judged – who have felt compelled to put in an inordinate amount of work on their applications despite the statistical unlikelihood of success. Mental health, it turns out, is not an externality best left to specialists. Impacts upon it are part and parcel of the practices of funders and research institutions.
Shifting the balance between research excellence and research culture within a common research evaluation framework requires a better grasp of the connections between the two. The perception that research culture is problematic because it is less amenable to measurable evaluation is itself a feature of the very research culture that needs to be transformed. The narrowly focused quantitative measures of current research evaluation are not born of purely scientific criteria but are expressions of a problematic approach to research.
As long as research is produced, communicated and evaluated by groups of humans acting together, it cannot but be cultural as much as it is scientific. We might think of culture as science’s unconscious, shaping our behaviours without our being aware of it. But it needs to be exposed to the light so that it can be reformed through some sort of therapeutic process.
We are not sure that REF 2028 will be that process: it depends on how the evaluation of culture is done. Talk in the “initial decisions” document, published in June, of a “tightly defined, questionnaire-style template” does not augur particularly well, risking making it a tick-box exercise and provoking self-defeating institutional competition around research culture.
A more honest account of an institution’s research culture could be achieved through structured discussions within self-reflective communities of practice, characterised by diversity, flatness of hierarchy and trust, whose members decide among themselves the relevant topics to debate. No doubt organising such discussions would be time-consuming for REF administrators, but we doubt it would be more so than previous REF arrangements. And when what rides on it is the health of both research and its culture, in one and the same measure, the time will be worth it.
Annamaria Carusi is director of Interchange Research and runs the Emerging Research Cultures project, commissioned by Wellcome for PhD training programmes. Shomari Lewis-Wilson is senior manager, research culture and communities at Wellcome.