Centralised evaluation of individual researchers will be the starkest challenge for Europe’s renewed research assessment reform agenda, according to national experts.
More than 300 organisations have signed the Agreement on Reforming Research Assessment, committing to broaden and diversify evaluation. An accompanying coalition, which elected leaders in December, will help signatories switch to using mainly qualitative judgements of research, and recognise wider work, such as teaching and leadership.
Returning to peer review from an era dominated by publication metrics is an ongoing challenge for many funders and departments, but for national systems that pre-screen tens of thousands of academics’ work before hiring or promotion, the change is even more formidable.
“The impression that we get is that this reform is going to be very difficult for any country that has a centralised system,” Ismael Rafols, a senior researcher at the Centre for Science and Technology Studies at Leiden University, and an author of the landmark Leiden Manifesto, which in 2015 set out principles on the use of metrics in evaluation, told Times Higher Education.
Dr Rafols said Italy, Poland and Spain all had high-volume systems that would struggle to move away from machine-crunchable bibliometrics.

“This is one of the critical points of the new agreement, because when you have to evaluate large volumes of individuals, implementing massive peer review can be a problem: finding the reviewers, handling the conflicts of interest,” said Alberto Credi, vice-rector for research at the University of Bologna.
Fears of nepotism drove the construction of many metrics-based systems, and a return to peer review will require trust. “Generally in Poland I believe we have a big mistrust in expert evaluation. This is why we want to cling, some early career researchers also, to ‘objective’, numeric parameters, because we feel like it is individual-bias resistant,” said Jacek Kolanowski, chair of the Polish Young Academy and head of department at the Institute of Bioorganic Chemistry of the Polish Academy of Sciences in Poznań, adding that he saw more trust in peer review while working in France, Germany and Australia.
Professor Credi said it would take “several years” to build trust in expert evaluation in Italy. To apply for permanent positions, Italian academics must pass through the national habilitation system (ASN), which is based solely on bibliometrics, while the subsequent concorsi competitions, held by institutions, are based on peer review.
Italy’s previous universities minister, Maria Cristina Messa, had signalled plans to make the ASN “much lighter and less constraining, giving more importance to the local evaluation”, said Fabio Zwirner, vice-rector for research at the University of Padua, but those plans stalled when she left office after September’s election. “It was not clear how she was thinking of realising that, because she was mentioning artificial intelligence tools. I couldn’t disagree more with this approach,” he added.
THE understands that the latest draft of Spain’s updated universities law also softens the role of the national accreditation system in hiring and promotion decisions, giving more heft to local judgements, although tallies of publications and conference attendance are currently set to remain.
Decentralisation could bring both countries closer to the ideal set out in the agreement, although aberrations may still occur. A September research competition for Italy’s national Covid recovery fund asked applicants to cite their “total impact factor”, the sum of the impact factors of the journals in which an applicant has published, a measure that “does not make any sense to me and many of my colleagues”, said Professor Credi.