Do narrative CVs tell the right story?

A push to end the habit of assessing researchers by their publication metrics is gaining momentum. But are journal impact factors really as meaningless as is claimed? And will requiring scientists to describe their various contributions really improve fairness and rigour – or just add to the bureaucracy? Jack Grove reports

December 9, 2021

“This is not just a Dutch disease. Everywhere, people are playing this game of chasing journal impact factor,” reflects Frank Miedema, vice-rector of research at the Netherlands’ Utrecht University.

No one was more “addicted” to publishing solely in highly cited journals than he was, he admits – but that addiction was driven by systemic pressures. “I played the game because I knew it’s what I needed to do to become a professor,” he explains. But it is a game, he contends, that is not just skewing hiring and funding decisions in favour of those adept at managing their metrics but is also harming science and limiting the potentially transformative benefits that can flow from it.

That point was underlined by a hospital visit Miedema made to see his older brother, who had been partially paralysed by a stroke. Since medical rehabilitation was deemed unlikely to lead to publication in top journals, it had become a “low academic priority” and received “very modest investments” compared with other topics, such as his own field of HIV research. That situation left people like his brother facing a long road to recovery. 

“I could fill many pages with similar problems of agenda-setting being distorted by the incentive and reward system,” explains Miedema in his new open-access memoir, Open Science: the Very Idea.

Miedema is one of more than 18,000 individuals and 2,300 institutions that have embraced the San Francisco Declaration on Research Assessment (Dora) since it was drawn up in 2012. The document calls on institutions to “eliminate the use of journal-based metrics, such as journal impact factors [JIFs], in funding, appointment, and promotion considerations” and, instead, to “assess research on its own merits”. It hasn’t always been clear that signatories were backing those words with action, but the recent adoption by several major European funders and universities of “narrative CVs”, some believe, could mark a major turning point.

UK Research and Innovation, for instance, announced in April that it would introduce an “inclusive, single format for CVs”, based on the Royal Society’s Résumé for Researchers initiative, which asks scholars to spell out in 1,000 words their contributions to knowledge, broader society, the research community and the development of other scholars and students. Under that format, launched in 2019, researchers can still list their publications, funding and awards, but these must “fit naturally” within the tight four-part narrative – effectively ruling out long lists of outputs or prizes.

“Traditional academic-style CVs” that “emphasise positions and publications” fail to “systematically capture the much wider range of contributions, skills and experiences necessary for a world-class research and innovation endeavour”, the funder explains.

The Résumé for Researchers approach, which has already been trialled by some UK research councils, follows similar moves in the Republic of Ireland, France, Switzerland and the Netherlands. But while many academics have welcomed the reforms, others are less convinced.

The fixed format of narrative CVs – with specific sections on public engagement, among other areas – “forces academics to be good at everything”, believes Raymond Poot, associate professor at the Erasmus University Medical Center, Rotterdam, who co-authored an open letter earlier this year criticising the introduction by the Dutch Research Council (NWO) of narrative CVs into its “Veni, Vidi, Vici” talent scheme.

While leaving certain sections of narrative CVs blank is, in theory, possible, Poot believes that would reflect badly on the individual concerned and that people will therefore try to round out their skill sets: “The reality is that people will want to get good marks on every part of their evaluation.” But he worries that this may see deep expertise sacrificed in pursuit of that all-rounder status.

“If you say, ‘I’m a bit of a nerd who just writes code – but really good, world-class code that benefits my team – and I don’t really do engagement’, that is going to look bad,” he says. However, while it is “fine for people to tell their story and promote themselves”, he believes that they should not be obliged to “say what they do not do”, so as not to discriminate against specialists.

Poot’s letter, which was signed by 172 Dutch academics, including the Nobel laureate Ben Feringa, also poured scorn on Utrecht University’s decision to ban the use of JIFs in its new recognition and rewards scheme, which tilts heavily towards narrative-based assessments. JIFs, which measure the average number of citations received in a given year by the papers a journal published over the preceding two years, have long been criticised for fuelling what some see as an unhealthy obsession with publishing in big-name journals and for conferring a sometimes misleading veneer on papers published in those journals – some of which can, nevertheless, accrue very few citations.
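
For reference – this is the standard two-year definition used by the metric’s owner, Clarivate, rather than anything spelled out in Poot’s letter – a journal’s impact factor for a given year is calculated as:

\[ \mathrm{JIF}_{2021} = \frac{\text{citations received in 2021 by items the journal published in 2019 and 2020}}{\text{citable items the journal published in 2019 and 2020}} \]

So a journal whose 2019 and 2020 output of 200 citable items drew 1,000 citations in 2021 would have a 2021 JIF of 5 – a figure that says nothing about how those citations are distributed across individual papers.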

Yet Poot and others believe that the metric should not be dismissed entirely. They reject what he calls the “misconception that a journal’s impact factor does not correlate with the quality of its publications”. The global experts who often review for top journals, such as Science, Nature or Cell, help to safeguard their quality, he says: “This metric isn’t something invented by a corporate system out of nowhere – it reflects the judgements of many experts across the world.”

In practice, busy grant or selection panels do not have time to delve into the minutiae of every paper referenced by dozens of applicants, particularly if the paper was published in an obscure journal, Poot adds. “Journal impact factor was, at least, a stamp of quality from other reviewers that has now been lost,” he says. “We’ve never said institutions should solely use this metric – it should be a part of a broad evaluation process and I can’t see why withholding a potentially useful piece of information from reviewers would be helpful.”

However, it is not unusual for institutions to control how certain types of information are viewed by reviewers during a selection process if it potentially biases them for or against certain applicants, observes Michael Hill, deputy director of strategy at the Swiss National Science Foundation (SNSF). “You would never ask for a photo of an applicant – nor would you ask if they came from an aristocratic family or not,” says Hill. He likens the selective withholding of such information to the jury trial process, in which judges will not, for instance, disclose a defendant’s criminal convictions lest they should prejudice the present trial. “There is good research to show that if you are made aware of someone’s previous funding success, it can have a big impact on evaluations, so there are good arguments about not including everything,” he says.


Hill has led SNSF’s pilot of SciCV, a standardised online CV format for those seeking project funding in biology and medicine. It doesn’t allow applicants to list their full publications but instead invites them to describe up to four “contributions to science”. They can link to relevant papers if necessary but can’t reference their JIFs.

“Journal impact factor is a ridiculous measure – it’s just not helpful,” insists Hill, who is also a board member of Dora. The measure was originally intended to help librarians organise where journals might go in the stacks, rather than decide the careers of scientific researchers, he explains. “High impact journals are great – no question – and publish some brilliant papers, but it’s not true that all the best papers are represented there,” he argues, pointing out that some era-defining ideas were not picked up by top journals.

“The paper that started the bitcoin revolution was not even published in a journal – that was published in an email list by Satoshi Nakamoto – and this idea of a digital version of trust is arguably one of the most important arguments in the last 20 years,” he says. “If this person applied for a grant, you might dismiss their idea immediately if you paid much attention to the JIF.” And he believes that the objection that highly cited papers distort impact factors has only been strengthened by recent research indicating that the “avalanche” of citations accrued by Covid-related papers is set to greatly boost the JIF of some virology journals.

Hill concedes that the SciCV method, while endorsed by a majority of scientists, has not been embraced by the entire Swiss research community. “Some evaluators have looked beyond the SciCV at full publication lists, even though this is not supposed to happen,” he admits. Nor was the pilot phase of the format as metric-free as he would have liked. At the insistence of some Swiss scientists, grant applicants were still asked to list their h-index, a hybrid, author-level measure of citations and output, and their articles’ relative citation ratios, a metric developed by the US National Institutes of Health that compares actual citations with the average for the particular field, determined by the paper's references. But Hill hopes that both of these measures will be dropped in SciCV’s next iteration. “We couldn’t do it exactly how we wanted straight away and it will take time to get it right,” he says.
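
For readers unfamiliar with the h-index, the calculation behind it is simple enough to sketch in a few lines of Python – a minimal illustration with invented citation counts, not anything drawn from the SNSF pilot:

    # Minimal sketch: compute an h-index from a list of citation counts.
    def h_index(citations):
        # Rank the papers from most to least cited.
        ranked = sorted(citations, reverse=True)
        h = 0
        # The i-th ranked paper (counting from 1) can support h = i
        # only if it has been cited at least i times.
        for i, cites in enumerate(ranked, start=1):
            if cites >= i:
                h = i
            else:
                break
        return h

    # Five papers cited 10, 8, 5, 2 and 1 times give an h-index of 3:
    # three papers each have at least 3 citations, but there are not
    # four papers with at least 4.
    print(h_index([10, 8, 5, 2, 1]))  # prints 3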

The SciCV pilot has also begun to rebut some of the criticism of the narrative CV approach. One criticism, voiced in a letter by early career Dutch scientists in July, was that the emphasis on “self-marketing” required by a narrative CV would put “scientists from a certain cultural background at a disadvantage”, particularly those where “modesty is the standard” and “valued more highly than the ostentatious presentation of achievements”. Far from promoting diversity – the stated aim of many narrative CV projects – it would privilege insiders who know how the game is played at certain institutions, or those with a predilection to exaggerate their contributions, the letter suggests.

Thrown into “marketing competitions between scientists”, women may potentially lose out, the authors added, given their reported reluctance to self-promote in the same way as men. But textual analysis of the first year of SciCVs shows no significant difference in the way that men and women present themselves – both sexes routinely used words like “expert”, “success” and “first”, says Hill.

Hill also dismisses accusations that narrative CVs are inherently “unscientific” because they force reviewers to make “subjective” choices based on applicants’ narratives. “Subjectivity is not a bug in the system – it is just a feature of any selection process,” he insists. “Even if you had a CV containing just tables of information, reviewers would still need to interpret these CVs.” Narrative versions will, at the very least, allow a conversation to take place in the selection process about the type of excellence being recognised, he believes.

“Before we start trashing the new system, maybe it’s worth considering whether the old system is really better,” he says.

Utrecht’s Miedema agrees. “Some people think the [old] system is God-given, but this approach has only happened since the 1980s and has played out particularly strongly in the biomedical world,” he says. Other subjects, such as history and philosophy, have always relied on narrative approaches to CV writing, rather than deferring to metrics-based evaluations, he observes: “Is it helpful to have these [numerical] proxies used so widely in science? In some cases, people get papers in top journals because the last author is really famous or the lab has a good track record. I’ve had papers in Nature that were never cited and probably shouldn’t have gone there in the first place.”

An illustration of how fractious the introduction of narrative CVs can be is provided by the US. There, the National Institutes of Health (NIH), which hands out $52 billion (£38 billion) every year, has required grant applicants to submit a “biosketch” for many years. But in 2015, it introduced a new, expanded format, the “revised biographical sketch”, which was “designed to emphasize an applicant’s accomplishments over bibliometric rankings” by allowing applicants to describe “up to five of their most significant contributions to science, along with the historical background that framed their research”.

The NIH blog that announced the new format attracted some scathing criticism. “As an applicant, spending even more time on non-science-related drivel just makes an already onerous application process more difficult and wasteful. As a reviewer, I can’t imagine how I would use this information,” explained one senior research scientist, adding: “If applicants have published meaningful work in decent journals, this productivity speaks for itself.” Another insisted that “reviewers already give little weight to the bloviating in the current biosketch personal statement section and tend to just use PubMed and Project Reporter to draw their own conclusions about the qualifications of the PI”.


Such objections were still in evidence in a 2017 survey of more than 2,100 NIH applicants and 418 reviewers. Opponents disliked “the increased burden associated with preparing the new biographical sketch, the subjectivity of the new format, and the possibility that some applicants might overstate their own accomplishments”, the resulting report says. There was also concern that the narrative format was unsuitable for early career researchers with more modest achievements to draw on, while “many respondents commented that the new format does not provide relevant information for peer review, or that it presents the opportunity for inflated self‐assessments”. That last objection may be why 42 per cent of respondents thought the revised biosketch would boost their chances of winning a grant, against only 17 per cent who thought it would hinder their chances.

Not everyone opposes biosketches, of course. While some scientists claim that they take up to six hours to complete, and then hours more to tweak and tailor to subsequent applications, compiling them is “certainly not on my top five list of onerous aspects of NIH grant submission”, according to Michelle Mello, professor of medicine at Stanford University’s Center for Health Policy. Moreover, she believes they can be useful. “One thing I like about the latest revision is that applicants are meant to tailor the personal statement to the particular project, so reviewers get a decent sense of how their past experience prepares them to conduct that specific work,” she says.

But the grumbles continue and are rooted in the far higher volume of grant application bureaucracy that US researchers face compared with their European counterparts, explains Gary Cutter, professor emeritus at the University of Alabama at Birmingham’s School of Public Health. For him, the requirement to include a biosketch for every key individual on a grant application is particularly irksome. “I might be named on 30 to 50 or more grants in a year, only about 15 or 20 per cent of which are funded, and usually not on the first submission,” he says. “So my tolerance for the time spent on items that contribute only a little to the decision making is low.”

In his opinion, many reviewers rarely read biosketches, while others “pore over them and notice the typos”, he continues. While he is not against them, he worries that minor breaches of the stipulated format – “the latest change was to mandate that your job history had to be in reverse chronological order and that the education history be in chronological order” – could lead to grants being rejected, imposing a further “burden on already overly managed investigators”.

Of course, reviewers increasingly feel overburdened too, and the NIH’s Center for Scientific Review is leading an effort to simplify review criteria, recommending that the assessment of science be separated from the assessment of the applicant and their research environment. “If that happens, it will open the door to a two-stage review, where the science is evaluated before reviewers see a biosketch,” an NIH spokeswoman tells Times Higher Education.

In addition, the NIH is also piloting blinded review, which would similarly see the removal of identifying materials from the applications initially seen by reviewers. Such a move would not make sense for fellowship applications, “where you are judging both the ideas and the person”, notes James Wilsdon, director of the UK’s Research on Research Institute and professor of research policy at the University of Sheffield. But he thinks that narrative CVs are a useful tool to judge individuals.

“No framework is perfect but the move to narrative CVs is a good one as they represent the multi-dimensional types of excellence we need in universities and research, and they recognise that [metrical] shortcuts to assessing research lead to certain problems around gender and inequality,” he adds.

Back in the Netherlands, the briefer narrative CV that has been introduced is much less onerous to compile or to read than the NIH biosketch. But, according to Erasmus University’s Poot, that is precisely the problem.

“Reviewers are saying, ‘I have an A4 page to judge applicants and it is just not enough to come to a conclusion’,” he says. Worse, he worries that the narrative CV constitutes a “playground for some people’s personal politics”, with advocacy driven by an “ideological” desire to curb the power and profits of the large commercial publishers that oversee most of the top journals.

“I understand [that people] are fed up with the power of big publishers, but we have to be careful that the new system does not harm scientists,” he says, lamenting the lack of detail on what to include in narrative CVs and how they will be interpreted by reviewers and hiring panels. “They have destroyed the old system and are only now thinking about the new system in terms of policy or governance. That is not fair on scientists whose careers are on the line.”

For his part, Utrecht’s Miedema hopes the new narrative CVs that his own and other universities in the Netherlands are adopting will re-energise universities’ and funders’ internal discussions of scientific merit, which have to some extent been stifled by rote comparisons of metrics.

“We have to start talking to each other. We’re asking people to argue and explain their contributions, which will encourage papers to be read,” he says. Those conversations will inevitably lead to consideration of the different types of excellence required by university and research systems, he continues. They need not realise Poot's fears of forcing everyone to become all-rounders.

“We need people who can do public engagement or multidisciplinary work [but] if someone just wants to talk about their coding skills and why they are needed in a laboratory with people doing other things, they can argue that too,” Miedema says.

However, he concedes that this valuing of diversity will only occur if university leaders truly buy into it and commit to genuinely rewarding the differing contributions that narrative CVs showcase.

“This requires leadership from vice-chancellors and university presidents,” he warns. “There cannot be fundamental change without it.” 

Reader's comments (3)

The problem in the UK is an emphasis on funding rather than citations. The reason that I have not made it to Professor is that my funding record is rather modest. However, since my work is computational, I have still been able to publish the papers needed to keep my Russell Group employer happy. Yet there are many instances of promotions where the individual concerned has a worse citation record than mine but brings in substantial amounts of grant income. My career has been that of the all-rounder but this has not done me any good, so I think that there is a place for a diverse range of academics and the main aim must be to avoid a "one size fits all" approach.
Just remind me, how did we assess research outputs 100 years ago, in 1921? And if it was different, what in HE has changed to necessitate this shift in assessment regime? And why has it changed? Hmmmmm....
I would like to make a few points here:
1. I absolutely agree with the need to look at the whole picture, especially when we are talking about established senior faculty. You need to be good, or at least able, in all areas, otherwise you are not qualified to be a professor. It always depresses me when I see CVs of 50+ researchers who have obviously done absolutely nothing in their careers except push their own research.
2. Having said this, I am absolutely against the type of narrative CV ideas described here. Without doubt it will advantage only smooth-talking white men like myself (I am not convinced by the evidence presented to the contrary).
3. The IF discussion seems to be very narrow and naive, focusing on a number only. Every time I publish in a high-IF journal there is so much extra work going into that, and about 90 per cent of that extra work directly relates to the "multi-dimensional types of excellence we need in universities and research". It is about putting your work in a bigger context, valorising it for researchers in adjacent fields and for society at large, and providing more popular content such as press releases and follow-up articles in magazines and other popular science outlets. And not to forget having all these items scrutinised and quality-stamped by a number of highly skilled professionals.
4. While it is true that not all groundbreaking research is published in the highest-IF journals, this doesn't necessarily put the high-IF journals at fault. If you have done groundbreaking research, it is your duty as a scientist to make it go as high-IF as possible in order for society at large to profit faster. But it takes a lot of extra effort, and obviously it is also a bit of a risk, as the peer review system is what it is, for good and bad. Sometimes groundbreaking research gets rejected, just as publishers will reject novels that will become best-sellers. It's only human.
