Peer reviewers: chill out and don’t let the power go to your head

It makes zero difference to reviewers if someone else gets a paper in a high-impact journal, so why are they so pernickety, asks Stephen Cochrane 

July 4, 2024

Research papers are academic currency. For better or worse, they’re one of the first things scientists look at when assessing candidates for research positions or potential new collaborators.

After all, having peer-reviewed research articles implies that someone can perform high-quality research and has a track record in a particular research area. The latter is especially important for securing grant funding, which we need to hire the people to do the actual research given that most academic scientists no longer do bench work.

In the UK, our publication track record is also one way that promotions panels determine our international reputation and research quality. Hence the phrase “Publish or perish”.

A casual observer might assume that since academics are all in this same proverbial frying pan, they would avoid holding each other’s feet too severely to the fire. They might expect a culture of positivity and support to have emerged in peer review. Unfortunately, this is often not the case.

A mentor once advised me that I should aim to review approximately three times as many papers as I publish because each of my papers would have been reviewed by three people and I ought to pay that forward, so to speak. In fact, I do more than this. Maybe I need to publish more or review less, but this considerable involvement in peer review has made me painfully aware of some worrying trends among some fellow reviewers.

My frustration at this led me to vent on X. This was nothing unusual: I vent on X about a lot of things. But while most of my posts attract just a handful of likes, this one caused quite an outpouring of affirmation and agreement.

In it, I simply shared my reviewing mantra: “1) If editor asks me to review, I assume it's appropriate for the journal. No impact factor gate-keeping from me. 2) I assume I'll accept it, unless the authors convince me otherwise! 3) I don't create busy work. If I can't find anything wrong, accept as is.”

I believe the reaction highlights the frustration shared by many of us when we get reviewer comments back, particularly in relation to my first point. In my opinion, it is the editor’s job to consider if a manuscript is suitable for their journal. After all, editors of scientific journals are often paid to perform that role: they should not be delegating such decisions to reviewers.

Nor should reviewers be so anxious to take on such decisions. When I review for high-impact journals in my field, I sometimes see other reviewers comment that while the manuscript’s science is sound and well supported by the data, it is not of sufficient impact, significance or novelty to be published in the journal in question.

I assume reviewers are driven to make such remarks by a sense that they are in competition with the manuscript authors to publish in such journals. Perhaps they themselves, as authors, have been the victims of such remarks and don’t see why others should get off more lightly. But I take a different approach.

In reality, it makes zero difference to reviewers’ careers if someone else gets a paper published in a high-impact journal. But it could make all the difference to an early-career researcher in particular. Although many funders explicitly ask grant reviewers not to take impact factors into account, it’s hard to beat back the bias towards a study supported by results recently published in Cell, Nature or Science.

Regarding my second point, I think it’s important to approach a review with a positive attitude. “Accept” is my default position, subject to certain checks. With data manipulation and fabrication on the rise, the first thing I look at is always the raw data. In some cases, it’s immediately obvious that the authors have fallen short of adequately supporting their conclusions, requiring major revisions to rectify.

In most cases, though, the data is sound and supports the conclusions. And while certain things could be better clarified (for me, this is almost always the figures), that only requires minor revisions. Which brings me on to my point three: creating busy work.

Every writer has their own style and, as reviewers, we need to accept this. Some reviews I’ve seen are so pernickety that they are really just creating work for the authors without even improving how the manuscript reads. I now avoid doing this. If I can’t find any errors or problems with the manuscript, I just summarise what I like about it for the editor and recommend publishing as is.

Ultimately, the way I believe we should approach peer review is the way we should approach everything in life: by treating others the way you’d want to be treated. Peer review is a voluntary, unpaid activity, so chill out and don’t let the power go to your head. Provided the science is sound, hit that accept button and brighten somebody’s day.

Stephen Cochrane is reader in organic chemistry and chemical biology at Queen’s University Belfast.



Readers' comments (5)

This is not very helpful to the cause. Surely the task of the reviewer is to see if the paper is up to scratch and to suggest ways of improving it. This is a service to everyone, including the author. A default "accept" helps nobody.
Agreed. While I rarely recommend rejecting an article outright, deciding what minor or major changes to recommend is an important part of being a reviewer, rather than just defaulting to accept. There will also be cases where the editor cannot judge the importance of the key outcomes of the article (the performance level of a material, for example, and how this compares with the rest of the field), so more expert advice is needed on the suitability of the article for that journal. This is reflected in most chemistry journals by the need to indicate whether the article is in the top x % of the field.
These responses are interesting to me because they don't seem to me to address the main thrust of Cochrane's piece. "Treat others as you would like to be treated" should be in every journal's guidelines to reviewers. We have all seen people hide behind the anonymity of reviewing to say gratuitously nasty things about a paper. A default setting of "accept" doesn't mean you accept everything. It means avoid the temptation to create what Cochrane calls "busy work", or the temptation to ask someone to write the paper you would have liked to write with this data, instead of engaging with and commenting on the paper they did write. He also notes that he always starts with a look at the raw data and rejects if there are problems there. I can't see anything objectionable in what he's recommending.
I enjoyed reading this article and agree with most of its sentiments. As Miriam Meyerhoff suggests, a Golden Rule approach such as "Review the work of others as you would expect your work to be reviewed" is in order. I would like journal editors to make their expectations clear by indicating that reviews are expected to be constructive, honest, ethical and scholarly. For instance, all conflicts of interest must be declared (including suggestions to cite work published by the reviewer), and all critiques of the reviewed work should be justified in a scholarly manner. Wholesale dismissal of papers without proper substantiation happens far too often and is highly detrimental to the careers of researchers whose work deserves respect. I also think that reviewers should be challenged to consider how their review might be received if it were made public. Could they stand behind it if their name were linked to the review? Finally, I would encourage editors to dismiss reviews that don't meet a reasonable standard and avoid sharing them with authors.
I like the article and would generally support its main points, with the addition that my main concerns are with the reproducibility of data: whether authors take shortcuts or cherry-pick, overstate their results or make claims of novelty without properly acknowledging prior art in the field.