Peer review will only do its job if referees are named and rated

We need a mechanism whereby academics can build a public reputation as referees and receive career benefits for doing so, says Randy Robertson

August 14, 2024

Last year, a splashy headline in USA Today caught my attention: “Penis length has grown 24 per cent in recent decades. That may not be good news.” Science journalism being what it is, the article links not to the meta-analysis that drew these conclusions but to Stanford Medicine’s advertisement of it. Still, the original article, “Worldwide Temporal Trends in Penile Length: A Systematic Review and Meta-Analysis”, does indeed contend that “erect penile length increased 24 per cent over the past 29 years”.

Hmm. If you’re sceptical, so was I – and, sure enough, looking over the meta-analysis and checking the original studies, I found a few problems. First, while the authors claim to have included only studies in which investigators did the measurements, at least three of the largest studies they draw on were based on self-report – which, for obvious reasons, often proves unreliable. Second, there was no consistent method of measurement, with most studies not even noting the method used, rendering comparisons impossible. Finally, the authors inflated the total number of members measured.

In case you’re wondering, I’m not a part of the Data Colada sleuthing team. I’m an English professor at a liberal arts college.

I sent my concerns to the corresponding author and then to the journal’s editor. The rhetoric of their response was fine: the authors acknowledged the problems and even thanked me for pointing them out, which must have been hard. Nonetheless, though they vowed to revise the article, neither they nor the journal editor has yet published a correction eight months on.

What distinguishes this case from the raft of flawed studies that critics have exposed in recent years is that this study is a meta-analysis, the supposed gold standard in science. If meta-analyses, which are designed to weed out poorly conducted experiments, are themselves riddled with rudimentary mistakes, science is in deeper trouble than we thought.

The humanities, naturally, are even worse. Historians and literary scholars wrest quotes from context with abandon and impunity. Paraphrase frequently proves inaccurate. Textual evidence is cherry-picked and quoted passages are amputated at the most convenient joint.

One lesson to draw, of course, is caveat lector: readers should be vigilant, taking nothing on faith. But if we all need to exercise rigorous peer review every time we read a scholarly journal, then the original peer review process becomes redundant. The least that reviewers should do is to check that authors are using their sources appropriately. If an English professor could see the penis paper’s grave errors, how on earth did the peer reviewers not see them?

Some suggest abandoning pre-publication review in favour of open post-publication “curation” by the online crowd. But this seems a step too far, even in a digital environment, likely leaving us awash in AI-generated pseudo-scholarship.

Better to re-establish a reliable filter before publication. Good refereeing does not mean skimming a manuscript so you can get on with your own work. Neither does it mean rejecting a submission because you don’t like the result. It means embracing the role of mentor, checking the work carefully and providing copious suggestions for revision, both generous and critical. In essence, it is a form of teaching.

The problem is that it is little regarded on the tenure track. Conducting rigorous peer review is unglamorous and unheralded labour; one earns many more points for banging out articles with eye-popping titles, even though a healthy vetting process is necessary for individual achievement to be meaningful.

We need to raise the stakes for reviewers by insisting on publishing their names and, ideally, their reports, too, as some journals are already doing. Anonymous referees get no recognition for their labours, but, contrariwise, their reputations remain untarnished when they approve shabby work. Neither encourages careful review. Anonymity should be available exceptionally, for reviewers worried about being harassed by third parties when the topic is especially contentious and for junior scholars concerned about retaliation from seniors.

Optimistically, two natural consequences of public reviewing would be thoroughness and civility. What’s more, peer reviewers would enter into a reputation economy that drew on the power of the networked public sphere. Journals should offer space for readers to comment on published work, including on the published referee reports, helping to sort strong referees from weak ones.

Editors would also have at their disposal a wide swathe of signed referee reports from across their field on which to draw when deciding whom to task with vetting new submissions. As it stands, aside from the habit of tapping personal and professional acquaintances, editors tend to rely on scholarly reputation, handing a few “star” academics disproportionate control over what is published – even though such figures are not necessarily good editors of others’ work, any more than they are necessarily good teachers. Generating and critiquing scholarship require different skill sets.

Editors should not extend invitations to peer reviewers who have repeatedly overlooked flagrant mistakes, as determined by post-publication review. On the positive side, high-quality reviews should count as scholarship, not just service to the profession, as they form an integral part of scholarly production. And if book reviews merit a distinct CV section, so do peer reviews.

No doubt plenty of scholars continue to offer valuable peer review, but plenty do not. And it is clear that, in this case, too, it will take more than self-reporting to identify who genuinely falls into which category.

Randy Robertson is associate professor in the department of English and creative writing at Susquehanna University, Pennsylvania.

Reader's comments (2)

Fully blinded review has always been a myth, as research has established. Competent reviewers generally know or are able to suss out who authored a paper, and editors who assign reviewers always know the full story. Single blind review is even more subject to bias and manipulation. Unblinded review would seem to be the fairest, and, as this article argues, might help expand the pool of qualified reviewers. However, it is not without its risks, the most important being that, in the always highly competitive worlds of academia where technical disagreements all too often degenerate into personal battles, non-anonymous reviewing opens up a whole new jousting field for enacting personal vendettas and professional revenge.
Great article. I agree with most of the claims. Two reactions: 1. The publish-review-curate model has promise. See eLife and the soon-to-be-launched MetaROR platform for meta-research. 2. We have platforms for post-publication review. PubPeer might be the best one. They offer a browser plug-in that highlights papers with posted comments. It’s a must-have for empiricists and those who rely on empirical studies. Kathy Zeiler, Boston University School of Law
