Richard Smith says that when he was editor of the British Medical Journal, he deliberately inserted eight errors into a paper and sent it to 300 reviewers. None of them spotted more than five errors (“Ineffective at any dose? Why peer review simply doesn’t work”, Opinion, 28 May).
I was one of those reviewers, unaware at the time that this particular paper was part of an experiment. I sent a covering letter with my review that went something like this: “This paper is not only unpublishable in its present form; it contains a number of fatal flaws that suggest it will never be publishable. I list below three of those fatal flaws. If you would like me to take a closer look at the paper to provide a more comprehensive set of feedback for the authors, please let me know.”
The BMJ’s staff – then as now – viewed peer review as a technical task (“spotting errors”) rather than a scholarly one (interpretation and judgement). They never did ask me for a more detailed list of errors. Presumably, taking account of authors’ covering letters was not part of their study protocol. I am sure many reviewers would have done what I did: fire a quick paragraph to hole the paper below the waterline and return to their day job. Smith’s conclusion is therefore not the only one that can be drawn from his dataset.
Like all experiments, artificial studies of peer review strip the process of its social context. They contain the ossified assumptions of the study designers – and for that reason they are biased in a way that will tend to confirm the authors’ initial hypothesis.
Trisha Greenhalgh
Professor of primary care health sciences
University of Oxford
Richard Smith’s argument about the ineffectiveness of peer review is weakened rather than strengthened by this claim: “With the World Wide Web everything can be published, and the world can decide what’s important and what isn’t.”
Peer review is flawed, but it is naive to claim that the internet alone will bring scientists together to present and critique their studies. The arXiv preprint server for physics and mathematics research demonstrates this. arXiv is not peer reviewed; it hosts both good and bad work, and one must know the subject well enough to distinguish between the two.
The root of the problem is ethical integrity: without honesty, neither peer review nor online vetting will be effective.
Lisa E. Wells
Via timeshighereducation.co.uk