The emergence of Covid-19 is testing the limits of many global systems, and not the least among them is the quality control system for academic preprints.
On 31 January, a paper was released on the bioRxiv preprint platform suggesting similarities between the Sars-CoV-2 virus and HIV. A stream of comments on Twitter and bioRxiv soon followed, questioning the methodology and conclusions, and the paper was withdrawn on 2 February. The withdrawal note stated that the authors “intend to revise it in response to comments received from the research community on their technical approach and their interpretation of the results”.
The combination of open digital platforms, the hive mind and a pressing health emergency resulted in an extraordinary situation: authors receiving useful feedback on their work within two days. You could argue that it was a case of community-based peer review coming of age.
After all, in times of crisis, the time it takes to get a paper through traditional peer review can be a significant drag on addressing the issues at hand, even when journals make efforts to speed it up. By contrast, preprint servers allow information – unrestricted by text limits or demands for complete narratives – to be communicated immediately, allowing interested parties to read, analyse and give feedback in real time, unmediated by publishing houses.
The ability of anyone to share anything, and of anybody to comment, is a welcome step towards the democratisation of research; public date-stamping, meanwhile, is easing concerns about the provenance of ideas.
Needless to say, however, there are significant risks associated with such an unregulated process. The Covid-19 case could equally be taken as illustrating the dangers of allowing material into the public domain without third-party screening. Indeed, above the withdrawal notice for the Covid-19 paper is a note stating that “bioRxiv is receiving many new papers on coronavirus 2019-nCoV” and reminding readers that “these are preliminary reports that have not been peer-reviewed. They should not be regarded as conclusive, guide clinical practice/health-related behavior, or be reported in news media as established information.”
It is accurate to say that community-based review is currently unable to prevent the release of erroneous material into the public domain. Errors, whether intentional or not, occur both in the data released and in the comments given. But that problem is not exclusive to community-based review; although traditional peer review has checks in place to reduce the risk, it is not foolproof.
Still, if the benefits of speed and inclusivity offered by community-based review are important to the sector, then a solution to the quality-control issue is needed. The current process, whereby people report erroneous material to authors or platform hosts, who then have the responsibility and prerogative to withdraw the content, is flawed. It is worth noting that even the biggest, wealthiest social media platforms still struggle to prevent unsuitable or offensive content from being released. Human moderators are employed, but their capacity is necessarily limited. Instagram and Twitter are developing AI tools to detect more than just keywords, and screening preprints will require similarly sophisticated algorithms.
Nevertheless, the popularity of preprint platforms, the exchanges they support and the power of the hive mind are providing a window into future possibilities. The Covid-19 case is just one particularly striking example of experts’ willingness to take time out from their own work to comment on preprints despite the absence of any tangible incentives in terms of professional recognition. Imagine what could be achieved if publishers, funders and employers were willing to offer recognition – perhaps by upgrading such contributors from the acknowledgements section to a category of “contributors whose input significantly impacted the work”. Innovation in incentive systems would help to ensure that the “right” people were involved at the right time. Ultimately, it might even make traditional, publisher-administered peer review obsolete.
As technology continues to evolve rapidly, the tardiness and exclusivity of traditional peer review are becoming ever more inappropriate, particularly for imminent or current crises. There is certainly a place for lengthy and exclusive review, publication and retraction. But this should complement rather than exclude a faster, decentralised process.
Kristen Sadler is an independent adviser. Until 2018, she was research director for strategy and biosciences in the president’s office of Nanyang Technological University, Singapore.
Print headline: When speed is of the essence