Strengthening Research Integrity: Improving peer review in publishing

As peer review week begins, Michael Barber considers some recent advances to improve peer review processes in the scholarly publishing system.

The theme of Peer Review Week 2022, ‘exploring the importance of peer review in supporting research integrity,’ is timely. The COVID-19 pandemic demonstrated the power of science. While the rapid dissemination of scientific findings was critical, it was accompanied by significant risks. The Australian story of hydroxychloroquine is a salutary (but unfortunately not the only) example of how poor science, poorly reviewed and communicated, can fuel misinformation and cause actual harm. Such intense public and political attention has increased focus on the processes and practices that underpin research integrity, of which peer review is a critical component.

Traditionally, peer review has been regarded as the ‘gold standard’ of scientific publishing. However, even before the issues raised by the pandemic, the effectiveness and sustainability of peer review were being questioned. Retractions of peer-reviewed publications are increasing. Many, such as the infamous Wakefield case and the recent Lancet hydroxychloroquine scandal, involved serious fraud that should have been picked up in peer review, and the papers never published. They represent significant failings of science and its quality assurance system.

A recent occasional paper, Strengthening research integrity: The role and responsibilities of publishing, of the International Science Council’s project on the future of scientific publishing, made two recommendations to improve pre-publication peer review, thereby strengthening research integrity in the published record:

  1. Mandate the co-publication of data. Many retractions are based on data issues. Universal adoption of the co-publication of a paper and its underlying data would be a significant reform and would certainly eliminate the most egregious cases of data fraud. Recent policy guidance from the Office of Science and Technology Policy (OSTP) in the US, requiring US government-funded research, including data, to be freely available on publication by 2025, should be a game-changer. Research culture will also need to change: a recent analysis of researchers’ views on data sharing concluded that ‘many researchers say they’ll share data — but don’t.’ Journals and peer reviewers should urgently begin to insist that acceptance requires the co-publication of underlying data. As argued by Tim Vines, Founder of DataSeer, enabling the scrutiny of data could reduce the potential for data fabrication and make honest research ‘the path of least resistance’.
  2. Increase focus on the potential for replicability/reproducibility. As practised by most journals, peer review attempts to balance assessments of the ‘novelty’ or ‘importance’ of the results against the methodological competency of the reported study. Research integrity can suffer when novelty becomes too predominant. Referees should, to the greatest extent possible, ensure that the paper under review gives enough detail of the methods involved and, if necessary, of data availability, so that the work can be replicated and the results confirmed or invalidated. Peer review before publication is rarely, if ever, a final or definitive verdict. A 2019 report from the National Academies in the US emphasized: ‘Repeated findings of comparable results tend to confirm the veracity of an original scientific conclusion, and, by the same token, repeated failures to confirm throw the original conclusions into doubt.’ This and other forms of post-publication peer review are fundamental to the process of science. Yet too many examples are emerging, such as this one from bioscience research, suggesting that peer review before publication far too often fails to ensure replication is even potentially possible. Reviewers should also actively recognize the importance of replicative studies and null results and encourage their publication.

The appropriateness of statistical tests is often an issue, particularly for complex data. Yet many editors do not use statistical experts to evaluate submitted papers. Other reviewers should not hesitate to recommend such expert review.

Technology has a role to play in improving peer review. While automatic screening tools cannot yet replace human reviewers, they can support them and should be more actively utilized. Indeed, one of the most constructive uses of tools such as ScreenIT is by authors before submission to improve their paper, including assessing reproducibility, as in this example of a screen of COVID-19 preprints. It would not be unreasonable for journals and reviewers to ask whether a submitted paper had been through such a screen. Asking authors to complete a pre-submission checklist would be another helpful development. Such checklists would need to be adapted for disciplinary differences, but again the intent should be to encourage better practice rather than to detect malpractice.

Other reforms of peer review are possible and indeed desirable. Open peer review, in which reviewers do not retain anonymity, is one option. A 2020 study concluded that

Open peer review … reduces bias, enhances the transparency and reliability on the refereeing process. They make the reviewers accountable for the acceptance of articles for scholarly publications.

The development of peer review taxonomies warrants active consideration. Such taxonomies could allow more objective ranking or badging of journals and their peer review practices and quality.

Ultimately, a more fundamental — and indeed radical — reform of scientific publishing is required to deliver a system in accord with the eight principles articulated by the International Science Council. That will take time and considerable cultural change. Until it happens, journals will remain essential for disseminating scientific results. It is thus imperative that all — authors, reviewers and journal editors — do everything possible to improve peer review and ensure that the published record is as trustworthy as possible. Peer Review Week 2022 is an opportunity to advance this critical agenda.

Michael N. Barber

Michael N. Barber is Emeritus Professor, AO, FAA, FTSE, and a member of the Steering Committee for the International Science Council’s project the Future of Scientific Publishing.

Photo by Jason Wong on Unsplash
