Disclaimer
The article originally appeared in Learned Publishing Volume 38, Issue 3, authored by Damian Pattinson and George Currie, and is republished with their permission. The information, opinions and recommendations presented in this article are those of the individual contributors, and do not necessarily reflect the values and beliefs of the International Science Council.
The majority of scholarly communication today depends on publishing. An industry with estimated profit margins of between 30% and 50% (Van Noorden 2013), scholarly publishing has long been on a trajectory of consolidation, with 2022 estimates giving the top five publishers control over 60% of the market (Crotty 2023).
Through the medium of the journal, scholarly publishers play an integral role for scientific communities. On the one hand, journals need to provide value to their customers—authors (through APCs—article processing charges), or readers (through library subscriptions)—and on the other, they are incentivised to maximise profitability and to outcompete other journals. While the incentive structures at play for publishers are primarily commercial, all scholarly publishing has to exist in the same system, face similar considerations and play by the same rules.
The interests of scholarly communication and publishing are not always compatible. What’s good for publishing isn’t necessarily good for science, and successful publishing strategies may be actively harmful to the scholarly record.
Science-led publishing is an opportunity to realign the current processes and reward systems in publishing and research to first and foremost benefit the scientific endeavour. It demands faster, fairer and more transparent modes of science communication. It is not an unachievable ideal; it is a choice within our current reach.
Science-led publishing means two things. First, the needs of science communication dictate how publishing processes and models work, what options are available to researchers, and how researchers are incentivised—how success is measured—by funders and institutions. Second, it is not an end state. Science-led publishing must continually reevaluate itself so that it best serves the current needs of researchers and research within current social and technological boundaries.
An example of this is how, despite technological advances, much of scholarly publishing still operates as it did in print. Where the print medium demanded works be final before they are shared, digital publishing allows works to be shared, reviewed and revised iteratively and publicly. This change could be relatively straightforward within our current technological limitations, and is already in place for some journals, yet much of the system exists in an inertia—why?
Publishing is a service: it should facilitate scholarly communication. However, the commercialisation of science has led to profit- rather than purpose-driven structures and systems dominating (Buranyi 2017). What science gets seen, and—because of the ‘publish or perish’ pressures on researchers—what science gets done and how it is presented (Fanelli 2010), has been distorted by what is profitable for publishers. This is not unique to commercial publishing—all publishers face these same pressures and incentives, and must compete to survive in the same system.
This commercialisation has created a system where behaviours and actions that benefit publishing are rewarded, whether or not they benefit science—and in some cases even if they are to its detriment. We see this play out in the existence of publication bias toward positive results (Easterbrook et al. 1991), that more interesting results seem to be favoured over less interesting but more reliable results (Serra-Garcia and Gneezy 2021), and the ever-increasing volume of published research (Hanson et al. 2024).
Publication bias toward positive results of perceived high interest is a hangover of the dominance of subscription-access journals where journal brand and status were primary drivers of revenue. Articles reporting positive results are more likely to be cited (Duyx et al. 2017; Jannot et al. 2013), therefore contributing to prestige metrics such as the Journal Impact Factor and in turn increasing journal brand equity, allowing higher subscription charges.
In the article-based economy, this has translated into citations driving APCs upwards (Schönfelder 2020). It is interesting to note, though, that when other measures of impact are considered alongside citations, there is little correlation between the cost of publication and eventual impact (Yuen et al. 2019).
The increasing volume of published research is a more recent example of questionable publishing behaviours driving changes in scholarly communication. APC-model publishing means that journal revenue is tied to publication volume and increasing that volume is an effective engine for publisher growth (Mellor et al. 2020; Nicholson 2025). Because of this, rather than research being rejected outright, it is often redirected to other journals within a publisher’s portfolio through journal cascade systems (Davis 2010). While this can offer time savings for authors, it also helps ensure potential revenue is not turned away.
The impact of this sequence of priorities filters backwards to influence research decisions (Ramassa et al. 2023), the analysis of results (Head et al. 2015), how researchers choose to present those results in journals (Gonzalez Bohorquez et al. 2025), and even to pervert the scholarly record with poor-quality or fraudulent research (Parker et al. 2024). It is detrimental to science that journal publication is of such great importance in research assessment, research funding, researcher assessment, researchers’ careers, and, because of the latter, their very livelihoods (Rawat and Meena 2014; Marcum 2024).
When publication underpins so many facets of an academic career, academics must work toward goals that align with publication, rather than those that align with good science. When journals place requirements of novel, impactful, and positive results on publication, this becomes both the threshold of the scholarly record and of academic career success. When journals decide certain research or certain results are of less value to their publications, they in turn become of less value to authors.
Publishers have built and entrenched power in their relationship to research through their role in the quality assessment of research (Neff 2020), meaning in practice the administration and control of the editorial and peer-review process. While journals are editorially independent and publishers do not perform peer review themselves—instead they are reliant on the often free (to them) labour and expertise of editors and reviewers—publishers do exert influence on the process. This can most clearly be seen when journal editors disagree strongly with pressures from the parent publisher (De Vrieze 2018), as often the only protest available is to withhold labour. Mass resignations from journals appear to have become more frequent in recent years (The Retraction Watch Mass Resignations List 2024).
The current system of publishing and peer review slows down science communication. Finding reviewers and conducting reviews takes time. Research can then be stuck in peer review for months with no guarantee that it will eventually be published. When research is rejected during peer review, the clock is often reset at a new journal. This means science progresses more slowly than it could.
Science-led publishing enables faster scientific communication and expedites sharing and refining ideas and approaches ahead of formal review. The preprint becomes the standard research article type using existing infrastructure that is free for authors and readers.
The role of preprints in accelerating the search for a COVID-19 vaccine is a compelling example of the need for faster science (Watson 2022). Even in more routine cases, it is not an exaggeration to say these delays cost lives (Sommer 2010). In our current publishing system, peer-reviewed research comes with immense costs. These can be quantified in APCs and subscription charges, and in reviewers’ and editors’ time, but also in the cost of delaying research advances.
Despite the special significance given to peer-reviewed articles over non-peer-reviewed research, studies suggest that around two thirds of preprints (Abdill and Blekhman 2019) or more (Gordon et al. 2022) are eventually published in peer-reviewed journals. These figures may even be underestimates: some papers may have reached journals only after the study windows closed, and title changes could produce false negatives.
The differences between preprints and their peer-reviewed counterparts are seemingly minor, with various studies showing that the conclusions of a paper change minimally (Brierly et al. 2022), that the quality of preprints, while slightly lower on average, is comparable to that of peer-reviewed articles (Carneiro et al. 2020), and that articles change very little as a result of review (Klein et al. 2019). This suggests that most preprints are of near-equal value to peer-reviewed journal articles before any revisions are made to them. Current forms of peer review create significant delays for seemingly marginal gains.
Then what of the remaining 30% or so of preprints that do not eventually see publication in a journal?
A 2023 study found that preprints from low-income countries are later published in journals at a lower rate than preprints from high-income countries. Rather than this being a question of research or article quality, the authors draw on additional studies suggesting that a lack of resources, a lack of stability, and policy choices (Eckmann and Bandrowski 2023) are factors that prevent preprints from later appearing in journals. It seems likely that, for some of the remainder, it is a question not of research quality but of means.
It is sensible to bring a critical perspective to anything you read no matter where it is published or by whom. However, given the unreliability of journal publication as a mark of validation, that the majority of preprints eventually see publication in a peer-reviewed journal, and that by and large the improvements made during peer review are slight, there appears to be little reason to assume preprints are inherently less valuable than peer-reviewed articles.
Faster publication means that research findings can have a more immediate benefit to research and the public. Experts can continue and build on ideas earlier than they would otherwise have been able to. Scientific progress could be significantly accelerated for a minimal change in the perceived quality of outputs.
If the value of preprints is a question of what can be trusted, does the peer-review process prevent publication of untrustworthy research? Is it a filter and is it a good one?
As a general principle, it is difficult to argue against the idea that work that has been reviewed by independent experts should warrant higher degrees of trust. Conversely, it is easy to see how a process that intends to challenge knowledge and ideas can help improve them, or to show when to disregard them. However, in many cases, peer review today has become little more than an industrial process that helps safeguard journal status through notions such as novelty or impact rather than to enhance research. This focus is not helpful to science; it is helpful to publishing.
There is little evidence that peer review works as expected—that it validates research (Jefferson et al. 2007). The binary accept–reject decision means peer review has taken on ‘more of a judicial role than one of critical examination’ with a focus on the decision rather than the process and little justification for the decisions (Tennant and Ross-Hellauer 2020; Hope and Munro 2019).
Given peer review’s role in the modern scientific endeavour, it is ironic to find it described by prominent journal editors as both a ‘faith-based system’ and a deeply flawed ‘quasi-sacred process’ (Smith 2022; van der Wall 2009).
Rejection during peer review can happen for a number of reasons that have nothing to do with the quality or trustworthiness of the research. Reviewers may reject articles because of a perceived lack of novelty, because ideas challenge norms and received wisdom, because the research undermines or disputes previously published ideas (or the reviewers’ own research and ideas). It also opens the door to all kinds of bias that, in the very opaque system of anonymised and closed peer review, are hard to identify and root out.
Journal cascade systems, where rejected research is redirected to lower-status journals, can be seen as an admission that peer review is not just about keeping bad research out of the scholarly record; instead, it pushes research around based on journal status and brand. In each of these cases, the rejections may exacerbate publication delays by months while providing no benefit to science, only protecting the interests of the journal.
Traditionally, journals and by extension publishers built their brands on what they kept out. In the predominantly subscription-access world, scarcity and exclusivity drove profitability; in the APC era, it is volume (Sivertsen and Zhang 2022). Despite this seismic change—perhaps the most fundamental shift, from a publishing perspective, being who the customer is—the problems of the previous model still exist. But research now faces a new challenge. APCs mean every single article, no matter its merit or quality, has monetary value to publishers. Every article rejection is lost revenue.
The publish-or-perish reality for researchers meets the publish-to-profit motive for publishers. This perfect storm has allowed publishers to exploit researchers’ need to publish, has allowed a black market of research to become prominent (Zein 2024) and—thanks primarily to the efforts of independent research integrity sleuths—has seen the retraction of over 10,000 papers in 2023 (Van Noorden 2023). (It is worth remembering that retracted papers are only those that have been investigated and found to be suspect; this is unlikely to be the true extent of the problem.)
If peer review is intended to filter bad research, it has failed. The accept–reject decision is today under ever-increasing pressure to be corrupted. While undoubtedly peer review does catch issues, by and large it does not prevent research from publication; it instead stratifies it according to a journal brand hierarchy. The value of faster research communication is greater than the value of peer review when used as a threshold.
There is still immense value in peer review, but not as a mechanism to filter or control what gets published. The value of peer review is that it is seen, it is shared, and it becomes an intrinsic part of the history of an article.
Preprints and publish, review, curate (PRC) publishing models both enable faster communication of research, taking days or weeks rather than months or years. Critics of preprinting might warn of the dangers of unreviewed research. Yet, as discussed above, it is clear that most preprints eventually appear in journals, that improvements made during peer review tend to be slight, and that there is ample evidence the peer-review process does not prevent publication of questionable research.
Expediting publication ahead of review makes work available to experts in the same field faster—these experts are able to assess the quality of the work themselves without waiting for peer review. Making reviewer commentary public when it is available helps interdisciplinary experts and lay readers to better understand if or where the strengths and limitations of the research lie and provides additional context for experts.
By removing the gates and exposing the process, peer review can be refocused to aid collaboration, cooperation and critical thinking, rather than serve as judgement.
Science-led publishing changes the relationship between authors, editors and reviewers to one of collaboration rather than control. Authors have more choice in how and when they publish. Reviewer recommendations are advisory rather than the cost of acceptance. Editors provide expertise, guidance and facilitation.
Using peer review as a method of filtration means reviewers are tasked with not only providing constructive recommendations to the authors but also deciding whether to recommend publication or not. This creates a power dynamic between reviewers and authors that may not be entirely helpful to the authors or benefit science.
Reviewers’ recommendations may be actioned not because authors agree with them or think they add value to the article, but because failing to act on them might prevent publication and waste the time and effort already invested. Because publication can have such a profound impact on a researcher’s career, future funding, and even just being able to move on to the next project with a clean slate, there are plenty of incentives to bow to this pressure.
Removing the threat of rejection from the review process allows it to become a truly collaborative process. Reviewers are freed to focus only on how to help guide improvements to the research in front of them.
In decoupling review from publication decisions, authors become partners in publication and act with the reviewers and editors rather than being acted upon. Authors can choose whether to revise their manuscript without the threat of rejection; they can take the best of what reviewers offer without feeling beholden to advice they disagree with. The focus is to make the work as good as it can be, not to pass a publication threshold.
Authors have more certainty and security in the process. Their publication is guaranteed, they will not waste their time needing to start again elsewhere, and it is easier to plan around deadlines. The valuable contributions of editors and reviewers become part of the record of the work and are surfaced to readers rather than being part of the publishing black box.
Science-led publishing prioritises transparency of approach and outputs. Research is made available freely to readers; sharing of underlying data and code becomes the norm. The work conducted during peer review is made available alongside research to help inform readers, to kickstart discussions, and to prevent the waste of these contributions.
Closed peer review is still the norm, minimising the value it could provide. When research is rejected during peer review, the review work likely has to be duplicated entirely at the next journal.
Our current standard practice of peer review is incredibly wasteful. The labour researchers gift to publishing—estimated at several billion dollars in 2020 (Aczel et al. 2021)—is a significant outlay of time, resources and effort whose full value we are at best not realising, and at worst completely squandering. Making reviews part of the scholarly record and inextricably linking them to articles would reduce the cost of repeated peer review and share the value of that work with readers, editors and future reviewers.
The outcomes of peer review should become a publicly available and intrinsic part of a piece of research. When presented alongside research, peer review can help provide important context for readers around the strengths and limitations of a paper. In making this process transparent, the focus can be on sharing expertise, encouraging debate, and embedding accountability into the entire process for all participants. When peer review happens behind closed doors, it is not clear what is really happening or why decisions are being made.
Reviewers’ recommendations to authors should be left to the authors’ discretion and not become reasons to reject a paper if they are not followed. If review feedback is available to readers as an integral part of the article, authors can be much freer in what feedback they choose to implement and how, and they can acknowledge where feedback is useful but impractical. Peer review can become an honest exchange of ideas rather than a threshold to be passed at any cost.
Despite open-access publishing becoming increasingly common, approximately half of research is still paywalled (STM OA Dashboard 2024). Scholarly communication still has further to go for what is a basic expectation: the ability to read research relevant to your own investigations. Just as publication delays access, paywalled research inhibits progress and costs lives (Torok 2024; Kostova 2023).
While APC-funded Open Access helps level the playing field in terms of readership, it creates inequities in who is able to publish. Waivers go some way to address the immediate problems caused by APCs, but charity is not equity (Folan 2024). Giving preprints, a cost-free means of research communication for both authors and readers, the recognition they deserve could help address this imbalance. In a system where free options perform the same functions as paid, those with paid services will have to be very clear about the value they offer.
Alongside increasing access to research articles, research communication would benefit from a culture that is more comfortable sharing other research outputs, such as data, code and executable files, and providing the infrastructure with which to make this possible.
Science-led publishing reshapes the relationship between publishers, researchers, indexers and institutions. Rather than research being judged on where it is published, the content of research is evaluated publicly. Open reviews and publisher curation statements form a history of each publication. Version histories encourage iterative improvements of research rather than final versions of record. A journal thrives not on the perceived quality of its publications but on the publicly demonstrated quality of the reviews it facilitates.
We already have the technology to facilitate open and iterative reviews, yet the scholarly communication system continues much as it did when print was the height of communication technology. That said, the number of journals adopting publishing models where preprints are reviewed, where research is shared before revision, and where review commentary helps inform readers is growing. Many of these offer interpretations of publish-review or publish-review-curate (PRC) models (Corker et al. 2024), such as MetaROR, Lifecycle Journal and eLife.
However, because so many aspects of research and researcher assessment depend on the traditional markers of prestige, engaging with new and innovative models can be seen as a risk for researchers, even for those who support them. These models do not fit neatly into the frameworks that these markers of prestige are born from. If these models are to succeed, then the purpose of those journal-based markers will be heavily diluted. It is therefore in the interests of those who control those markers that models that would lessen their power should not succeed.
The Impact Factor of eLife (where we both work) was removed in late 2024 due to Web of Science’s position that the eLife model does not validate research.
We would contend that this means of journal validation is deeply flawed and unreliable and that in sharing reviews and assessments publicly, as an intrinsic part of a research paper, the paper is validated to the extent indicated in those reports. While one institution might adopt progressive policies toward research and researcher assessment, career progression and funding, eschewing journal names and metrics, as long as other institutions still place meaning on those markers, researchers may still feel the need to prioritise them in case they might be useful later.
As discussed earlier, this influences the research itself: the need for publication, or the desire for high-status publication, is deeply entangled with what knowledge is added to the scholarly record (Gonzalez Bohorquez et al. 2025). Publication is such an important currency of academic careers and success that researchers even choose to publish in predatory journals (Kurt 2018). This publication culture is so deeply ingrained that it is difficult for researchers and publishers to consider that it need not be this way.
To create a system that benefits science, we must create a system that ensures actions that do not benefit the research endeavour are less profitable than ones that do. There are two primary levers for this: how research is funded and how research is evaluated.
A first step toward this is that institutions and funders, and any other form of research or researcher assessment, exclude journal metrics, and even journal names, from any kind of evaluation or prerequisite. Some institutions are moving toward this by requesting narrative CVs (UK Research and Innovation, n.d.), and some researchers are choosing to exclude journal names from their CVs themselves (Barnett 2024).
Progress in this area could be exponential rather than linear. The more institutions forgo journal names and metrics, the more confident researchers can be that those markers will not be needed later in their careers or if they move to other institutions. It will also help these practices become more normalised in research culture.
A more direct measure to curtail unhelpful motivations is for funding to mandate behaviours that benefit transparent scholarly communication and to refuse to fund behaviours that are exploitable for profit. The Bill and Melinda Gates Foundation’s 2025 policy refresh is one such example, mandating preprints and data accessibility while refusing to contribute toward APCs (Bill and Melinda Gates Foundation 2025).
If the currency of prestige and status symbols offered by journal brands and metrics is no longer usable, then researchers will have little need to seek it. These journals will likely continue to exist and perhaps even still be held in high regard, but importantly researchers will be able to choose if, how, and when they seek to publish in them and when they may choose other means to report their findings, without feeling they have potentially risked or lessened their future careers by not participating in the system.
While there is much to this question that is not considered here, if these changes were widely adopted, the role of scholarly publishing would become simply facilitating communication of both the research and the reviews: amplifying, reviewing and evaluating, but not gatekeeping. A consequence, or necessary component, of journals no longer being the validators of research would be that they cede some of the power they currently hold. This is perhaps one of the reasons these changes may be hard won. In this world, journal reputation would be built not on the quality of research published, but on the quality, rigour, and transparency of the review and assessment process it offers and on its commitment to principles that advance or accelerate scientific progress. If this system were to flourish, we may see competition evolve based on the quality of review: some journals may be perceived as light touches, while others may be renowned for tougher criticism.
For what is published to matter more than where it is published, we must be prepared for journal brands to mean less than they do today.
Journals could once again become centred around being and serving a community of researchers with shared interests and goals and allow more equitable participation. In this decentralised system, the very idea of a journal may eventually fade altogether.
Today, publishers are all at once the gatekeepers of research, the validators, and the amplifiers. They control the flow of academia’s prime commodity: the publication. They confer status and signals of merit on research and influence who sees it and how. All this has created an intertwined relationship between research and publishing that has forgotten its purpose and produced enormous conflicts of interest in how research publishing operates.
Reforming scholarly communication to prioritise the interests of science over publishing would help leverage available technologies and infrastructure, repurpose existing practices to realise the benefits they were always supposed to bring, and create more accessible and equitable means of participating in scholarly communication. It is a choice, and it is within our reach.