Progress in science, and confidence in the scientific process, depend on the accuracy and reliability of the scientific literature. This in turn depends on the rigour of the manuscript review process. In addition to ensuring the quality of scientific publications, independent peer review is also a critical part of the evaluation process for both individual scientists and research institutes.
The ICSU Committee on Freedom and Responsibility in the Conduct of Science (CFRS) is concerned that some of the policies and practices currently being adopted by scientific institutions and journal publishers may inadvertently be undermining the integrity of the scientific literature. This is compounded by the uncritical use of publication metrics, as a replacement for independent peer review, in evaluating scientific performance. By drawing these concerns to the attention of ICSU Member organizations, the Committee hopes to encourage them to take action to ensure the quality of the scientific record and to promote a cautious and critical approach to the use of publication metrics in research assessment.
Issues of concern:
- In making career appointments and awarding grants, the publication record of applicants, and in particular the number and supposed impact of publications, is often a main criterion. This inadvertently creates incentives for duplicate publications, split publications, publications of little value, and honorary authorship.
- Journal impact factors give an indication of the average number of citations that a paper in a particular journal can be expected to receive, but these numbers are easily misinterpreted (a sketch of the standard two-year calculation follows this list). For example, they can be influenced by the ratio of reviews to primary papers, and citations for any individual paper may be much higher or lower than the impact factor would suggest.
- The number of citations can give an indication of the quality of a publication, but it is not a reliable one. Truly novel and very important papers may attract little attention for several years after their publication, a paper that is incorrect can generate multiple citations from papers that rebut it, and citation numbers can be inflated by self-citation. Citation numbers also tend to be higher for reviews than for primary papers.
- The value attributed to publication and citation records can pressure scientists to publish too early, or exaggerate and over-interpret results. It can also, in extreme situations, promote scientific misconduct, in which results are fabricated, falsified or plagiarised.
- As a result of the pressure to publish, a growing number of manuscripts are submitted to multiple journals before they are ultimately accepted. This increases the burden on reviewers and ultimately diminishes the thoroughness of the review process. Multiple submissions and reviews of the same work can delay the communication of important results, and it can be argued that multiplying the number of reviews rarely prevents the eventual publication of poor-quality work.
- Alternative means of publication in unreviewed or non-expert-reviewed electronic archives, on individual or institutional home pages, or on blogs are becoming increasingly popular. While these media open up new possibilities and enhance general accessibility to publications, they also increase the availability of less significant or possibly misleading information.
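To illustrate why a journal-level average translates poorly to individual papers, the sketch below gives the commonly used two-year definition of the journal impact factor; the notation is an assumption introduced here for illustration and is not part of this statement.

```latex
% Hedged sketch of the commonly used two-year journal impact factor for year Y
\mathrm{IF}_{Y} \;=\;
  \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}
       {\text{number of citable items published in years } Y-1 \text{ and } Y-2}
```

Because this is an average over a highly skewed citation distribution, a small number of heavily cited papers can dominate the figure while most papers in the same journal receive far fewer citations.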
CFRS is concerned that current policies and practices may be having serious effects on the quality of scientific work in general, and increasing the burden on journal reviewers. Any unnecessary increase in the volume of scientific publications threatens a proper reviewing process, which is essential for maintaining standards of accuracy and integrity.
The Committee regards rigorous and unbiased peer review not only as central to scientific publishing but also as the most important mechanism for judging the quality of scientific work and scientific projects. Establishing and maintaining good peer review processes is in itself a challenge, and it is recognised that there can be benefits in using quantitative measures and metrics as complements to this process. However, the apparent simplicity and attraction of such numerical indices should not conceal their potential for manipulation and misinterpretation, and they should therefore be used only with considerable caution.
Because norms for publication number, authorship conventions, and citations differ from field to field, judgements and policy are often best made by peers with expertise in the same area. CFRS urges ICSU’s member organisations to stimulate discussion of scientific evaluation criteria, career indicators and publication records, with the aim of promoting a system that can better serve science in general. Rather than learning to survive in a ‘publish or perish’ culture, young scientists should be encouraged and supported to produce high quality scientific communications that make a real contribution to scientific progress.
Questions for consideration, regarding the use of metrics in assessing research performance, include:
- In judging applications for grants and positions, what is the optimal balance between direct peer reviewing, including reading of relevant articles, and the use of quantitative measures based on publication records?
- In assessing publication records and performance, what weighting should be applied to the number of publications, the h-index, the journal impact factor, the number of citations, and primary publications versus reviews? (A short sketch of the h-index calculation follows this list.)
- Noting that conventions vary considerably from one field to another, how much credit should be given to first authorship, last authorship, middle authorship, or corresponding authorship? In some fields the prevalence of ‘ghost authorship’ is also an issue of considerable concern*.
- What credit should be given for pre-prints and other electronic publications, whether they are peer reviewed or not? Should impact indices such as clicks, downloads, or links be taken into consideration?
- Should the number of publications that count towards grants or appointments be capped? For example, should only the best three publications per year be taken into consideration? Should scientists even be penalised for authorship on more than, say, 20 publications per year? [Such limits may seem counter-intuitive but would help to promote a culture in which all authors have genuinely contributed to the publications on which their names appear.]
- What weighting should be given to other quantitative measures of research output, such as patent applications, patents granted or patents licensed?
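For readers unfamiliar with the h-index mentioned above, the sketch below illustrates its standard definition: an author has index h if h of their papers have each been cited at least h times (Hirsch, 2005). It is an illustrative example only; the function name and the citation counts are hypothetical and are not drawn from this statement.

```python
def h_index(citation_counts):
    """Return the h-index of a publication list: the largest h such that
    at least h papers have been cited at least h times each (Hirsch, 2005)."""
    ranked = sorted(citation_counts, reverse=True)  # most-cited papers first
    h = 0
    for rank, citations in enumerate(ranked, start=1):
        if citations >= rank:
            h = rank  # the rank-th paper still has at least rank citations
        else:
            break
    return h

# Hypothetical example: five papers cited 10, 8, 5, 3 and 0 times give h = 3.
print(h_index([10, 8, 5, 3, 0]))
```

Because such an index compresses an entire publication record into a single number, two very different records can yield the same value, which is one reason for treating it only as a complement to direct peer review.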
The Committee is aware that discussion of many of these issues is already well underway in some countries and some areas of science, but it suggests that the debate be widened to the full international scientific community. By sharing options and strategies, the global science community can benefit from the experiences of individual organisations.
*See, for example, Ross et al., JAMA 299, 1800–1812 (2008).
Addendum
Whilst this statement was being developed, the International Mathematical Union released the Citation Statistics report, which is a detailed critical analysis of the use of citation data for scientific evaluation. A main conclusion of this report is: "While having a single number to judge quality is indeed simple, it can lead to a shallow understanding of something as complicated as research. Numbers are not inherently superior to sound judgments."
About this statement
This statement is the responsibility of the Committee on Freedom and Responsibility in the Conduct of Science (CFRS), which is a policy committee of the International Council for Science (ICSU). It does not necessarily reflect the views of individual ICSU Member organizations.