Hello everyone,
I'm sharing a document from SPARC Europe (the Scholarly Publishing and Academic Resources Coalition) explaining peer review: a set of talking points that librarians can use to help researchers understand quality control by journals and make informed decisions about where to publish (by David Ball of David Ball Consulting, March 2014).
A SPARC Europe Talking Points paper
In 1997 the then editor of the British Medical Journal, Richard Smith (quoted by Poynder), described traditional peer review as «expensive, slow, prone to bias, open to abuse, possibly anti-innovatory, and unable to detect fraud.»
More recently, there has been a string of stories about failures or absence of peer review, and about ‘stings’ against both Open Access and subscription journals that sought to expose their weak or non-existent peer review processes.
Members have told us that libraries commonly find that researchers are confused about peer review, especially in respect of Open Access journals. This confusion is often compounded by the belief that Open Access journals are not peer reviewed, even more so following the ‘Bohannon sting’ of late 2013.
Librarians can help researchers understand more about quality control by journals, so that they make informed decisions about where to publish. Below is a set of talking points that may be useful in this context.
- Peer review was introduced by Oldenburg as a means of rationing scarce column inches in favour of the most important contributions. The scholarly journal has four functions: registration (providing a time-stamp to establish paternity); certification or validation (peer review); awareness (distribution) and archiving (preservation). In the print world the first three are necessarily conflated in publication. In the electronic world they can be decoupled. There are now initiatives that are doing just that (for example, Peerage of Science and Open Scholar).
- Post-publication peer review. Another new approach to peer review involves carrying out the process post-publication, in the form of peer commentary. Fitzpatrick foresees “the development of an open, community‐oriented, post‐publication system of peer‐to‐peer review, transforming peer review from a process focused on gatekeeping to one concerned with filtering the wealth of scholarly material made available via the Internet”.
- Another new form of peer review – lightweight pre-publication review and post-publication community feedback. PLOS One does not consider the importance of a paper’s findings when judging whether to accept it. Instead their practice is to “rigorously peer-review … submissions and publish all papers that are judged to be technically sound. Judgments about the importance of any particular paper are then made after publication by the readership (who are the most qualified to determine what is of interest to them).” In 2012 the rejection rate was over 30%.
- Is peer review consistent? In a 1982 study, Douglas Peters and Stephen Ceci selected 12 already-published research articles by investigators from prestigious and highly productive American psychology departments, one from each of 12 highly regarded and widely read American psychology journals with high rejection rates (80%) and non-blind refereeing practices. With fictitious names and institutions substituted for the originals, the altered manuscripts were formally resubmitted to the journals that had originally refereed and published them. Most resubmissions went undetected and continued through the review process; sixteen of the 18 referees (89%) recommended against publication, and the editors concurred. The grounds for rejection were in many cases described as “serious methodological flaws.”
- How badly can peer review fail? The case of Diederik Stapel is an example of successive failures of peer review. The scientist was exposed for publishing 55 articles, including one in Science, using fraudulent data. The committee that carried out the review into Stapel’s fraud found it «inconceivable that … reviewers of the international ‘leading journals’ … could have failed to see that [the experiments] would have been almost infeasible in practice, and did not notice the reporting of impossible statistical results.»
- Criticality of failures in peer review. In February 1998, Andrew Wakefield and others published a fraudulent paper in the respected journal The Lancet, linking the MMR vaccine to various medical conditions; further papers followed in 2001 and 2002. In 2008, for the first time in 14 years, measles was declared endemic in the UK, meaning that the disease was sustained within the population. This was caused by the preceding decade’s low MMR vaccination rates, themselves the result of the alarm about MMR raised by Wakefield’s articles.
- A ‘sting’ on peer review in subscription-based journals. The publishers Springer and IEEE are removing (March 2014) more than 120 papers from their subscription services after a French researcher discovered that the works were computer-generated nonsense. Over the past two years, the computer scientist Cyril Labbé of Joseph Fourier University in Grenoble, France, has catalogued computer-generated papers that made it into more than 30 published conference proceedings between 2008 and 2013. Sixteen appeared in publications by Springer, headquartered in Heidelberg, Germany, and more than 100 were published by the Institute of Electrical and Electronics Engineers (IEEE), based in New York. Labbé informed both publishers privately.
Open Access journals: quality and peer review
- A ‘sting’ on Open Access journals. In October 2013, the science journalist John Bohannon submitted a spoof paper about a potential new anti-cancer drug to just over 300 journals. More than half accepted it, including journals from Springer, Elsevier and Sage. The great majority of the Open Access journals were from recently-established and largely unknown publishers. Open Access journals published by the large OA publishers PLoS, Hindawi and BioMed Central rejected the article.
- Quality of OA journals. The Study of Open Access Publishing (SOAP) project in 2010 (a large-scale survey of research-active, published scholars) gave the following views: 89% of respondents believed that journals publishing OA were beneficial to their research field; fewer than 20% felt that OA published poor-quality research, with 50% disagreeing; about 15% felt that OA undermines peer review, with 60% disagreeing.
- Impact of OA journals. A 2012 study of OA journals by Björk and Solomon showed that, for journals launched since 2002, OA journals charging APCs had an impact factor of about 3.2 and toll access journals an impact factor of about 3.3. Newly founded full OA journals compete on almost equal terms with subscription journals founded in the same period.
- Article processing charges (APCs). Charging APCs is seen by some critics as an incentive to publish as many articles as possible without regard to quality. However, 66% of the OA journals listed in the Directory of Open Access Journals in 2013 charged no fee.