Academic Fraud and the Peer Review Process

The so-called “peer review process” is supposed to be the unimpeachable guarantee that publications in academic journals have been chosen in accordance with the highest standards of scientific integrity and quality. The number of papers an academic publishes in peer-reviewed journals, and the number of times his or her articles are cited in other peer-reviewed articles, are the main factors determining whether he or she is promoted and awarded tenure. Recently a particularly egregious abuse of the process came to light.

The Journal of Vibration and Control (JVC) is a respected scientific journal in the highly technical field of acoustics and a part of the reputable SAGE Group of academic publications. JVC has recently retracted 60 published articles after uncovering the operation of a “peer review ring” among its authors and reviewers (“referees”). Although it is not exactly clear how the scam worked, it appears to have been run by Peter Chen of the National Pingtung University of Education (NPUE) in Taiwan and probably involved other scientists at NPUE. As best as can be determined, the ring posted up to 130 fabricated names and fake email addresses on an online reviewing system called SAGE Track. These bogus identities were used by members of the ring to write favorable reviews of one another’s submissions and send them to Ali H. Nayfeh, the Editor-in-Chief of JVC. In at least one instance, it is believed, Peter Chen reviewed one of his own papers under an alias.

In May NPUE informed SAGE and JVC that Peter Chen had resigned from its faculty in February. In the same month JVC announced that Nayfeh had “retired” as editor of the journal. Nayfeh had initiated an investigation of the ring in 2013. A full report on the incident, including the titles of all the retracted articles, can be found here.

This incident should not be surprising, however. It is well known that the peer review process is gravely flawed and easily abused. Richard Smith, former editor of the respected British Medical Journal (BMJ), writing in the Journal of the Royal Society of Medicine, characterized the “classic” peer review system as follows:

The editor looks at the title of the paper and sends it to two friends whom the editor thinks know something about the subject. If both advise publication the editor sends it to the printers. If both advise against publication the editor rejects the paper. If the reviewers disagree the editor sends it to a third reviewer and does whatever he or she advises. This pastiche—which is not far from systems I have seen used—is little better than tossing a coin.

But one would think that peer review would at least be useful for detecting fraud and major error.  Not so, says Smith:

Peer review might also be useful for detecting errors or fraud. At the BMJ we did several studies where we inserted major errors into papers that we then sent to many reviewers.  Nobody ever spotted all of the errors. Some reviewers did not spot any, and most reviewers spotted only about a quarter. Peer review sometimes picks up fraud by chance, but generally it is not a reliable method for detecting fraud because it works on trust.

Now if this is the case in a “hard science” like medical research, whose experimental results can, at least in principle, be checked, imagine the situation in a social science like economics, where controlled experiments are impossible and most “researchers” have strong ideological predispositions. Smith concludes that, despite its many defects, the peer review process

is likely to remain central to science and journals because there is no obvious alternative, and scientists and editors have a continuing belief in peer review. How odd that science should be rooted in belief.

Certainly we should rethink the public funding of an institution that depends so heavily on such a defective process for discovering scientific truth.

Comments

  1. I didn’t realize that the “peer review process” was so slack! It opens a whole new level of questions about all sorts of “evidence” being used as justifications for public policies.

  2. I seem to remember an incident where a Wikipedia skeptic inserted some erroneous content into several Wiki articles.
    He expected that few would ever be detected and those that were, would take years to be uncovered.
    The result was that ALL the errors were detected and corrected within like MINUTES!! (exact numbers escape me).

    See where I’m going here?

Leave a Reply