Scientists work to prevent research retractions with code review

on Dec 11, 2012 • by Chris Bubinas

As computer-driven data analysis has expanded the capabilities of scientific research, the practice has also increased the complexity of peer review for academic journals and raised questions about the quality of scientists’ code. While coding errors have led to questions about research reproducibility and correctness since the 1970s, instances of high-profile retractions continue to occur at a steady rate. A recent iSGTW.org feature highlighted some examples of journal retractions due to software problems and profiled a new initiative to help scientists share their code for review.

“People don’t often think about the importance of code in the [scientific publishing] discussion. I think reproducibility can bring both code and data into that discussion,” Victoria Stodden, an assistant professor of statistics at Columbia University in New York, told iSGTW.

Stodden co-founded a site called RunMyCode that allows researchers to upload their code and data. The site runs the program to verify whether a paper’s tables and figures can be replicated. The project does not review or judge the code beyond running it, but it does make the code available for other researchers to test online or download for more extensive peer code review. The goal is to encourage researchers to share the technology powering their analysis and, in doing so, reduce the likelihood of eventually needing to retract a published study.
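
To make that workflow concrete, the core of such a replication check can be quite small: rerun the uploaded analysis, then compare its output against the published tables. The sketch below is a toy illustration only, not RunMyCode’s actual pipeline; the script and file names are hypothetical.

```python
# replicate_check.py -- toy sketch of a replication check (not RunMyCode's
# actual pipeline): rerun the authors' uploaded analysis script, then
# compare the table it produces against the table as published.
import subprocess
import sys

import pandas as pd

ANALYSIS_SCRIPT = "analysis.py"           # hypothetical: the authors' uploaded code
PUBLISHED_TABLE = "published_table1.csv"  # hypothetical: Table 1 as published
REGENERATED_TABLE = "output/table1.csv"   # hypothetical: what the rerun produces


def main() -> int:
    # Rerun the uploaded analysis exactly as the authors would.
    subprocess.run([sys.executable, ANALYSIS_SCRIPT], check=True)

    published = pd.read_csv(PUBLISHED_TABLE)
    regenerated = pd.read_csv(REGENERATED_TABLE)

    # Compare within a tolerance: floating-point results rarely match
    # bit-for-bit across machines, so an exact diff would raise false alarms.
    try:
        pd.testing.assert_frame_equal(published, regenerated, rtol=1e-6)
    except AssertionError as exc:
        print(f"Table 1 did NOT replicate:\n{exc}")
        return 1

    print("Table 1 replicated within tolerance.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```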

The importance of replicability
Coding and data input errors have been responsible for a number of high-profile journal retractions, as well as situations in which scientists were unable to reproduce their original findings. In 2006, for instance, molecular biology researcher Geoffrey Chang retracted five influential papers on protein structures after discovering that an error in his analysis program had swapped two columns of data, Science reported. In August 2012, a paper was retracted from the journal Hypertension after a coding flaw doubled the sample size of a data set, according to iSGTW.
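
Errors like these are easy to introduce and hard to spot, because the code keeps running and producing plausible-looking numbers. The toy example below (invented data, not Chang’s actual program) shows how swapping two columns during data ingestion silently changes a fitted result without raising any error:

```python
# Toy illustration (invented data, not the retracted analysis): reading
# two columns in the wrong order corrupts every downstream result while
# the code still runs without a single error or warning.
import numpy as np

# Hypothetical raw data: each row is (x, y).
data = np.array([
    [1.0, 4.0],
    [2.0, 5.0],
    [3.0, 7.0],
])

# Correct ingestion: x from column 0, y from column 1.
x, y = data[:, 0], data[:, 1]
print("slope (correct):", np.polyfit(x, y, 1)[0])   # ~1.5

# Buggy ingestion: the two columns are transposed. Nothing crashes,
# no warning is raised -- only the science is wrong.
x_bad, y_bad = data[:, 1], data[:, 0]
print("slope (swapped):", np.polyfit(x_bad, y_bad, 1)[0])  # ~0.64
```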

“Retractions are coming up fast and furious,” Stodden told iSGTW.

Although journals are interested in whether research findings are replicable, they rarely perform any kind of code review or ask peer reviewers to attempt to rewrite the code backing a piece of research, iSGTW noted. This approach is a mistake, according to Leslie Hatton, the former chair in Forensic Software Engineering at the U.K.’s Kingston University. In a separate iSGTW article, Hatton described an experiment in which his team spent three years attempting to replicate research with eight different algorithms before determining that the original results had been due to a software problem. A 2012 article Hatton co-authored in Nature made the case that all research that relies on code should make the code available for review.

“Although it is now accepted that data should be made available on request, the current regulations regarding the availability of software are inconsistent,” the authors wrote. “We argue that, with some exceptions, anything less than the release of source programs is intolerable for results that depend on computation.”

Fighting retractions with code review
Hatton’s basic philosophy can be seen in the creation of RunMyCode.org, which provides a basic tool for researchers to open up small amounts of code for review and testing. For larger code bases, it may be necessary to use additional source code analysis techniques.

At CERN’s ATLAS experiment, which runs on around five million lines of code, the team runs a nightly static analysis check, and developers perform a manual test of research results whenever an update is made to the code base, iSGTW explained. As code becomes more central to the scientific research process, scientists may find that they need to further adjust their review practices.
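
Teams without CERN-scale infrastructure can approximate the same discipline with a scheduled job that runs an off-the-shelf analyzer and fails loudly on findings. The sketch below is one possible setup, not ATLAS’s actual tooling; it assumes cppcheck is installed and that the C/C++ sources live in src/.

```python
# nightly_static_check.py -- hedged sketch of a scheduled static-analysis
# gate (not the ATLAS setup): run cppcheck over the tree and treat any
# reported issue as a failed check. Schedule via cron, e.g.:
#   0 2 * * * /usr/bin/python3 /opt/checks/nightly_static_check.py
import subprocess
import sys

SOURCE_DIR = "src/"  # assumption: C/C++ sources live here


def main() -> int:
    # --error-exitcode makes cppcheck return nonzero when it finds problems,
    # so the job's exit status doubles as the pass/fail signal.
    result = subprocess.run(
        ["cppcheck", "--enable=warning,performance",
         "--error-exitcode=1", SOURCE_DIR],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print("Static analysis found issues:")
        print(result.stderr)  # cppcheck writes its findings to stderr
        return 1
    print("Static analysis clean.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```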

Software news brought to you by Klocwork Inc., dedicated to helping software developers create better code with every keystroke.
