In 2010, two prominent economists, Carmen Reinhart and Kenneth Rogoff, published a paper that confirmed what fiscally conservative politicians had long suspected: that once public debt exceeds a certain percentage of GDP, a country’s economic growth slows. The paper caught the attention of Britain’s next Chancellor of the Exchequer, George Osborne, who cited it repeatedly to justify his policy of cutting public services to pay down the national debt, the strategy that came to define the age of austerity.
There was just one problem with Reinhart and Rogoff’s paper: they had accidentally left five countries out of their analysis, running the numbers for 15 countries instead of the 20 they thought they’d selected in their spreadsheet. When other, lesser-known economists adjusted for this error and a few other irregularities, the most eye-catching part of the results disappeared: the relationship between debt and growth still existed, but the effects of high debt were far more subtle than the dramatic cliff-edge implied in Osborne’s speeches.
Scientists, like the rest of us, are not immune to mistakes. “Clearly, mistakes are everywhere, and even a small percentage of these mistakes change the conclusions of a paper,” says Malte Elson, a professor of research methodology at the University of Bern in Switzerland. The problem is that not many people are looking for them. Reinhart and Rogoff’s mistakes were first discovered in 2013 by an economics student whose professor had tasked his students with replicating the results of a well-known economics paper.
Elson, along with fellow metascience researchers Ruben Aasland and Ian Hussey, has developed a method for systematically finding errors in scientific research. The project, called ERROR, is modeled on the software industry’s bug bounty programs, in which hackers are paid to find flaws in code. In Elson’s project, researchers are paid to comb through published papers for possible errors, with a bonus for each verified mistake they find.
The idea came from a discussion between Elson and Aasland, who invites other scientists to find errors in his own papers, offering a beer for any typo they find (up to three per paper) and 400 euros ($430) for any error that changes a paper’s main conclusions. “We both knew that in our fields there were papers that were completely flawed because of provable errors, but it was very hard to correct the record,” Elson says. Errors left uncorrected in the literature could cause big problems, Elson reasoned: a PhD student might waste years and tens of thousands of dollars pursuing a degree built on published results that turn out to be false.
Hussey, a metascience researcher in Elson’s lab in Bern, says error checking is not standard practice in scientific publishing. When a paper is submitted to a journal such as Nature or Science, it is sent to several experts in the field, who give their opinions on whether it is of high quality, logically sound, and a valuable contribution to the field. These reviewers, however, do not usually check for errors, and most of the time they do not have access to the raw data and code needed to root them out.