Leslie K. John is keenly aware of the pressure researchers feel to get results. When her graduate studies in behavioral decision research didn't produce the kind of statistically significant findings that lead to publication in a prestigious journal, John felt disheartened.
"The incentive structure is such that you're strongly rewarded for a positive result," says John, now an assistant professor of marketing at Harvard Business School.
This system can drive researchers to bend the rules to get a desirable outcome. Sometimes researchers commit minor infractions just to simplify the process, but even tweaks that don't affect the results can cast a shadow on the credibility of academic research.
In research to be published in a forthcoming issue of Psychological Science, John and coauthors George Loewenstein (Carnegie Mellon) and Drazen Prelec (MIT) write that although cases of clear scientific misconduct have received significant media attention recently, "less flagrant transgressions of research norms may be more prevalent and, in the long run, more damaging to the academic enterprise."
In an attempt to get researchers to honestly report questionable practices they have engaged in, John and her coauthors surveyed more than 2,000 research psychologists at major US universities and told them that the more truthful they were about their transgressions, the more money would be donated to a charity of their choice.
The 10 questionable practices in the study ranged from misdemeanors, as John calls them, such as selectively reporting studies that achieved positive results, to "academic felonies" such as falsifying data.
Measuring Truthfulness
The participants' scores were determined by a truth-telling algorithm developed by coauthor Prelec, known as the Bayesian Truth Serum, which compared each participant's own admissions with his or her estimates of how likely other researchers were to engage in, and admit to, the same practices. Higher scores went to admissions that turned out to be surprisingly common, that is, more common than respondents collectively predicted, because, by the serum's logic, honest answers have the best chance of producing surprisingly common results. The higher the score, the larger the donation to charity.
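Prelec's full scoring rule is more general, but a minimal sketch conveys the logic. The Python snippet below illustrates the published formula for a single yes/no question; it is not the study's actual code, and the function name and toy numbers are invented for the example. Each respondent earns an "information score" for giving an answer that proves surprisingly common, plus a bonus for accurately predicting peers' answers.

```python
import math

def bts_scores(answers, predictions, alpha=1.0):
    """Simplified Bayesian Truth Serum (Prelec, 2004) for one yes/no item.

    answers     -- list of 0/1 self-reports (1 = "yes, I have done this")
    predictions -- each respondent's estimate of the fraction of peers
                   who will answer "yes" (values strictly between 0 and 1)
    alpha       -- weight on the prediction-accuracy component
    """
    n = len(answers)
    eps = 1e-6  # guard against log(0)

    # Empirical frequency of each answer across all respondents.
    yes = min(1 - eps, max(eps, sum(answers) / n))
    freq = {1: yes, 0: 1 - yes}

    # Geometric mean of everyone's predicted frequency for each answer.
    geo = {
        1: math.exp(sum(math.log(max(eps, p)) for p in predictions) / n),
        0: math.exp(sum(math.log(max(eps, 1 - p)) for p in predictions) / n),
    }

    scores = []
    for x, y in zip(answers, predictions):
        y = min(1 - eps, max(eps, y))
        own = {1: y, 0: 1 - y}
        # Information score: positive when your answer is "surprisingly
        # common," i.e., more common than the crowd collectively predicted.
        info = math.log(freq[x] / geo[x])
        # Prediction score: rewards forecasts close to the actual distribution.
        pred = alpha * sum(freq[k] * math.log(own[k] / freq[k]) for k in (0, 1))
        scores.append(info + pred)
    return scores

# Toy example: respondents 1 and 3 admit the practice, and everyone
# underestimates how common admissions will be, so honest "yes" answers
# earn high scores (and, in the study's design, larger charity donations).
print(bts_scores(answers=[1, 0, 1], predictions=[0.3, 0.2, 0.4]))
```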
John says that ideally, participants' answers would have been tested against evidence showing that they had actually performed, or not performed, the acts they were questioned about. But since that wasn't possible, John and her colleagues used the enhanced charitable-giving incentive to encourage honest responses.
The enhanced charity incentive increased the rates at which participating psychologists admitted to engaging in questionable practices themselves, but it didn't change their estimates of prevalence and admission rates among other psychologists. This led the study's authors to conclude that the truth-telling incentive was most effective for the toughest questions: in this case, those that required people to admit to their own wrongdoing.
In all, the truth-telling incentive ended up generating $4,200 for four charities; the receipts were posted online afterward for the participants to see. Almost every respondent admitted to having engaged in at least one of these practices, but it's important to note that some of the practices, such as failing to report all of a study's dependent measures, can be fairly innocuous.
For instance, because it can be expensive to gather a representative sample from the American population, a researcher might elect to conduct one large-scale survey that includes three different dependent measures, each intended to answer a separate research question. The researcher could defensibly write three different papers based on the three different research questions, and fail to report all the dependent measures in each paper—instead just reporting measures that were relevant to the given research question in the given paper.
On the other end of the spectrum, researchers can obtain false positives by running statistical tests over and over until they find the result they are looking for, or by deciding whether to exclude data only after checking how the exclusion would affect the results.
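A quick simulation makes the danger concrete. The sketch below is a standard "data peeking" demonstration, not anything from John's study: it draws two groups from the same distribution, re-runs a t-test each time new subjects arrive, and stops at the first p < .05. Because there is no true effect, every "finding" is a false positive, and the rate lands well above the nominal 5 percent.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def peeking_finds_effect(n_start=20, n_max=100, step=10, alpha=0.05):
    """Run a t-test after every batch of new subjects, stopping at the
    first p < alpha. Both groups come from the SAME normal distribution,
    so any 'significant' result is a false positive by construction."""
    a = list(rng.normal(size=n_start))
    b = list(rng.normal(size=n_start))
    while len(a) <= n_max:
        if stats.ttest_ind(a, b).pvalue < alpha:
            return True  # a false positive, reported as a "finding"
        a.extend(rng.normal(size=step))
        b.extend(rng.normal(size=step))
    return False

trials = 5_000
rate = sum(peeking_finds_effect() for _ in range(trials)) / trials
print(f"False-positive rate with peeking: {rate:.1%}")  # far above 5%
```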
The truth-telling incentive had the biggest impact on the practices respondents deemed least defensible. The study estimated that approximately one in 10 research psychologists had engaged in the most serious of the questionable research practices: falsifying data. "That's very scary to me," John says.
Falsifying data has been a hot topic in the research community lately. Last year, a Boston University cancer researcher was found to have fabricated data in two published papers, which were later retracted. Also in 2011, a well-known Harvard University psychology professor was found guilty of scientific misconduct that most likely included fabricating data.
Respondents who admitted to a questionable research practice tended to have a rationalization for doing so, but 35 percent said they doubted the integrity of their own research on at least one occasion—a statistic John finds "disheartening."
This doubt may stem from the gray areas inherent in scientific discovery, John says. Such gray areas are a necessary part of experimentation: if researchers didn't have the freedom to deviate from the norm and try different ways of doing things, they might not make important discoveries.
One way to help keep research methods in check, John says, would be to create a repository where each study is registered in advance, as clinical trials are, with all of its measures recorded so they can later be compared against the reported results.
Problematic Practices
"Measuring the Prevalence of Questionable Research Practices with Incentives for Truth-Telling" is thought to be the first study to show that using truth-telling algorithms in combination with truth-telling incentives can lead to higher—and probably more valid—estimates of how often researchers engage in the most problematic practices.
Although used to study researchers, John suggests that this methodology could help business practitioners learn about undesirable practices inside their companies. "If you were trying to find out the prevalence of employee theft, or any type of unsavory behavior, and if you preferred to do this by asking people rather than resorting to something like surveillance, this research suggests that you'll get more valid prevalence estimates if you use this method of incentivizing people to tell the truth."
Combined with computer-assisted self-interviewing, which has been found to increase self-reporting, the method could prove even more effective, she adds.
Even with incentives to tell the truth on the table, people's willingness to admit to dubious acts still surprises John. "I am constantly surprised at research participants' willingness to pour their hearts out, even for negative, unflattering information," she says.
John holds a PhD in behavioral decision theory from Carnegie Mellon University, where she also earned an MSc in psychology and behavioral decision research.