Reproducing prominent published research continues to be a challenge, according to new work that found researchers could replicate only 13 of 21 high-profile social science experiments published in the distinguished journals Nature and Science.
The new results were published today in the British journal Nature Human Behaviour.
University of Virginia psychologist Brian Nosek, co-founder and executive director of the Charlottesville-based Center for Open Science, led a collaboration with colleagues at four other labs who sought to replicate the experiments published between 2010 and 2015.
“We decided to try to replicate findings from Science and Nature because they are the most prestigious outlets and they are highly attention-getting papers,” Nosek said. Ten of the 17 studies from Science reproduced successfully, as did three of the four from Nature.
“If you expect that every result that’s published in the literature is a reproducible result, then it’s not good, because about 40 percent didn’t,” he added.
Previous work by Nosek’s team made headlines in August 2015, when a massive study found that independent researchers could replicate fewer than half of the original findings from 100 experiments published in three prominent psychology journals.
Nosek said the new work shows more promise. “The improvement could be random chance. It could also be partly due to improvements in our methodology of conducting the replication,” he said, adding that they more rigorously sought feedback from the studies’ original authors.
Another finding: the replications’ “effect sizes” were about half those of the original studies.
Effect size measures how large the difference in outcomes is between two experimental conditions. For example, in testing whether aspirin reduces headaches, one group of people would receive the drug and the other a placebo; the effect size is the size of the difference in reported headaches between the two groups.
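The article does not say which effect-size measure the teams used; a common standardized choice is Cohen’s d, the difference in group means divided by the pooled standard deviation. The sketch below computes it for invented aspirin-versus-placebo numbers, purely to illustrate the idea:

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference (Cohen's d) between two groups."""
    n_a, n_b = len(group_a), len(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Hypothetical weekly headache counts under each condition (not real data)
placebo = [4, 5, 3, 4, 5, 4, 3, 4]
aspirin = [2, 3, 1, 2, 2, 3, 1, 2]

# A positive d here means fewer reported headaches in the aspirin group
print(f"Effect size (Cohen's d): {cohens_d(placebo, aspirin):.2f}")
```

On this scale, the pattern reported above would mean, for example, an original d of 0.6 shrinking to roughly 0.3 in the replication.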
Nosek said the team saw a similar pattern in the 2015 study. “This suggests there may be real effects in the literature that are just exaggerated,” he said.
Replication is the last step of the scientific method, which starts with making an observation, conducting background research, forming a hypothesis, testing it, recording data, and drawing a conclusion.
There is a second layer to this newly published work: Nosek and his team conducted “prediction markets,” inviting nearly 400 researchers to make bets on which studies could be reproduced and which could not, even before the replication work began.
The prediction markets correctly predicted the replication outcomes for 18 of the 21 studies tested.
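The article does not spell out how a market forecast counts as “correct”; one natural reading, assumed in the sketch below, is to treat the final market price as the crowd’s probability that a study will replicate and score a price above 0.5 as a prediction of success. The prices and outcomes are invented for illustration:

```python
# Hypothetical final market prices, read as P(study replicates), paired with
# invented replication outcomes -- these are not the study's actual data.
market_prices = [0.82, 0.31, 0.67, 0.15, 0.90]
replicated    = [True, False, True, False, True]

# A prediction counts as correct when a price above 0.5 matches a successful
# replication, or a price below 0.5 matches a failed one.
correct = sum((price > 0.5) == outcome
              for price, outcome in zip(market_prices, replicated))
print(f"Markets called {correct} of {len(replicated)} outcomes correctly")
```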
Nosek said this is valuable information.
“If researchers can predict which studies are going to replicate versus not, it suggests that they have information, they know things about the likelihood of particular results being reproducible or trustworthy,” he said.
Second, if prediction markets are effective, they offer an efficient way to decide where to focus replication efforts. Since funding replications of all published research would be prohibitively expensive, why not run a prediction market first and then decide which findings to try to replicate?
“That might be where I invest my limited research dollars, because if those findings aren’t reproducible, then I don’t want to keep heaping money into the applications of that finding,” Nosek said. “I want to establish whether that effect is there, to then justify further investment.
“It could really improve the efficiency of establishing credibility and how we allocate resources to advance knowledge,” he said.
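One way to act on the allocation idea Nosek describes, sketched below with entirely hypothetical numbers: rank findings by their market-implied replication probability and spend a fixed replication budget on the least-trusted ones first.

```python
# Hypothetical findings with crowd-estimated replication probabilities and
# replication costs (all values invented for illustration).
findings = [
    {"name": "Study A", "p_replicate": 0.85, "cost": 30_000},
    {"name": "Study B", "p_replicate": 0.35, "cost": 25_000},
    {"name": "Study C", "p_replicate": 0.55, "cost": 40_000},
    {"name": "Study D", "p_replicate": 0.20, "cost": 20_000},
]

budget = 50_000
selected = []
# Replicate the least-trusted findings first, while the budget lasts
for study in sorted(findings, key=lambda s: s["p_replicate"]):
    if study["cost"] <= budget:
        budget -= study["cost"]
        selected.append(study["name"])

print("Replicate first:", selected)  # -> ['Study D', 'Study B']
```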