Friday, February 24, 2012

False positives in the reporting of experiments

Here's an article suggesting that _lots_ of false positives get introduced into the experimental literature; the authors propose some experimental protocols that, if widely adopted by authors and journals, might help reduce their number.

False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant
by Joseph P. Simmons, Leif D. Nelson, and Uri Simonsohn
Psychological Science, November 2011, vol. 22, 1359-1366

"First, we show that despite empirical psychologists’ nominal endorsement of a low rate of false-positive findings (<_ .05), flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates. In many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not. We present computer simulations and a pair of actual experiments that demonstrate how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis. Second, we suggest a simple, low-cost, and straightforwardly effective disclosure-based solution to this problem. The solution involves six concrete requirements for authors and four guidelines for reviewers, all of which impose a minimal burden on the publication process."
A very nice paper, in a venerable literature. See my earlier attempt, which also focused on more careful reporting of all aspects of how an experiment was conducted and analyzed.

Roth, A.E., "Let's Keep the Con out of Experimental Econ.: A Methodological Note," Empirical Economics (Special Issue on Experimental Economics), 1994, 19, 279-289.


HT: Eyal Ert
