An amusing thing about Friedmanite positivism is its remarkable naïveté about how empirical social science research is actually done. Quantitative empirical analysis in practice is nothing at all like the model described in Friedman (1953). Results are rarely conclusive. Disagreement (and fighting) is widespread. Some theories are so widely believed that no amount of empirical evidence will dislodge them. A renewed emphasis on identification and causal inference over the last few years has made things more interesting, and some basic mistakes are being avoided. But Mises’s strictures still apply.
Lately there has been a lot of discussion about academic dishonesty — fake data, mistaken (and even fraudulent) statistical inference, and more. A recent paper by Joseph Simmons, Leif Nelson, and Uri Simonsohn (via Paul Nightingale) discusses this problem in the context of applied psychology (as used in particular in management and marketing research). They emphasize the role of researcher judgment: choosing what data to collect, what tests to run, what results to report, and so on. They argue that common practices give researchers so much flexibility that they can make almost any result look reasonable:
In this article, we accomplish two things. First, we show that despite empirical psychologists’ nominal endorsement of a low rate of false-positive findings (≤ .05), flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates. In many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not. We present computer simulations and a pair of actual experiments that demonstrate how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis. Second, we suggest a simple, low-cost, and straightforwardly effective disclosure-based solution to this problem. The solution involves six concrete requirements for authors and four guidelines for reviewers, all of which impose a minimal burden on the publication process.
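The mechanism is easy to demonstrate. The following is a minimal sketch (not the authors' actual simulation code) of two common "researcher degrees of freedom": measuring two correlated outcomes and reporting whichever one comes out significant, and peeking at the data early, then collecting more observations if the first look disappoints. The sample sizes, the correlation of 0.5, and the z-test with known variance are all illustrative assumptions; under a true null hypothesis, the combined flexibility yields "p &lt; .05" far more often than the nominal 5%.

```python
# Illustrative sketch: how analytic flexibility inflates false positives.
# All data are pure noise (the null hypothesis is true by construction).
import math
import random

random.seed(0)

def z_test_p(xs):
    """Two-sided p-value for H0: mean = 0, assuming known unit variance."""
    n = len(xs)
    z = sum(xs) / math.sqrt(n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def flexible_study(n1=20, n2=10, rho=0.5):
    """One simulated study exercising two researcher degrees of freedom:
    (a) two correlated dependent variables, report whichever 'works';
    (b) optional stopping: test at n1, add n2 more observations if needed."""
    a = [random.gauss(0, 1) for _ in range(n1 + n2)]
    b = [rho * x + math.sqrt(1 - rho ** 2) * random.gauss(0, 1) for x in a]
    for n in (n1, n1 + n2):       # first look, then the extended sample
        for dv in (a, b):         # try each outcome measure
            if z_test_p(dv[:n]) < 0.05:
                return True       # "statistically significant" -- publish!
    return False

trials = 20000
fp = sum(flexible_study() for _ in range(trials)) / trials
print(f"nominal alpha: 0.05, simulated false-positive rate: {fp:.3f}")
```

Each individual test holds its 5% error rate; it is the unreported freedom to choose among tests after seeing the data that drives the overall rate well above it, which is exactly why the paper's proposed remedy is disclosure rather than new statistics.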