
Why Most Published Research Findings Are False

By John P. A. Ioannidis


There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.
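The framework summarized above rests on the essay's positive predictive value (PPV): the post-study probability that a claimed finding is true, given the type I error rate α, the type II error rate β (power = 1 − β), and the pre-study odds R that a probed relationship is true. A minimal sketch in Python of the bias-free case, with illustrative parameter values assumed here:

```python
def ppv(alpha: float, beta: float, R: float) -> float:
    """Post-study probability that a claimed finding is true (no bias term).

    alpha: type I error rate of the test
    beta:  type II error rate (power = 1 - beta)
    R:     pre-study odds that a probed relationship is true
    """
    return (1 - beta) * R / (R - beta * R + alpha)

# Well-powered study of a well-motivated hypothesis (even odds):
print(round(ppv(alpha=0.05, beta=0.20, R=1.0), 3))  # 0.941

# Underpowered exploratory search (1 true per 10 relationships probed):
print(round(ppv(alpha=0.05, beta=0.80, R=0.1), 3))  # 0.286
```

A finding is more likely true than false only when PPV exceeds 0.5; as the second call illustrates, low power combined with low pre-study odds pushes PPV well below that threshold, which is the quantitative core of the abstract's claim.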

Topics: Essay
Publisher: Public Library of Science
Year: 2005
DOI identifier: 10.1371/journal.pmed.0020124
Provided by: PubMed Central

