Setting an Optimal α That Minimizes Errors in Null Hypothesis Significance Tests

By Joseph F. Mudge, Leanne F. Baker, Christopher B. Edge and Jeff E. Houlahan


Null hypothesis significance testing has been under attack in recent years, partly owing to the arbitrary nature of setting α (the decision-making threshold and probability of Type I error) at a constant value, usually 0.05. If the goal of null hypothesis testing is to present conclusions in which we have the highest possible confidence, then the only logical decision-making threshold is the value that minimizes the probability (or, occasionally, cost) of making errors. Setting α to minimize the combination of Type I and Type II error at a critical effect size can easily be accomplished for traditional statistical tests by calculating the α associated with the minimum average of α and β at the critical effect size. This technique also has the flexibility to incorporate prior probabilities of null and alternative hypotheses and/or relative costs of Type I and Type II errors, if known. Using an optimal α results in stronger scientific inferences because it estimates and minimizes both Type I errors and relevant Type II errors for a test. It also results in greater transparency concerning assumptions about relevant effect size(s) and the relative costs of Type I and Type II errors. By contrast, the use of α = 0.05 results in arbitrary decisions about what effect sizes will likely be considered significant, if real, and results in arbitrary amounts of Type II error for meaningful potential effect sizes. We cannot identify a rationale for continuing to arbitrarily use α = 0.05 for null hypothesis significance tests in any field, when it is possible to determine an optimal α.
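The core procedure described above — choosing the α that minimizes the (possibly weighted) average of α and β at a critical effect size — can be sketched numerically. The example below is a minimal illustration for a one-sided one-sample z-test with a standardized critical effect size, not the authors' own code; the function name, grid-search approach, and default equal weights are assumptions for illustration.

```python
# Sketch: find the alpha that minimizes the weighted average of
# Type I (alpha) and Type II (beta) error at a critical effect size,
# here for a one-sided one-sample z-test (illustrative assumption).
import numpy as np
from scipy.stats import norm

def optimal_alpha(critical_effect, n, w1=0.5, w2=0.5):
    """Grid-search the alpha minimizing w1*alpha + w2*beta, where beta
    is evaluated at the standardized critical effect size."""
    alphas = np.linspace(1e-6, 0.5, 10_000)  # candidate thresholds
    # beta: probability of failing to reject when the true effect
    # equals the critical effect size
    betas = norm.cdf(norm.ppf(1 - alphas) - critical_effect * np.sqrt(n))
    avg_error = w1 * alphas + w2 * betas
    i = int(np.argmin(avg_error))
    return float(alphas[i]), float(betas[i])

# With equal weights the optimum balances the two error rates,
# which generally differs from the conventional alpha = 0.05.
a, b = optimal_alpha(critical_effect=0.5, n=30)
```

With equal weights and equal implicit costs, the minimum of (α + β)/2 occurs where the two error probabilities are balanced against each other at the critical effect size; unequal prior probabilities or error costs simply reweight the average, as the abstract notes.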

Topics: Research Article
Publisher: Public Library of Science
Provided by: PubMed Central

