
    Are Per-Family Type I Error Rates Relevant in Social and Behavioral Science?

    The familywise Type I error rate is a familiar concept in hypothesis testing, whereas the per-family Type I error rate is rarely addressed. This article uses Monte Carlo simulations and graphics to make a case for the relevance of the per-family Type I error rate in research practice and pedagogy.
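
    To make the distinction concrete, here is a minimal Monte Carlo sketch (not the article's code; the family size, alpha level, sample size, and replication count are illustrative assumptions). With m = 10 independent true null hypotheses each tested at alpha = .05, the familywise rate is the probability of at least one false rejection (about .40), while the per-family rate is the expected number of false rejections (m * alpha = .50, and so it can exceed 1 for larger families).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
m, alpha, reps, n = 10, 0.05, 20_000, 30   # assumed family size, level, replications, n per test

errors = np.empty(reps)
for r in range(reps):
    # m independent one-sample t-tests on null (mean-zero) data,
    # so every rejection is a Type I error
    x = rng.standard_normal((m, n))
    _, p = stats.ttest_1samp(x, 0.0, axis=1)
    errors[r] = np.sum(p < alpha)          # false rejections in this family

print(f"Familywise rate, P(at least one error): {np.mean(errors > 0):.3f}")  # ~ .40
print(f"Per-family rate, E[number of errors]:   {np.mean(errors):.3f}")      # ~ .50
```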

    Misguided Opposition to Multiplicity Adjustment Remains a Problem

    Fallacious arguments against multiplicity adjustment have been cited with increasing frequency to defend unadjusted tests. This paper discusses these arguments and their enduring impact.

    Errors in a Program for Approximating Confidence Intervals

    An SPSS script previously presented in this journal contained nontrivial flaws and should not be used as written. The call for validation of new software is renewed.

    Planned Hypothesis Tests Are Not Necessarily Exempt From Multiplicity Adjustment

    Scientific research often involves testing more than one hypothesis at a time, which can inflate the probability that a Type I error (false discovery) will occur. To prevent this Type I error inflation, adjustments can be made to the testing procedure that compensate for the number of tests. Yet many researchers believe that such adjustments are inherently unnecessary if the tests were “planned” (i.e., if the hypotheses were specified before the study began). This longstanding misconception continues to be perpetuated in textbooks and continues to be cited in journal articles to justify disregard for Type I error inflation. I critically evaluate this myth and examine its rationales and variations. To emphasize the myth’s prevalence and relevance in current research practice, I provide examples from popular textbooks and from recent literature. I also make recommendations for improving research practice and pedagogy regarding this problem and regarding multiple testing in general.
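
    The inflation described here is easy to demonstrate. The sketch below is illustrative only: the article does not prescribe a particular adjustment, and Bonferroni is used simply as a familiar example; the number of tests, alpha level, and sample sizes are assumptions. It simulates k = 5 planned tests of true null hypotheses and shows that the chance of at least one false rejection is about .23 unadjusted, regardless of whether the tests were planned in advance, versus about .05 with a Bonferroni adjustment.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
k, alpha, reps, n = 5, 0.05, 20_000, 30    # five planned tests, all nulls true (assumed settings)

any_raw, any_bonf = 0, 0
for _ in range(reps):
    x = rng.standard_normal((k, n))        # mean-zero data: every rejection is a Type I error
    _, p = stats.ttest_1samp(x, 0.0, axis=1)
    any_raw  += (p < alpha).any()          # at least one unadjusted false rejection?
    any_bonf += (p < alpha / k).any()      # Bonferroni: each test at alpha / k

print(f"Unadjusted familywise Type I error: {any_raw / reps:.3f}")   # ~ .23, not .05
print(f"Bonferroni familywise Type I error: {any_bonf / reps:.3f}")  # ~ .05
```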

    Some clarifications regarding power and Type I error control for pairwise comparisons of three groups

    A previous study in this journal used Monte Carlo simulations to compare the power and familywise Type I error rates of ten multiple-testing procedures in the context of pairwise comparisons in balanced three-group designs. The authors concluded that the Benjamini–Hochberg procedure was the “best.” However, they did not compare the Benjamini–Hochberg procedure to commonly used multiple-testing procedures that were developed specifically for pairwise comparisons, such as Fisher's protected least significant difference and Tukey's honest significant difference. Simulations in the present study show that in the three-group case, Fisher's method is more powerful than both Tukey's method and the Benjamini–Hochberg procedure. Compared to the Benjamini–Hochberg procedure, Tukey's method is shown to be less powerful in terms of per-pair power (average probability of significance across the tests of false null hypotheses), but more powerful in terms of any-pair power (probability of significance in at least one test of a false null hypothesis). Additionally, the present study shows that small deviations from normality in the population distributions have little effect on the power of pairwise comparisons, and that the previous study's finding to the contrary was based on a methodological inconsistency.
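
    A rough sketch of this kind of simulation appears below. It is not the study's actual code: the effect size, sample size, and replication count are assumed, and Fisher's protected least significant difference is approximated with ordinary two-sample t tests gated by the omnibus F test rather than with the pooled-error LSD statistic. It computes both power definitions used above, per-pair and any-pair, for the two pairs whose null hypotheses are false.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(2)
n, alpha, reps = 20, 0.05, 5_000             # assumed balanced group size and settings
means = (0.0, 0.0, 0.8)                      # third group shifted: two of the three nulls are false
pairs = [(0, 1), (0, 2), (1, 2)]
false_nulls = [1, 2]                         # indices of pairs (0, 2) and (1, 2) above

hits = {"Fisher LSD": [], "Tukey HSD": [], "BH": []}
for _ in range(reps):
    g = [rng.normal(m, 1.0, n) for m in means]
    p_pair = np.array([stats.ttest_ind(g[i], g[j]).pvalue for i, j in pairs])

    # Fisher's protected LSD, approximated: unadjusted pairwise t-tests,
    # counted only when the omnibus one-way ANOVA F test is significant
    f_sig = stats.f_oneway(*g).pvalue < alpha
    hits["Fisher LSD"].append((p_pair < alpha) & f_sig)

    # Tukey's HSD via the studentized range (scipy >= 1.8)
    tk = stats.tukey_hsd(*g)
    hits["Tukey HSD"].append(np.array([tk.pvalue[i, j] for i, j in pairs]) < alpha)

    # Benjamini-Hochberg applied to the three pairwise p-values
    hits["BH"].append(multipletests(p_pair, alpha=alpha, method="fdr_bh")[0])

for name, h in hits.items():
    h = np.asarray(h)[:, false_nulls]        # keep only the tests of false null hypotheses
    print(f"{name:10s} per-pair power: {h.mean():.3f}   "
          f"any-pair power: {h.any(axis=1).mean():.3f}")
```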