Best Practices for Constructing Confidence Intervals for the General Linear Model Under Non-Normality

Abstract

Amid the replication crisis facing scientific research, calls for methodological reform have urged a shift away from null hypothesis significance testing toward the reporting of effect sizes and their confidence intervals (CIs). However, little is known about the relative performance of CIs constructed after applying techniques that accommodate non-normality under the general linear model (GLM). We review three such techniques, namely normalizing data transformations, percentile bootstrapping, and bias-corrected and accelerated (BCa) bootstrapping, and present results from a Monte Carlo simulation designed to evaluate CI performance under each. The effects of sample size, degree of association among predictors, number of predictors, and different non-normal error distributions were examined. Based on CI performance in terms of coverage, accuracy, and efficiency, we make general recommendations on best practice for constructing CIs for the GLM under non-normality.
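As a minimal illustration of one of the resampling techniques the abstract names, the following Python sketch constructs a percentile bootstrap CI for a mean from a skewed (lognormal) sample; the sample size, number of resamples, and distribution are illustrative choices, not the simulation conditions used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative non-normal data: a skewed lognormal sample
sample = rng.lognormal(mean=0.0, sigma=1.0, size=100)

# Percentile bootstrap: resample with replacement, recompute the statistic,
# and take the empirical 2.5th and 97.5th percentiles as the 95% CI bounds
n_boot = 5000
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(n_boot)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"95% percentile bootstrap CI for the mean: ({lo:.3f}, {hi:.3f})")
```

The BCa variant evaluated in the paper adjusts these percentile endpoints for bias and skewness in the bootstrap distribution rather than reading them off directly.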