Estimating the change in meta-analytic effect size estimates after the application of publication bias adjustment methods

Abstract

Publication bias poses a challenge for accurately synthesising research findings using meta-analysis. A number of statistical methods have been developed to combat this problem by adjusting meta-analytic estimates. Previous studies tended to apply these methods without regard to the optimal conditions for each method's performance. The present study sought to estimate the typical effect size attenuation produced by these methods when they are applied to real meta-analytic datasets that match the conditions under which each method is known to remain relatively unbiased (such as sample size, level of heterogeneity, population effect size, and level of publication bias). We reanalysed 433 datasets from 90 papers published in psychology journals using a selection of publication bias adjustment methods. The downward adjustment found in our sample was minimal; the greatest attenuation identified was b = –0.032, 95% Highest Posterior Density (HPD) interval from –0.055 to –0.009, for the Precision Effect Test (PET). Some methods tended to adjust upwards, and this was especially true for datasets with fewer than ten primary studies. We propose that researchers should seek to explore the full range of plausible estimates for the effects they are studying, and we note that these methods may not be able to combat bias in small samples (fewer than ten primary studies). We argue that although the effect size attenuation we found tended to be minimal, this should not be taken as an indication of low levels of publication bias in psychology. We discuss the findings with reference to new developments in Bayesian methods for publication bias adjustment and the recent methodological reforms in psychology.
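The PET adjustment discussed above is, at its core, a weighted least-squares regression of observed effect sizes on their standard errors, with the intercept taken as the bias-adjusted effect estimate. A minimal sketch in Python follows; the function name and any input values are illustrative, not drawn from the study's data:

```python
import numpy as np

def pet_estimate(effects, ses):
    """Precision Effect Test (PET) sketch: regress effect sizes on their
    standard errors with inverse-variance weights. The fitted intercept
    estimates the effect at SE = 0, i.e. adjusted for small-study bias."""
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    X = np.column_stack([np.ones_like(ses), ses])  # intercept + SE slope
    w = 1.0 / ses**2                               # inverse-variance weights
    # Solve the weighted normal equations (X'WX) b = X'W y
    XtW = X.T * w
    b = np.linalg.solve(XtW @ X, XtW @ effects)
    return b[0], b[1]  # (adjusted effect, small-study slope)
```

In this framing, a positive slope on the standard error signals small-study effects consistent with publication bias, while the intercept serves as the adjusted meta-analytic estimate.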