Heterogeneity and Publication Bias in Research on Test-Potentiated New Learning

Abstract

Prior retrieval practice potentiates new learning. A recent meta-analysis of this test-potentiated new learning (TPNL) effect by Chan, Meissner, and Davis (2018) concluded that it is a robust and reliable finding (Hedges’ g = 0.44). Although Chan et al. discussed three different experimental designs that have been employed to study TPNL, we argue that their meta-analysis failed to adequately distinguish the findings from these different designs, to acknowledge the significance of the substantial between-study heterogeneity across all pooled effects, and to assess the degree of publication bias in the sample. We conducted a new meta-analysis that assessed the designs separately and applied appropriate corrections for publication bias. We found that studies using a standard design yield weak evidence of a TPNL effect, studies using pre-testing yield a small but reliable effect, and studies using interleaving designs yield weak evidence of a negative effect. Compared to Chan et al.’s conclusions, these reanalyses cast TPNL in a very different light and point to a pressing need for preregistered experiments to assess its reproducibility in the absence of publication bias.
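
For readers unfamiliar with the methods named above, the following is a minimal sketch, in Python, of how a random-effects pooling of Hedges’ g (DerSimonian–Laird) and one common publication-bias correction (a PET-style precision-effect regression) can be computed. It is an illustration only, not the authors’ code: the per-study effect sizes, variances, and function names are assumed for the example, and the abstract does not specify which correction methods were applied.

import numpy as np

def dersimonian_laird(g, v):
    """Random-effects pooled estimate with DerSimonian-Laird tau^2."""
    w = 1.0 / v                            # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * g) / np.sum(w)      # fixed-effect pooled estimate
    q = np.sum(w * (g - fixed) ** 2)       # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(g) - 1)) / c)  # method-of-moments between-study variance
    w_re = 1.0 / (v + tau2)                # random-effects weights
    pooled = np.sum(w_re * g) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2, q

def pet_estimate(g, v):
    """PET: weighted regression of g on its standard error.
    The intercept approximates the effect expected from a study with SE = 0,
    i.e. an estimate adjusted for small-study (publication bias) effects."""
    se = np.sqrt(v)
    X = np.column_stack([np.ones_like(se), se])
    W = np.diag(1.0 / v)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ g)
    return beta[0]

# Illustrative per-study Hedges' g values and sampling variances (not real data).
g = np.array([0.55, 0.40, 0.10, 0.62, 0.30, 0.05, 0.48])
v = np.array([0.02, 0.03, 0.05, 0.02, 0.04, 0.06, 0.03])

pooled, se, tau2, q = dersimonian_laird(g, v)
print(f"Random-effects g = {pooled:.2f} (SE {se:.2f}), tau^2 = {tau2:.3f}, Q = {q:.1f}")
print(f"PET-adjusted g   = {pet_estimate(g, v):.2f}")

Large tau^2 and Q values signal the kind of between-study heterogeneity at issue here, and a PET-adjusted estimate substantially below the naive pooled estimate is one symptom of publication bias.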
