34 research outputs found

    Conditionally random inference in meta-analysis: a Monte Carlo study

    No full text
    The conditional use of the random-effects model as the primary choice of inference is common practice in everyday meta-analysis. This approach, which adopts the random-effects model for inference only when the Q statistic test for heterogeneity is significant after fixed-effects procedures have been applied, ignores the most important criterion for choosing the correct inference model: the nature of the question one is investigating. Moreover, using the wrong inference model can have unfortunate consequences. We investigated these potential consequences via simulation in order to help decide which model is best to use. Power and error rates for heterogeneity testing and model-fit testing, and bias and efficiency of parameter estimation, were the criteria for comparing procedures. We found that while fixed-effects inference is, overall, more powerful than mixed-effects inference for testing possible components of models, it also produces much higher Type I error rates when heterogeneity is present. Furthermore, when heterogeneity is present (so that mixed-effects inference is more appropriate), the Q statistic test for heterogeneity is not significant for small variance components and effect sizes when a Type I error has been committed in the course of model selection using fixed-effects procedures. Thus, using the conditional approach will too often lead to the wrong inference model being used, whereas maximum-likelihood mixed-effects inference is found to work much better. Researchers should therefore weigh their options carefully when choosing an appropriate inference model.
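
    A minimal sketch of the conditional rule the study evaluates, assuming effect-size estimates with known sampling variances: pool with a fixed-effects model, test heterogeneity with Cochran's Q, and switch to a random-effects model only if Q is significant. The DerSimonian-Laird estimator and the example data below are generic illustrations, not necessarily the simulation's exact design.

```python
# Conditional inference rule: fixed-effects pooling unless Q is significant.
# Illustrative sketch only.
import numpy as np
from scipy import stats

def conditional_meta(y, v, alpha=0.05):
    """y: effect-size estimates; v: their known sampling variances."""
    w = 1.0 / v
    mu_fe = np.sum(w * y) / np.sum(w)          # fixed-effects pooled mean
    Q = np.sum(w * (y - mu_fe) ** 2)           # Cochran's Q statistic
    df = len(y) - 1
    p_Q = stats.chi2.sf(Q, df)

    if p_Q >= alpha:                           # Q not significant: keep fixed-effects
        return mu_fe, 1.0 / np.sum(w), "fixed"

    # Q significant: switch to random-effects (DerSimonian-Laird tau^2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)
    w_re = 1.0 / (v + tau2)
    mu_re = np.sum(w_re * y) / np.sum(w_re)
    return mu_re, 1.0 / np.sum(w_re), "random"

# Hypothetical example: five studies
y = np.array([0.10, 0.35, 0.22, 0.55, 0.05])
v = np.array([0.04, 0.03, 0.05, 0.02, 0.06])
print(conditional_meta(y, v))
```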

    Meta‐Analysis

    Full text link

    A parsimonious weight function for modeling publication bias

    No full text
    Quantitative research literature is often biased because studies that fail to find a significant effect (or that demonstrate effects that are not in the desired or expected direction) are less likely to be published. This phenomenon, termed publication bias, can cause problems when researchers attempt to synthesize results through the set of techniques known as meta-analysis. Various methods exist that estimate and correct meta-analyses for publication bias. However, no single method exists that can (1) account for continuous moderators by including them within the model, (2) allow for substantial data heterogeneity, (3) produce an adjusted mean effect size, (4) include a formal test for publication bias, and (5) allow for correction when only a small number of effects is included in the analysis. This dissertation develops a method that aims to encompass those characteristics. The model uses the beta density as a weight function that estimates the selection process in order to produce adjusted parameter estimates. The model is implemented both by maximum-likelihood (ML) and by Bayesian estimation. The utility of the model is assessed by simulations and through use on real data sets. The ML simulations indicate that the likelihood-ratio test has good Type I error performance, although the test is not very powerful for small data sets. Coverage rates indicate that the model's 95% confidence intervals based on adjusted parameter estimates (those that correct for bias) are more likely to contain the true parameter values than are CIs around the unadjusted parameter estimates (those that do not account for bias). Bias and root mean squared errors of the estimates are better for the adjusted mean effect than the unadjusted mean effect whenever bias is present. The ML simulations also show that the model is good at distinguishing systematic study differences from publication bias. The utility of the Bayesian implementation of the model is demonstrated in two ways: (1) when ML estimation produces nonsensical parameter estimates for real data sets, Bayesian estimation does a good job of adjusting to appropriate estimates; and (2) when bias is present in small data sets, the adjusted Bayesian parameter estimates are generally closer to the true population values than the adjusted ML estimates.
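
    A rough sketch of how a beta-density weight function in the one-sided p-value can enter a weighted likelihood, with the normalizing constant computed numerically. The parameterization, optimizer, and example data are assumptions for illustration, not the dissertation's implementation.

```python
# Selection model sketch: each study's normal likelihood is multiplied by a
# beta density evaluated at its one-sided p-value, then renormalized.
import numpy as np
from scipy import stats, integrate, optimize

def neg_log_lik(params, y, v):
    mu, log_tau2, log_a, log_b = params
    tau2, a, b = np.exp(log_tau2), np.exp(log_a), np.exp(log_b)
    sd = np.sqrt(v + tau2)

    def weight(yy, vv):
        p = stats.norm.sf(yy / np.sqrt(vv))          # one-sided p-value
        return stats.beta.pdf(np.clip(p, 1e-10, 1 - 1e-10), a, b)

    ll = 0.0
    for yi, vi, si in zip(y, v, sd):
        num = stats.norm.pdf(yi, mu, si) * weight(yi, vi)
        # Normalizing constant: expected weight under the unselected model
        den, _ = integrate.quad(
            lambda t: stats.norm.pdf(t, mu, si) * weight(t, vi),
            mu - 10 * si, mu + 10 * si)
        ll += np.log(num) - np.log(den)
    return -ll

# Hypothetical data and a simple ML fit
y = np.array([0.42, 0.31, 0.58, 0.12, 0.50])
v = np.array([0.03, 0.04, 0.02, 0.05, 0.03])
start = np.array([0.2, np.log(0.01), 0.0, 0.0])
fit = optimize.minimize(neg_log_lik, start, args=(y, v), method="Nelder-Mead")
print("adjusted mean:", fit.x[0], "tau2, a, b:", np.exp(fit.x[1:]))
```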

    Estimating effect size when there is clustering in one treatment group

    Full text link

    How Responsive Is a Teacher’s Classroom Practice to Intervention? A Meta-Analysis of Randomized Field Studies

    Full text link
    While teacher effectiveness has been a particular focus of federal education policy, and districts allocate significant resources toward professional development for teachers, these efforts are guided by an unexplored assumption that classroom practice can be improved through intervention. Yet even assuming classroom practice is responsive, little information is available to inform stakeholder expectations about how much classroom practice may change through intervention, or whether particular aspects of classroom practice are more amenable to improvement. Moreover, a growing body of rigorous research evaluating programs with a focus on improving classroom practice provides a new opportunity to explore factors associated with changes in classroom practice, such as intervention, study sample, or contextual features. This study examines the question of responsiveness by conducting a meta-analysis of randomized experiments of interventions directed at classroom practice. Our empirical findings indicate that multiple dimensions of classroom practice improve meaningfully through classroom practice-directed intervention, on average, but also reveal substantial heterogeneity in the effects. Implications for practice and research are discussed.

    Correction to: Estimating effect size when there is clustering in one treatment group

    Full text link

    A parsimonious weight function for modeling publication bias.

    Full text link

    Estimation and inference for step-function selection models in meta-analysis with dependent effects

    No full text
    Meta-analyses in social science fields face multiple methodological challenges arising from how primary research studies are designed and reported. One challenge is that many primary studies report multiple relevant effect size estimates. Another is selective reporting bias, which arises when the availability of study findings is influenced by the statistical significance of results. Although many selective reporting diagnostics and bias-correction methods have been proposed, few are suitable for meta-analyses involving dependent effect sizes. Among available methods, step-function selection models are conceptually appealing and have shown promise in previous simulations. We study methods for estimating step-function models from data involving dependent effect sizes, focusing specifically on estimating parameters of the marginal distribution of effect sizes and accounting for dependence using cluster-robust variance estimation or bootstrap resampling. We describe two estimation strategies, demonstrate them by re-analyzing data from a synthesis on ego depletion effects, and evaluate their performance through an extensive simulation study under single-step selection. Simulation findings indicate that selection models provide low-bias estimates of average effect size and that clustered bootstrap confidence intervals provide acceptable coverage levels. However, adjusting for selective reporting bias using step-function models involves a bias-variance trade-off, and unadjusted estimates of average effect sizes may be preferable if the strength of selective reporting is mild.
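
    The clustered-bootstrap strategy can be sketched as follows, assuming a single step at a one-sided p-value of .025 and a generic step-function weighted likelihood. The helper names (fit_mu, clustered_bootstrap_ci), the step location, and the data layout are illustrative assumptions; the cluster-robust variance alternative is not shown.

```python
# One-step selection model (weight 1 for one-sided p < .025, weight lambda
# otherwise) fit by ML, with a clustered bootstrap that resamples whole
# studies to respect dependence among effect sizes. Illustrative sketch.
import numpy as np
from scipy import stats, optimize

STEP = 0.025  # one-sided significance threshold for the single step

def neg_log_lik(params, y, v):
    mu, log_tau2, log_lam = params
    tau2, lam = np.exp(log_tau2), np.exp(log_lam)
    sd = np.sqrt(v + tau2)
    p = stats.norm.sf(y / np.sqrt(v))              # one-sided p-values
    w = np.where(p < STEP, 1.0, lam)               # step weights
    # Normalizing constant: Pr(significant) + lambda * Pr(nonsignificant),
    # evaluated under the marginal N(mu, v + tau2) distribution
    crit = stats.norm.isf(STEP) * np.sqrt(v)       # effect size at p = STEP
    pr_sig = stats.norm.sf(crit, loc=mu, scale=sd)
    A = pr_sig + lam * (1.0 - pr_sig)
    return -np.sum(np.log(w) + stats.norm.logpdf(y, mu, sd) - np.log(A))

def fit_mu(y, v):
    start = np.array([np.mean(y), np.log(0.01), 0.0])
    res = optimize.minimize(neg_log_lik, start, args=(y, v), method="Nelder-Mead")
    return res.x[0]                                # adjusted average effect

def clustered_bootstrap_ci(y, v, study, n_boot=999, seed=1):
    """Percentile CI for the adjusted mean; `study` labels the cluster of each effect."""
    rng = np.random.default_rng(seed)
    ids = np.unique(study)
    est = []
    for _ in range(n_boot):
        draw = rng.choice(ids, size=len(ids), replace=True)   # resample studies
        idx = np.concatenate([np.flatnonzero(study == s) for s in draw])
        est.append(fit_mu(y[idx], v[idx]))
    return np.percentile(est, [2.5, 97.5])

# Usage (hypothetical data): y, v, study are 1-D arrays giving each effect's
# estimate, sampling variance, and study identifier.
```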