15 research outputs found

    Investigating and dealing with publication bias and other reporting biases in meta-analyses: a review

    A P value, or the magnitude or direction of results, can influence decisions about whether, when, and how research findings are disseminated. Whether an entire study or only a particular study result is unavailable because investigators considered the results unfavourable, bias in a meta-analysis may occur when available results differ systematically from missing results. In this paper, we summarize the empirical evidence for various reporting biases that lead to study results being unavailable for inclusion in systematic reviews, with a focus on health research. These biases include publication bias and selective nonreporting bias. We describe processes that systematic reviewers can use to minimize the risk of bias due to missing results in meta-analyses of health research, such as comprehensive searches and prospective approaches to meta-analysis. We also outline methods designed for assessing the risk of bias due to missing results in meta-analyses of health research, including tools to assess selective nonreporting of results, qualitative signals that suggest not all studies were identified, and funnel plots to identify small-study effects, one cause of which is reporting bias.
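    The funnel plot check mentioned above is often formalized as Egger's regression test: regress the standardized effect against precision, and read a non-zero intercept as a signal of asymmetry. A minimal sketch, using invented study data (the effect sizes and standard errors below are illustrative, not from any real review):

```python
import numpy as np

# Invented log odds ratios and standard errors for six studies
yi = np.array([0.41, 0.35, 0.60, 0.15, 0.72, 0.10])
sei = np.array([0.12, 0.15, 0.30, 0.10, 0.35, 0.08])

# Egger's test: regress the standardized effect on precision.
# A symmetric funnel plot gives an intercept near zero; a large
# intercept suggests small-study effects such as reporting bias.
precision = 1.0 / sei
std_effect = yi / sei
slope, intercept = np.polyfit(precision, std_effect, 1)
print(f"Egger intercept (asymmetry signal): {intercept:.2f}")
```

    As the abstract notes, asymmetry has several possible causes, so a non-zero intercept flags small-study effects in general, not publication bias specifically.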

    Assessment of regression-based methods to adjust for publication bias through a comprehensive simulation study

    Background: In meta-analysis, funnel plot asymmetry is attributed to publication bias or other small-study effects, which cause larger effects to be observed in the smaller studies. This can lead to inappropriate conclusions being drawn from a meta-analysis. If meta-analysis is to inform decision-making, a reliable way to adjust pooled estimates for potential funnel plot asymmetry is required.
    Methods: A comprehensive simulation study is presented to assess the performance of different adjustment methods, including the novel application of several regression-based methods (commonly applied to detect publication bias rather than adjust for it) and the popular Trim & Fill algorithm. Meta-analyses with binary outcomes, analysed on the log odds ratio scale, were simulated under scenarios with and without (i) publication bias and (ii) heterogeneity. Publication bias was induced through two underlying mechanisms, with the probability of publication assumed to depend on (i) the study effect size or (ii) the p-value.
    Results: The performance of all methods tended to worsen as unexplained heterogeneity increased and the number of studies in the meta-analysis decreased. Applying the methods conditional on an initial test for funnel plot asymmetry generally performed worse than applying the adjustment unconditionally. Several of the regression-based methods consistently outperformed the Trim & Fill estimators.
    Conclusion: Regression-based adjustments for publication bias and other small-study effects are easy to conduct and outperformed more established methods across a wide range of simulation scenarios.
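    A minimal sketch of one regression-based adjustment in this family, in the spirit of the precision-effect test (PET) rather than the paper's exact specification: regress the observed effects on their standard errors and take the intercept as the effect extrapolated to a hypothetical infinitely precise study (SE = 0). The data are invented for illustration:

```python
import numpy as np

# Invented meta-analysis: log odds ratios with standard errors
yi = np.array([0.41, 0.35, 0.60, 0.15, 0.72, 0.10])
sei = np.array([0.12, 0.15, 0.30, 0.10, 0.35, 0.08])

w = 1.0 / sei**2                    # inverse-variance weights
naive = np.average(yi, weights=w)   # unadjusted fixed-effect pool

# PET-style adjustment: weighted regression of effect on SE;
# the intercept estimates the effect at SE = 0.
# np.polyfit takes weights as 1/sigma, i.e. the sqrt of w above.
slope, adjusted = np.polyfit(sei, yi, 1, w=np.sqrt(w))
print(f"pooled: {naive:.2f}, adjusted: {adjusted:.2f}")
```

    The paper's finding that unconditional use outperformed test-then-adjust suggests applying an adjustment like this routinely, rather than only after a significant asymmetry test.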