
    Extent of publication bias in different categories of research cohorts: a meta-analysis of empirical studies

    Abstract
    Background: The validity of research synthesis is threatened if published studies comprise a biased selection of all studies that have been conducted. We conducted a meta-analysis to ascertain the strength and consistency of the association between study results and formal publication.
    Methods: The Cochrane Methodology Register Database, MEDLINE and other electronic bibliographic databases were searched (to May 2009) to identify empirical studies that tracked a cohort of studies and reported the odds of formal publication by study results. Reference lists of retrieved articles were also examined for relevant studies. Odds ratios were used to measure the association between formal publication and significant or positive results. Included studies were separated into subgroups according to the starting time of follow-up, and results from individual cohort studies within the subgroups were quantitatively pooled.
    Results: We identified 12 cohort studies that followed up research from inception, four that included trials submitted to a regulatory authority, 28 that assessed the fate of studies presented as conference abstracts, and four that followed manuscripts submitted to journals. The pooled odds ratio of publication of studies with positive results, compared to those without positive results (publication bias), was 2.78 (95% CI: 2.10 to 3.69) in cohorts followed from inception, 5.00 (95% CI: 2.01 to 12.45) in trials submitted to a regulatory authority, 1.70 (95% CI: 1.44 to 2.02) in abstract cohorts, and 1.06 (95% CI: 0.80 to 1.39) in cohorts of manuscripts.
    Conclusion: Dissemination of research findings is likely to be a biased process. Publication bias appears to occur early, mainly before the presentation of findings at conferences or submission of manuscripts to journals.
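
    A minimal sketch of the kind of pooling described above, assuming inverse-variance weighting of log odds ratios; the abstract only states that cohort results were "quantitatively pooled", so the weighting scheme, the helper name pooled_odds_ratio, and the input numbers are illustrative assumptions, not the authors' code or data:

    import numpy as np

    def pooled_odds_ratio(odds_ratios, ci_lowers, ci_uppers):
        """Pool study-level odds ratios with inverse-variance weights.

        Standard errors are recovered from the reported 95% CIs on the
        log-odds scale: se = (ln(upper) - ln(lower)) / (2 * 1.96).
        """
        log_or = np.log(odds_ratios)
        se = (np.log(ci_uppers) - np.log(ci_lowers)) / (2 * 1.96)
        weights = 1.0 / se**2                              # inverse-variance weights
        pooled_log = np.sum(weights * log_or) / np.sum(weights)
        pooled_se = np.sqrt(1.0 / np.sum(weights))
        ci = np.exp(pooled_log + np.array([-1.96, 1.96]) * pooled_se)
        return np.exp(pooled_log), ci

    # Hypothetical cohort-level odds ratios and 95% CIs, for illustration only
    or_hat, (lo, hi) = pooled_odds_ratio(
        np.array([2.5, 3.1, 2.2]),
        np.array([1.4, 1.8, 1.1]),
        np.array([4.5, 5.3, 4.4]),
    )
    print(f"Pooled OR = {or_hat:.2f} (95% CI {lo:.2f} to {hi:.2f})")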

    Inconsistency between direct and indirect comparisons of competing interventions: meta-epidemiological study

    Objective: To investigate the agreement between direct and indirect comparisons of competing healthcare interventions.
    Design: Meta-epidemiological study based on a sample of meta-analyses of randomised controlled trials.
    Data sources: Cochrane Database of Systematic Reviews and PubMed.
    Inclusion criteria: Systematic reviews that provided sufficient data for both a direct comparison and an independent indirect comparison of two interventions on the basis of a common comparator, and in which the odds ratio could be used as the outcome statistic.
    Main outcome measure: Inconsistency, measured by the difference in the log odds ratio between the direct and indirect methods.
    Results: The study included 112 independent trial networks (including 1552 trials with 478 775 patients in total) that allowed both direct and indirect comparison of two interventions. Indirect comparison had already been explicitly done in only 13 of the 85 Cochrane reviews included. The inconsistency between the direct and indirect comparison was statistically significant in 16 cases (14%, 95% confidence interval 9% to 22%). Statistically significant inconsistency was associated with fewer trials, subjectively assessed outcomes, and statistically significant effects of treatment in either the direct or indirect comparison. Owing to considerable inconsistency, many (14/39) of the statistically significant effects by direct comparison became non-significant when the direct and indirect estimates were combined.
    Conclusions: Significant inconsistency between direct and indirect comparisons may be more prevalent than previously observed. Direct and indirect estimates should be combined in mixed treatment comparisons only after adequate assessment of the consistency of the evidence.
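
    The inconsistency measure named above has a simple closed form under the standard adjusted indirect comparison through a common comparator; the sketch below assumes that Bucher-style approach (the function name inconsistency_test and the numbers are hypothetical, not taken from the study):

    import numpy as np
    from scipy import stats

    def inconsistency_test(log_or_ab_direct, se_ab_direct,
                           log_or_ac, se_ac,
                           log_or_bc, se_bc):
        """Difference between direct and indirect log odds ratios, with a z-test p value."""
        # Indirect A vs B via common comparator C: ln(OR_AB) = ln(OR_AC) - ln(OR_BC)
        log_or_indirect = log_or_ac - log_or_bc
        se_indirect = np.sqrt(se_ac**2 + se_bc**2)
        diff = log_or_ab_direct - log_or_indirect          # the inconsistency
        se_diff = np.sqrt(se_ab_direct**2 + se_indirect**2)
        z = diff / se_diff
        p = 2 * stats.norm.sf(abs(z))                      # two-sided p value
        return diff, p

    # Hypothetical log odds ratios and standard errors, for illustration only
    diff, p = inconsistency_test(np.log(0.80), 0.10,
                                 np.log(0.60), 0.12,
                                 np.log(0.90), 0.15)
    print(f"Inconsistency (log OR scale) = {diff:.3f}, p = {p:.3f}")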