
    Impact of Reporting Bias in Network Meta-Analysis of Antidepressant Placebo-Controlled Trials

    BACKGROUND: Indirect comparisons of competing treatments by network meta-analysis (NMA) are increasingly in use, but reporting bias has received little attention in this context. We aimed to assess the impact of such bias in NMAs. METHODS: We used data from 74 FDA-registered placebo-controlled trials of 12 antidepressants and their 51 matching publications. For each dataset, NMA was used to estimate the effect sizes for the 66 possible pair-wise comparisons of these drugs, the probability of each drug being the best, and the ranking of the drugs. To assess the impact of reporting bias, we compared the NMA results for the 51 published trials with those for the 74 FDA-registered trials. To assess how reporting bias affecting only one drug may affect the ranking of all drugs, we performed 12 hypothetical NMAs, each using published data for one drug and FDA data for the 11 other drugs. FINDINGS: Pair-wise effect sizes derived from the NMA of published data and those from the NMA of FDA data differed in absolute value by at least 100% in 30 of the 66 pair-wise comparisons (45%). Depending on the dataset used, the top 3 agents differed in both composition and order. When reporting bias hypothetically affected only one drug, the affected drug ranked first in 5 of the 12 NMAs but only second (n = 2), fourth (n = 1) or eighth (n = 2) in the NMA of the complete FDA network. CONCLUSIONS: In this particular network, reporting bias distorted NMA-based estimates of treatment efficacy and modified the drug rankings. The effect of reporting bias in NMAs may differ from that in classical meta-analyses in that bias affecting only one drug may affect the ranking of all drugs.
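    As a minimal sketch of the comparison step described above, the Python snippet below takes pair-wise effect sizes estimated from two versions of the same evidence network (published vs. FDA-registered data) and flags comparisons whose estimates differ by at least 100% in absolute value. All drug names and effect sizes are invented placeholders, the NMA estimation itself is assumed to have already been run, and "relative to the FDA-based estimate" is one plausible reading of the paper's criterion.

        # Hypothetical NMA point estimates (standardized mean differences)
        # for three pair-wise comparisons, from two versions of the network.
        published = {("drugA", "drugB"): 0.10,
                     ("drugA", "drugC"): 0.35,
                     ("drugB", "drugC"): 0.25}
        fda = {("drugA", "drugB"): 0.22,
               ("drugA", "drugC"): 0.30,
               ("drugB", "drugC"): 0.08}

        # Flag pair-wise comparisons whose estimates differ by at least
        # 100% in absolute value, taking the FDA estimate as the reference.
        for pair, es_pub in published.items():
            es_fda = fda[pair]
            rel_diff = abs(es_pub - es_fda) / abs(es_fda)
            if rel_diff >= 1.0:
                print(f"{pair}: published={es_pub:+.2f}, FDA={es_fda:+.2f}, "
                      f"differs by {rel_diff:.0%}")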

    The use of clinical study reports to enhance the quality of systematic reviews: a survey of systematic review authors

    Background: Clinical study reports (CSRs) are produced for marketing authorisation applications. They often contain considerably more information about, and data from, clinical trials than the corresponding journal publications. Use of data from CSRs might help circumvent reporting bias, but many researchers appear to be unaware of their existence or potential value. Our survey aimed to gain insight into the level of familiarity, understanding and use of CSRs, and to raise awareness of their potential within the systematic review community. We also aimed to explore the potential barriers faced when obtaining and using CSRs in systematic reviews. Methods: We conducted an online survey of systematic reviewers who (i) had requested or used CSRs, (ii) had considered but not used CSRs or (iii) had not considered using CSRs. Cochrane reviewers were contacted twice via the Cochrane monthly digest. Non-Cochrane reviewers were reached via journal and other website postings. Results: One hundred sixty respondents answered an open invitation and completed the questionnaire; 20/160 (13%) had previously requested or used CSRs and other regulatory documents, 7/160 (4%) had considered but not used CSRs and 133/160 (83%) had never considered this data source. Survey respondents mainly sought data from the European Medicines Agency (EMA) and/or the Food and Drug Administration (FDA). Motivation for using CSRs stemmed mainly from concerns about reporting bias (11/20, 55%), specifically outcome reporting bias (11/20, 55%) and publication bias (5/20, 25%). The barriers to using CSRs noted by all types of respondents included currently limited access to these documents (n = 43), the time and resources needed to obtain and include these data in evidence syntheses (n = 25) and a lack of guidance on how to use these sources in systematic reviews (n = 26). Conclusions: Most respondents (irrespective of whether they had previously used CSRs) agreed that access to CSRs is important and suggested that further guidance on how to use and include these data would help promote their use in future systematic reviews. Most respondents who had received CSRs considered them valuable in their systematic review and/or meta-analysis.

    Reporting of Adverse Events in Published and Unpublished Studies of Health Care Interventions: A Systematic Review

    BACKGROUND: We performed a systematic review to assess whether the underreporting of adverse events (AEs) in the published medical literature can be quantified, compared with other, non-published sources documenting the results of the same clinical trials, and whether we can measure the impact this underreporting has on systematic reviews of adverse events. METHODS AND FINDINGS: Studies were identified from 15 databases (including MEDLINE and Embase) and by handsearching, reference checking, internet searches, and contacting experts. The last database searches were conducted in July 2016. Twenty-eight methodological evaluations met the inclusion criteria. Of these, 9 compared the proportion of trials reporting adverse events by publication status: the median percentage of published documents with adverse events information was 46%, compared with 95% of the corresponding unpublished documents. There was a similar pattern for unmatched studies, in which 43% of published studies contained adverse events information compared with 83% of unpublished studies. A total of 11 studies compared the numbers of adverse events in matched published and unpublished documents. The percentage of adverse events that would have been missed had each analysis relied only on the published versions varied between 43% and 100%, with a median of 64%. Within these 11 studies, 24 comparisons of named adverse events such as death, suicide, or respiratory adverse events were undertaken; in 18 of the 24 comparisons, the number of named adverse events was higher in unpublished than in published documents. Two further studies demonstrated that substantially more types of adverse events are reported in matched unpublished than published documents. Twenty meta-analyses reported the odds ratios (ORs) and/or risk ratios (RRs) for adverse events with and without unpublished data. Inclusion of unpublished data increased the precision of the pooled estimates (narrower 95% confidence intervals) in 15 of the 20 pooled analyses but did not markedly change the direction or statistical significance of the risk in most cases. The main limitations of this review are that the included case examples represent only a small number amongst thousands of meta-analyses of harms and that the included studies may themselves suffer from publication bias, whereby substantial differences between published and unpublished data are more likely to be published. CONCLUSIONS: There is strong evidence that much of the information on adverse events remains unpublished and that the number and range of adverse events are higher in unpublished than in published versions of the same study. The inclusion of unpublished data can also reduce the imprecision of pooled effect estimates during meta-analysis of adverse events.
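    A minimal sketch, with hypothetical study results, of the pooling comparison reported above: a pooled odds ratio computed by standard inverse-variance weighting of log-ORs, with and without unpublished studies. It illustrates how adding data can narrow the 95% confidence interval without changing the direction of the estimate; none of the numbers below come from the review.

        import math

        def pooled_or(studies):
            """studies: list of (log_or, se). Fixed-effect inverse-variance
            pooling; returns (pooled OR, lower 95% CL, upper 95% CL)."""
            weights = [1.0 / se ** 2 for _, se in studies]
            log_pool = sum(w * lo for (lo, _), w in zip(studies, weights)) / sum(weights)
            se_pool = math.sqrt(1.0 / sum(weights))
            return (math.exp(log_pool),
                    math.exp(log_pool - 1.96 * se_pool),
                    math.exp(log_pool + 1.96 * se_pool))

        # Invented (log-OR, standard error) pairs for published and
        # unpublished reports of the same adverse event.
        published = [(math.log(1.4), 0.30), (math.log(1.2), 0.35)]
        unpublished = [(math.log(1.3), 0.25), (math.log(1.1), 0.28)]

        print("Published only:   OR=%.2f (95%% CI %.2f-%.2f)" % pooled_or(published))
        print("With unpublished: OR=%.2f (95%% CI %.2f-%.2f)" % pooled_or(published + unpublished))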

    Extent of non-publication in cohorts of studies approved by research ethics committees or included in trial registries

    BACKGROUND: The synthesis of published research in systematic reviews is essential when providing evidence to inform clinical and health policy decision-making. However, the validity of systematic reviews is threatened if journal publications represent a biased selection of all studies that have been conducted (dissemination bias). To investigate the extent of dissemination bias, we conducted a systematic review that determined the proportion of studies published as peer-reviewed journal articles and investigated factors associated with full publication in cohorts of studies (i) approved by research ethics committees (RECs) or (ii) included in trial registries. METHODS AND FINDINGS: Four bibliographic databases were searched for methodological research projects (MRPs) without limitations on publication year, language or study location. The searches were supplemented by handsearching the references of included MRPs. We estimated the proportion of studies published using prediction intervals (PIs) and random-effects meta-analysis, and used pooled odds ratios (ORs) to express associations between study characteristics and journal publication. Seventeen MRPs (23 publications) evaluated cohorts of studies approved by RECs; the proportion of published studies had a PI of 22% to 72%, and the weighted pooled proportion when combining estimates would be 46.2% (95% CI 40.2%-52.4%, I2 = 94.4%). Twenty-two MRPs (22 publications) evaluated cohorts of studies included in trial registries; the PI of the proportion published ranged from 13% to 90%, and the weighted pooled proportion would be 54.2% (95% CI 42.0%-65.9%, I2 = 98.9%). REC-approved studies with statistically significant results were more likely to be published than those without (pooled OR 2.8; 95% CI 2.2-3.5), and phase III trials were more likely to be published than phase II trials (pooled OR 2.0; 95% CI 1.6-2.5). The probability of publication within two years after study completion ranged from 7% to 30%. CONCLUSIONS: A substantial proportion of the studies approved by RECs or included in trial registries remains unpublished. Given the large heterogeneity, any prediction of the publication probability for a future study is very uncertain. Non-publication of research is not a random process; for example, it is associated with the direction of study findings. Our findings suggest that the dissemination of research findings is biased.
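    A minimal sketch, with invented cohort counts, of the kind of analysis described above: a random-effects pooled proportion of published studies on the logit scale, using the DerSimonian-Laird estimator of between-study variance, plus an approximate 95% prediction interval for a new cohort. It illustrates the method only and does not reproduce the review's results.

        import math

        # Hypothetical cohorts: (studies published, studies in cohort).
        cohorts = [(55, 120), (30, 90), (70, 110), (25, 80)]

        ys, vs = [], []
        for k, n in cohorts:
            ys.append(math.log(k / (n - k)))    # logit of the published proportion
            vs.append(1.0 / k + 1.0 / (n - k))  # approximate variance of the logit

        # Fixed-effect pooling, then DerSimonian-Laird between-study variance.
        w = [1.0 / v for v in vs]
        y_fixed = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
        q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, ys))
        tau2 = max(0.0, (q - (len(ys) - 1)) / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))

        # Random-effects pooled estimate and its standard error.
        w_re = [1.0 / (v + tau2) for v in vs]
        y_re = sum(wi * yi for wi, yi in zip(w_re, ys)) / sum(w_re)
        se_re = math.sqrt(1.0 / sum(w_re))

        def expit(x):
            return 1.0 / (1.0 + math.exp(-x))

        print(f"Pooled proportion published: {expit(y_re):.1%}")

        # 95% prediction interval for a new cohort (normal quantile used for
        # simplicity; a t quantile is more usual with this few studies).
        half = 1.96 * math.sqrt(tau2 + se_re ** 2)
        print(f"95% PI: {expit(y_re - half):.1%} to {expit(y_re + half):.1%}")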

    Why we need easy access to all data from all clinical trials and how to accomplish it

    International calls for registering all trials involving humans, and for sharing the results and sometimes also the raw data and trial protocols, have increased in recent years. Such calls have come, for example, from the Organisation for Economic Co-operation and Development (OECD), the World Health Organization (WHO), the US National Institutes of Health, the US Congress, the European Commission, the European Ombudsman, journal editors, The Cochrane Collaboration, and several funders, including the UK Medical Research Council, the Wellcome Trust, the Bill and Melinda Gates Foundation and the Hewlett Foundation.