
    Discrepancies in sample size calculations and data analyses reported in randomised trials: comparison of publications with protocols

    Objective: To evaluate how often sample size calculations and methods of statistical analysis are pre-specified or changed in randomised trials

    Clinical trial metadata:Defining and extracting metadata on the design, conduct, results and costs of 125 randomised clinical trials funded by the National Institute for Health Research Health Technology Assessment programme

    Background: By 2011, the Health Technology Assessment (HTA) programme had published the results of over 100 trials, with another 220 in progress. The aim of the project was to develop and pilot 'metadata' on clinical trials funded by the HTA programme. Objectives: The aim of the project was to develop and pilot questions describing clinical trials funded by the HTA programme in terms of meeting the needs of the NHS with scientifically robust studies. The objectives were to develop relevant classification systems and definitions for use in answering relevant questions and to assess their utility. Data sources: Published monographs and internal HTA documents. Review methods: A database was developed, 'populated' using retrospective data and used to answer questions under six prespecified themes. Questions were screened for feasibility in terms of data availability and/or ease of extraction. Answers were assessed by the authors in terms of completeness, success of the classification system used and resources required. Each question was scored to be retained, amended or dropped. Results: One hundred and twenty-five randomised trials were included in the database from 109 monographs. Neither the International Standard Randomised Controlled Trial Number nor the term 'randomised trial' in the title proved a reliable way of identifying randomised trials. Only limited data were available on how the trials aimed to meet the needs of the NHS. Most trials were shown to follow their protocols, but updates were often necessary as hardly any trials recruited as planned. Details were often lacking on planned statistical analyses, but we did not have access to the relevant statistical plans. Almost all the trials reported on cost-effectiveness, often in terms of both the primary outcome and quality-adjusted life-years. The cost of trials was shown to depend on the number of centres and the duration of the trial. Of the 78 questions explored, 61 were answered well: 33 fully, and 28 would require amendment were the analysis to be updated. The other 17 could not be answered with readily available data. Limitations: The study was limited by being confined to 125 randomised trials from one funder. Conclusions: Metadata on randomised controlled trials can be expanded to include aspects of design, performance, results and costs. The HTA programme should continue and extend the work reported here
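
    As a rough illustration of the kind of trial-level metadata described above (design, recruitment, centres, duration, cost-effectiveness reporting), the following is a minimal sketch of how one database record might be structured. The field names and the recruitment check are illustrative assumptions, not the HTA programme's actual classification system.

    ```python
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class TrialMetadata:
        """Illustrative metadata record for one randomised trial (hypothetical fields)."""
        isrctn: Optional[str]              # ISRCTN identifier, if one was found
        title: str
        design: str                        # e.g. "parallel group", "factorial"
        planned_sample_size: Optional[int]
        achieved_sample_size: Optional[int]
        n_centres: Optional[int]
        duration_months: Optional[float]
        reported_cost_effectiveness: bool = False
        notes: List[str] = field(default_factory=list)

        def recruited_as_planned(self) -> Optional[bool]:
            """True if the trial reached its planned sample size; None if unknown."""
            if self.planned_sample_size is None or self.achieved_sample_size is None:
                return None
            return self.achieved_sample_size >= self.planned_sample_size
    ```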

    Reporting of Results in ClinicalTrials.gov and High-Impact Journals: A Cross-Sectional Study

    In 2007, the FDA Amendments Act expanded requirements for ClinicalTrials.gov, a public clinical trial registry maintained by the U.S. National Library of Medicine, mandating results reporting within 12 months of trial completion for trials of all FDA-regulated drugs. We compared clinical trial results reported on ClinicalTrials.gov with corresponding published articles. We conducted a cross-sectional analysis of clinical trials published from July 1, 2010 through June 30, 2011 in high-impact journals (impact factor 10 or higher) that were registered and reported results on ClinicalTrials.gov. We compared trial results reported on ClinicalTrials.gov and within published articles for the following: cohort characteristics, trial intervention, primary and secondary efficacy endpoint definition(s) and results, and adverse events. Of 95 included clinical trials registered and reporting results on ClinicalTrials.gov, there were 96 corresponding publications, among which 95 (99%) had at least one discrepancy in reporting of trial details, efficacy results, or adverse events between the two sources. When comparing reporting of primary efficacy endpoints, 132 (85%) were described in both sources, 14 (9%) were described only on ClinicalTrials.gov, and 10 (6%) only within articles. Results for 30 of 132 (23%) primary endpoints could not be compared because of reporting differences between the two sources (e.g., tabular versus graphical); among the remaining 102, reported results were discordant for 21 (21%), altering interpretations for 6 (6%). When comparing reporting of secondary endpoints, 619 (30%) were described in both sources, 421 (20%) were described only on ClinicalTrials.gov, and 1049 (50%) only within articles. Results for 228 of 619 (37%) secondary endpoints could not be compared; among the remaining 391, reported results were discordant for 53 (14%). Among published clinical trials that were registered and reported results on ClinicalTrials.gov, nearly all had at least one discrepancy in reported results, including a fifth among primary endpoints. Our findings question the accuracy of both sources and raise concerns about the usefulness of results reporting to inform clinical practice and future research efforts
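
    A hypothetical sketch of the endpoint-by-endpoint comparison described above, classifying each endpoint as described in only one source, not comparable, concordant, or discordant. The function and its inputs are assumptions for illustration; the study's actual comparison rules were more detailed (e.g. handling tabular versus graphical reporting).

    ```python
    from typing import Optional

    def classify_endpoint(registry_result: Optional[float],
                          article_result: Optional[float],
                          comparable: bool,
                          tolerance: float = 1e-9) -> str:
        """Classify one endpoint across the two sources (illustrative logic only).

        registry_result / article_result are the reported effect values (None if the
        endpoint was not described in that source); comparable is False when the two
        sources report in forms that cannot be compared.
        """
        if registry_result is None and article_result is None:
            return "not described"
        if registry_result is None:
            return "article only"
        if article_result is None:
            return "registry only"
        if not comparable:
            return "not comparable"
        return "concordant" if abs(registry_result - article_result) <= tolerance else "discordant"

    # Example: an endpoint reported as 0.82 in the registry and 0.79 in the article
    print(classify_endpoint(0.82, 0.79, comparable=True))  # -> "discordant"
    ```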

    Investigating and dealing with publication bias and other reporting biases in meta-analyses:a review

    A P value, or the magnitude or direction of results, can influence decisions about whether, when, and how research findings are disseminated. Regardless of whether an entire study or a particular study result is unavailable because investigators considered the results to be unfavourable, bias in a meta-analysis may occur when available results differ systematically from missing results. In this paper, we summarize the empirical evidence for various reporting biases that lead to study results being unavailable for inclusion in systematic reviews, with a focus on health research. These biases include publication bias and selective nonreporting bias. We describe processes that systematic reviewers can use to minimize the risk of bias due to missing results in meta-analyses of health research, such as comprehensive searches and prospective approaches to meta-analysis. We also outline methods that have been designed for assessing risk of bias due to missing results in meta-analyses of health research, including using tools to assess selective nonreporting of results, ascertaining qualitative signals that suggest not all studies were identified, and generating funnel plots to identify small-study effects, one cause of which is reporting bias
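
    As a concrete illustration of the funnel-plot approach mentioned above, here is a minimal sketch that plots study effect estimates against their standard errors around a fixed-effect pooled estimate; the data are invented and numpy/matplotlib are assumed to be available. Asymmetry around the pooled line can indicate small-study effects, one cause of which is reporting bias.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical study-level data: effect estimates (e.g. log odds ratios) and standard errors
    effects = np.array([0.10, 0.25, 0.40, 0.05, 0.60, 0.35, 0.55])
    ses     = np.array([0.05, 0.10, 0.20, 0.08, 0.30, 0.15, 0.28])

    pooled = np.average(effects, weights=1 / ses**2)  # fixed-effect (inverse-variance) pooled estimate

    fig, ax = plt.subplots()
    ax.scatter(effects, ses)
    ax.axvline(pooled, linestyle="--", label="pooled estimate")
    ax.invert_yaxis()                  # convention: more precise (smaller SE) studies at the top
    ax.set_xlabel("Effect estimate")
    ax.set_ylabel("Standard error")
    ax.set_title("Funnel plot (illustrative data)")
    ax.legend()
    plt.show()
    ```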

    Agreements between Industry and Academia on Publication Rights: A Retrospective Study of Protocols and Publications of Randomized Clinical Trials.

    BACKGROUND: Little is known about publication agreements between industry and academic investigators in trial protocols and the consistency of these agreements with corresponding statements in publications. We aimed to investigate (i) the existence and types of publication agreements in trial protocols, (ii) the completeness and consistency of the reporting of these agreements in subsequent publications, and (iii) the frequency of co-authorship by industry employees. METHODS AND FINDINGS: We used a retrospective cohort of randomized clinical trials (RCTs) based on archived protocols approved by six research ethics committees between 13 January 2000 and 25 November 2003. Only RCTs with industry involvement were eligible. We investigated the documentation of publication agreements in RCT protocols and statements in corresponding journal publications. Of 647 eligible RCT protocols, 456 (70.5%) mentioned an agreement regarding publication of results. Of these 456, 393 (86.2%) documented an industry partner's right to disapprove or at least review proposed manuscripts; 39 (8.6%) agreements placed no constraints on publication. The remaining 24 (5.3%) protocols referred to separate agreement documents not accessible to us. Of those 432 protocols with an accessible publication agreement, 268 (62.0%) trials were published. Most agreements documented in the protocol were not reported in the subsequent publication (197/268 [73.5%]). Of 71 agreements reported in publications, 52 (73.2%) were concordant with those documented in the protocol. In 14 of 37 (37.8%) publications in which statements suggested unrestricted publication rights, at least one co-author was an industry employee. In 25 protocol-publication pairs, author statements in publications suggested no constraints, but 18 corresponding protocols documented restricting agreements. CONCLUSIONS: Publication agreements constraining academic authors' independence are common. Journal articles seldom report on publication agreements, and, if they do, statements can be discrepant with the trial protocol

    Challenges for funders in monitoring compliance with policies on clinical trials registration and reporting: analysis of funding and registry data in the UK

    Objectives: To evaluate compliance by researchers with funder requirements on clinical trial transparency, including identifying key areas for improvement; to assess the completeness, accuracy and suitability for annual compliance monitoring of the data routinely collected by a research funding body. / Design: Descriptive analysis of clinical trials funded between February 2011 and January 2017 against funder policy requirements. / Setting: Public medical research funding body in the UK. / Data sources: Relevant clinical trials were identified from grant application details, post-award grant monitoring systems and the International Standard Randomised Controlled Trial Number (ISRCTN) registry. / Main outcome measure: The proportion of all Medical Research Council (MRC)-funded clinical trials that were (a) registered in a clinical trial registry and (b) publicly reported summary results within 2 years of completion. / Results: There were 175 grants awarded that included a clinical trial, and all trials were registered in a public trials registry. Of the 62 trials that had been completed for more than 24 months, 42 (68%) had publicly reported the main findings by 24 months after trial completion; 18 of these achieved this within 12 months of completion. 11 (18%) trials took >24 months to report and 9 (15%) completed trials had not yet reported findings. Five datasets were shared with other researchers. / Conclusions: Compliance with the funder policy requirements on trial registration was excellent. Reporting of the main findings was achieved for most trials within 24 months of completion; however, the number of unreported trials remains a concern and should be a focus for future funder policy initiatives. Identifying trials from grant management and grant monitoring systems was challenging; therefore, funders should ensure investigators reliably provide trial registries with information and regularly update entries with details of trial publications and protocols
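
    A minimal sketch of how the two main outcome measures might be computed from trial-level records, assuming hypothetical fields for registration status, completion date and results-reporting date; this is illustrative only, not the funder's actual monitoring code.

    ```python
    from datetime import date

    # Hypothetical per-trial records: registration flag, completion date, and the date
    # results were made public (None if not yet reported)
    trials = [
        {"registered": True, "completed": date(2015, 3, 1), "reported": date(2016, 9, 1)},
        {"registered": True, "completed": date(2014, 6, 1), "reported": date(2017, 1, 15)},
        {"registered": True, "completed": date(2016, 1, 1), "reported": None},
    ]

    registered = sum(t["registered"] for t in trials)
    completed_24m = [t for t in trials if (date.today() - t["completed"]).days > 730]
    reported_in_24m = sum(
        1 for t in completed_24m
        if t["reported"] is not None and (t["reported"] - t["completed"]).days <= 730
    )

    print(f"Registered: {registered}/{len(trials)}")
    print(f"Reported within 24 months of completion: {reported_in_24m}/{len(completed_24m)}")
    ```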

    Keratinocyte growth factor for the treatment of the acute respiratory distress syndrome (KARE): a randomised, double-blind, placebo-controlled phase 2 trial

    (A) Immunofluorescence signal for dystrophin is significantly reduced in the SSI heart (bottom left panel) compared with the immunofluorescent signal in the SHAM heart (upper left panel), and the SHAM+ALLN (upper right panel) and SSI+ALLN (bottom right panel) myocardium. (B) Protein levels of dystrophin in the SHAM, SSI, SHAM+ALLN and SSI+ALLN hearts were measured 24 h after the CLP procedure and were expressed in arbitrary units (AUs). α-Tubulin was used to determine equivalent loading conditions. The results (n = 6 per group) are representative of three different experiments. Scale bars indicate 50 μm.