
    Association of trial registration with the results and conclusions of published trials of new oncology drugs

    Background: Registration of clinical trials has been introduced largely to reduce bias toward statistically significant results in the trial literature. Doubts remain about whether advance registration alone is an adequate measure to reduce selective publication, selective outcome reporting, and biased design. One of the first areas of medicine in which registration was widely adopted was oncology, although the bulk of registered oncology trials remain unpublished. The net influence of registration on the literature remains untested. This study compares the prevalence of favorable results and conclusions among published reports of registered and unregistered randomized controlled trials of new oncology drugs. Methods: We conducted a cross-sectional study of published original research articles reporting clinical trials evaluating the efficacy of drugs newly approved for antimalignancy indications by the United States Food and Drug Administration (FDA) from 2000 through 2005. Drugs receiving first-time approval for indications in oncology were identified using the FDA web site and Thomson Centerwatch. Relevant trial reports were identified using PubMed and the Cochrane Library. Evidence of advance trial registration was obtained by a search of clinicaltrials.gov, WHO, ISRCTN, NCI-PDQ trial databases and corporate trial registries, as well as articles themselves. Data on blinding, results for primary outcomes, and author conclusions were extracted independently by two coders. Univariate and multivariate logistic regression identified associations between favorable results and conclusions and independent variables including advance registration, study design characteristics, and industry sponsorship. Results: Of 137 original research reports from 115 distinct randomized trials assessing 25 newly approved drugs for treating cancer, the 54 publications describing data from trials registered prior to publication were as likely to report statistically significant efficacy results and reach conclusions favoring the test drug (for results, OR = 1.77; 95% CI = 0.87 to 3.61) as reports of trials not registered in advance. In multivariate analysis, reports of prior registered trials were again as likely to favor the test drug (OR = 1.29; 95% CI = 0.54 to 3.08); large sample sizes and surrogate outcome measures were statistically significant predictors of favorable efficacy results at p < 0.05. Subgroup analysis of the main reports from each trial (n = 115) similarly indicated that registered trials were as likely to report results favoring the test drug as trials not registered in advance (OR = 1.11; 95% CI = 0.44 to 2.80), and also that large trials and trials with nonstringent blinding were significantly more likely to report results favoring the test drug. Conclusions: Trial registration alone, without a requirement for full reporting of research results, does not appear to reduce a bias toward results and conclusions favoring new drugs in the clinical trials literature. Our findings support the inclusion of full results reporting in trial registers, as well as protocols to allow assessment of whether results have been completely reported.
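For readers unfamiliar with how such odds ratios are obtained, the following is a minimal sketch of a univariate logistic regression of a favorable-result indicator on an advance-registration indicator, reporting the odds ratio and its 95% confidence interval. The data are randomly generated and every variable name is hypothetical; this is not the study's dataset or analysis code.

```python
# Minimal sketch (hypothetical data): odds ratio and 95% CI from a univariate
# logistic regression, as used to relate advance registration to favorable results.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "registered": rng.integers(0, 2, size=137),  # 1 = trial registered in advance
    "favorable": rng.integers(0, 2, size=137),   # 1 = statistically significant efficacy result
})

X = sm.add_constant(df[["registered"]])           # intercept + single predictor
fit = sm.Logit(df["favorable"], X).fit(disp=0)    # univariate logistic regression

odds_ratio = np.exp(fit.params["registered"])     # exponentiate the coefficient
ci_low, ci_high = np.exp(fit.conf_int().loc["registered"])
print(f"OR = {odds_ratio:.2f}, 95% CI = {ci_low:.2f} to {ci_high:.2f}")
```

Adding further columns to X (for example, indicators for large sample size, surrogate outcomes, or industry sponsorship) would turn this into a multivariate model of the kind described in the abstract.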

    The challenges faced in the design, conduct and analysis of surgical randomised controlled trials

    Randomised evaluations of surgical interventions are rare; some interventions have been widely adopted without rigorous evaluation. Unlike other medical areas, the randomised controlled trial (RCT) design has not become the default study design for the evaluation of surgical interventions. Surgical trials are difficult to undertake successfully and pose particular practical and methodological challenges. However, RCTs have played a role in the assessment of surgical innovations and there is scope and need for greater use. This article will consider the design, conduct and analysis of an RCT of a surgical intervention. The issues will be reviewed under three headings: the timing of the evaluation, defining the research question and trial design issues. Recommendations on the conduct of future surgical RCTs are made. Collaboration between research and surgical communities is needed to address the distinct issues raised by the assessment of surgical interventions and enable the conduct of appropriate and well-designed trials. The Health Services Research Unit is funded by the Scottish Government Health Directorates.

    What are the main inefficiencies in trial conduct: a survey of UKCRC registered clinical trials units in the UK

    BACKGROUND: The UK Clinical Research Collaboration (UKCRC) registered Clinical Trials Units (CTUs) Network aims to support high-quality, efficient and sustainable clinical trials research in the UK. To better understand the challenges in efficient trial conduct, and to help prioritise tackling these challenges, we surveyed CTU staff. The aim was to identify important inefficiencies during two key stages of the trial conduct life cycle: (i) from grant award to first participant, (ii) from first participant to reporting of final results. METHODS: Respondents were asked to list their top three inefficiencies from grant award to recruitment of the first participant, and from recruitment of the first participant to publication of results. Free-text space allowed respondents to explain why they thought these were important. The survey was constructed using SurveyMonkey and circulated to the 45 registered CTUs in May 2013. Respondents were asked to name their unit and job title, but were otherwise anonymous. Free-text responses were coded into broad categories. RESULTS: There were 43 respondents from 25 CTUs. The top inefficiency between grant award and recruitment of the first participant was obtaining research and development (R&D) approvals, reported by 23 respondents (53%), followed by contracts, reported by 22 (51%), and other approvals, reported by 13 (30%). The top inefficiency from recruitment of the first participant to publication of results was failure to meet recruitment targets, reported by 19 (44%) respondents. A common comment was that this reflected overoptimistic or inaccurate estimates of recruitment at sites. Data management, including case report form design and delays in resolving data queries with sites, was reported as an important inefficiency by 11 (26%) respondents, and preparation and submission for publication by 9 (21%). CONCLUSIONS: Recommendations for improving the efficiency of trial conduct within the CTUs network include: further reducing unnecessary bureaucracy in approvals and contracting; improving training for site staff; setting realistic recruitment targets based on appropriate feasibility assessment; developing training across the network; improving the working relationships between chief investigators and units; encouraging funders to release sufficient funding to allow prompt recruitment of trial staff; and encouraging more research into how to improve the efficiency and quality of trial conduct.

    Effects of the search technique on the measurement of the change in quality of randomized controlled trials over time in the field of brain injury

    BACKGROUND: To determine whether the search technique used to sample randomized controlled trial (RCT) manuscripts from a field of medical science can influence the measurement of the change in quality over time in that field. METHODS: RCT manuscripts in the field of brain injury were identified using two readily available search techniques: (1) a PubMed MEDLINE search, and (2) the Cochrane Injuries Group (CIG) trials registry. Seven criteria of quality were assessed in each manuscript and related to the year of publication of the RCT manuscripts by regression analysis. RESULTS: No change in the frequency of reporting of any individual quality criterion was found in the sample of RCT manuscripts identified by the PubMed MEDLINE search. In the RCT manuscripts of the CIG trials registry, three of the seven criteria showed significant or near-significant increases over time. CONCLUSIONS: We demonstrated that measuring the change in quality over time of a sample of RCT manuscripts from the field of brain injury can be greatly affected by the search technique. This poorly recognized factor may make measurements of the change in RCT quality over time within a given field of medical science unreliable.

    No short-cut in assessing trial quality: a case study

    Assessing the quality of included trials is a central part of a systematic review. Many checklist-type instruments exist for doing this. Using a trial of antibiotic treatment for acute otitis media (Burke et al., BMJ, 1991) as the case study, this paper illustrates some limitations of the checklist approach to trial quality assessment. The general verdict from the checklist-type evaluations in nine relevant systematic reviews was that Burke et al. (1991) is a good-quality trial, and all relevant meta-analyses used its data extensively to formulate therapeutic evidence. My comprehensive evaluation, on the other hand, brought to the surface a series of serious problems in the design, conduct, analysis and reporting of this trial that were missed by the earlier evaluations. A checklist- or instrument-based approach, if used as a short-cut, may at times rate deeply flawed trials as good-quality trials. Checklists are crucial, but they need to be augmented with an in-depth review and, where possible, a scrutiny of the protocol, trial records, and original data. The extent and severity of the problems I uncovered for this particular trial warrant an independent audit before it is included in a systematic review.

    Likely country of origin in publications on randomised controlled trials and controlled clinical trials during the last 60 years

    BACKGROUND: The number of publications on clinical trials is unknown, as are the countries publishing most trial reports. To examine these questions we performed an ecological study. METHODS: We searched the 454,449 records on publications in The Cochrane Central Register of Controlled Trials (CENTRAL) in The Cochrane Library, Issue 3, 2005 (CD-ROM version) for possible country of origin. We inspected a random sample of 906 records for information on country and type of trial. RESULTS: There was an exponential growth of publications on randomised controlled trials and controlled clinical trials since 1946, but the growth seems to have ceased since 2000. We identified the possible country of origin of 210,974 publications (46.4%). The USA is leading with about 46,789 publications, followed by the UK, Germany, Italy, the Netherlands, Canada, and France. Sweden becomes the leader with 891 publications per million inhabitants during the last 60 years, followed by Denmark (n = 864), New Zealand (n = 791), Finland (n = 781), the Netherlands (n = 570), Switzerland (n = 547), and Norway (n = 543). In-depth assessment of the random sample backed these findings. CONCLUSION: Many records lacked country of origin, even after the additional scrutiny. The number of publications on clinical trials increased exponentially until the turn of the century. Rather small, democratic, and wealthy countries take the lead when the number of publications on clinical trials is calculated per million inhabitants. If all countries produced the same number of trials per capita as these countries, this could mean thousands of new effective treatments during the next 60 years.
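The per-capita ranking above follows from a simple normalisation: publications per million inhabitants equals the publication count divided by the population in millions. A minimal sketch, using the US count from the abstract and an assumed approximate population figure (not taken from the paper):

```python
# Publications per million inhabitants = publications / population (in millions).
us_publications = 46_789          # from the abstract
us_population_millions = 296      # assumed approximate mid-2000s figure, not from the paper

per_million = us_publications / us_population_millions
print(f"USA: ~{per_million:.0f} publications per million inhabitants")
# Compare with the abstract's per-capita leaders: Sweden 891, Denmark 864, New Zealand 791.
```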

    Deficiencies in the transfer and availability of clinical trials evidence: A review of existing systems and standards

    Background: Decisions concerning drug safety and efficacy are generally based on pivotal evidence provided by clinical trials. Unfortunately, finding the relevant clinical trials is difficult, and their results are only available in text-based reports. Systematic reviews aim to provide a comprehensive overview of the evidence in a specific area, but may not provide the data required for decision making. Methods: We review and analyze the existing information systems and standards for aggregate-level clinical trials information from the perspective of systematic review and evidence-based decision making. Results: The technology currently used has major shortcomings, which cause deficiencies in the transfer, traceability and availability of clinical trials information. Specifically, the data available to decision makers are insufficiently structured, and consequently decisions cannot be properly traced back to the underlying evidence. Regulatory submission, trial publication, trial registration, and systematic review produce unstructured datasets that are insufficient for supporting evidence-based decision making. Conclusions: The current situation is a hindrance to policy decision makers, as it prevents fully transparent decision making and the development of more advanced decision support systems. Addressing the identified deficiencies would enable more efficient, informed, and transparent evidence-based medical decision making.

    Impact Factor: outdated artefact or stepping-stone to journal certification?

    A review of Garfield's journal impact factor and its specific implementation as the Thomson Reuters Impact Factor reveals several weaknesses in this commonly used indicator of journal standing. Key limitations include the mismatch between citing and cited documents, the deceptive display of three decimals that belies the real precision, and the absence of confidence intervals. These are minor issues that are easily amended and should be corrected, but more substantive improvements are needed. There are indications that the scientific community seeks and needs better certification of journal procedures to improve the quality of published science. Comprehensive certification of editorial and review procedures could help ensure adequate procedures to detect duplicate and fraudulent submissions.
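To make the criticism about spurious precision concrete, the sketch below treats the two-year impact factor as the mean of a per-article citation distribution and attaches a bootstrap 95% confidence interval, something the published indicator omits. The citation counts are simulated; nothing here comes from Thomson Reuters data.

```python
# Sketch with simulated data: a two-year impact factor is a mean of a skewed
# per-article citation distribution, so three decimal places without a CI
# overstate its precision.
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical citations received this year by each article the journal
# published in the previous two years (skewed, as citation counts usually are).
citations = rng.negative_binomial(1, 0.25, size=200)

impact_factor = citations.mean()

# Bootstrap 95% CI for the mean, to make the uncertainty explicit.
boot_means = [rng.choice(citations, size=citations.size, replace=True).mean()
              for _ in range(2000)]
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"IF = {impact_factor:.3f} (95% CI {lo:.2f} to {hi:.2f})")
```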

    The assessment of the quality of reporting of meta-analyses in diagnostic research: a systematic review

    Background: Over the last decade there have been a number of guidelines published, aimed at improving the quality of reporting in published studies and reviews. In systematic reviews this may be measured by their compliance with the PRISMA statement. This review aims to evaluate the quality of reporting in published meta-analyses of diagnostic tests, using the PRISMA statement and establish whether there has been a measurable improvement over time. Methods: Eight databases were searched for reviews published prior to 31st December 2008. Studies were selected if they evaluated a diagnostic test, measured performance, searched two or more databases, stated the search terms and inclusion criteria, and used a statistical method to summarise a test's performance. Data were extracted on the review characteristics and items of the PRISMA statement. To measure the change in the quality of reporting over time, PRISMA items for two periods of equal duration were compared. Results: Compliance with the PRISMA statement was generally poor: none of the reviews completely adhered to all 27 checklist items. Of the 236 meta-analyses included following selection: only 2 (1%) reported the study protocol; 59 (25%) reported the searches used; 76 (32%) reported the results of a risk of bias assessment; and 82 (35%) reported the abstract as a structured summary. Only 11 studies were published before 2000. Thus, the impact of QUOROM on the quality of reporting was not evaluated. However, the periods 2001-2004 and 2005-2008 (covering 93% of studies) were compared using relative risks (RR). There was an increase in the proportion of reviews reporting on five PRISMA items: eligibility criteria (RR 1.13, 95% CI 1.00 - 1.27); risk of bias across studies (methods) (RR 1.81, 95% CI 1.34 - 2.44); study selection results (RR 1.48, 95% CI 1.05 - 2.09); results of individual studies (RR 1.37, 95% CI 1.09 - 1.72); risk of bias across studies (results) (RR 1.65, 95% CI 1.20 - 2.25). Conclusion: Although there has been an improvement in the quality of meta-analyses in diagnostic research, there are still many deficiencies in the reporting which future reviewers need to address if readers are to trust the validity of the reported findings.
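As a worked illustration of the period comparison above, the sketch below computes a relative risk and 95% confidence interval for one PRISMA item being reported in 2005-2008 versus 2001-2004, using the standard log-RR variance formula. The counts are invented for demonstration and are not the review's data.

```python
# Relative risk of a PRISMA item being reported, later period vs. earlier period.
# Counts are hypothetical, chosen only to illustrate the calculation.
import math

a, n1 = 70, 110   # 2005-2008: reviews reporting the item, total reviews
b, n2 = 50, 109   # 2001-2004: reviews reporting the item, total reviews

rr = (a / n1) / (b / n2)
se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)    # standard error of log(RR)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```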