181 research outputs found

    Results and Outcome Reporting In ClinicalTrials.gov, What Makes it Happen?

    At the end of the last century there were multiple concerns about a lack of transparency in the conduct of clinical trials, as well as ethical and scientific issues affecting trial design and reporting. In 2000 the ClinicalTrials.gov data repository was developed and deployed to provide the public and the scientific community with valid data on clinical trials. Later, to increase the completeness of deposited data and the transparency of medical research, a set of restraints was imposed making results deposition compulsory in many cases. We investigated the efficiency of results deposition and outcome reporting, which factors had a positive impact on providing the information of interest and which made it more difficult, and whether efficiency depended on the kind of institution sponsoring a trial. Data from the ClinicalTrials.gov repository were classified by sponsor institution type, and odds ratios were calculated for results and outcome reporting by sponsor class. As of 01/01/2012, 118,602 clinical trial data deposits had been made to the repository, coming from 9,068 different sources. Of these, 35,344 (29.8%) are marked as FDA regulated and 25,151 (21.2%) as controlled by Section 801. Despite multiple regulatory requirements, only about 35% of trials had clinical study results deposited; the maximum, 55.56% of trials with results, was observed for trials completed in 2008. The imposed restraints had the most positive impact on results deposition by hospitals and clinics. Health care companies showed much higher efficiency than the other investigated classes, both in the fraction of trials with results and in providing at least one outcome for their trials. They also deposited results more often than others when not strictly required, particularly for non-interventional studies.
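The odds-ratio comparison by sponsor class described above can be sketched as follows; the counts and class labels here are hypothetical, not taken from the study:

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table with a 95% CI (Woolf log method).

    a = sponsor class of interest, results deposited
    b = sponsor class of interest, no results
    c = reference class, results deposited
    d = reference class, no results
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, lo, hi

# Hypothetical counts: health care companies vs. a reference sponsor class
or_, lo, hi = odds_ratio(400, 600, 200, 800)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An OR above 1 with a CI excluding 1 would indicate that the class of interest deposits results more often than the reference class.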

    The Database for Aggregate Analysis of ClinicalTrials.gov (AACT) and Subsequent Regrouping by Clinical Specialty

    BACKGROUND: The ClinicalTrials.gov registry provides information regarding characteristics of past, current, and planned clinical studies to patients, clinicians, and researchers; in addition, registry data are available for bulk download. However, issues related to data structure, nomenclature, and changes in data collection over time present challenges to the aggregate analysis and interpretation of these data in general and to the analysis of trials according to clinical specialty in particular. Improving usability of these data could enhance the utility of ClinicalTrials.gov as a research resource. METHODS/PRINCIPAL RESULTS: The purpose of our project was twofold. First, we sought to extend the usability of ClinicalTrials.gov for research purposes by developing a database for aggregate analysis of ClinicalTrials.gov (AACT) that contains data from the 96,346 clinical trials registered as of September 27, 2010. Second, we developed and validated a methodology for annotating studies by clinical specialty, using a custom taxonomy employing Medical Subject Heading (MeSH) terms applied by an NLM algorithm, as well as MeSH terms and other disease condition terms provided by study sponsors. Clinical specialists reviewed and annotated MeSH and non-MeSH disease condition terms, and an algorithm was created to classify studies into clinical specialties based on both MeSH and non-MeSH annotations. False positives and false negatives were evaluated by comparing algorithmic classification with manual classification for three specialties. CONCLUSIONS/SIGNIFICANCE: The resulting AACT database features study design attributes parsed into discrete fields, integrated metadata, and an integrated MeSH thesaurus, and is available for download as Oracle extracts (.dmp file and text format). This publicly-accessible dataset will facilitate analysis of studies and permit detailed characterization and analysis of the U.S. clinical trials enterprise as a whole. 
In addition, the methodology we present for creating specialty datasets may facilitate other efforts to analyze studies by specialty groups.
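The specialty-annotation step can be pictured as a lookup from condition terms to specialties; this toy sketch uses an invented taxonomy for illustration, not the AACT custom taxonomy:

```python
# Toy version of classifying studies into clinical specialties from their
# MeSH / condition terms. The mapping below is invented for illustration;
# AACT uses a much larger, clinician-curated taxonomy.
SPECIALTY_TAXONOMY = {
    "myocardial infarction": "cardiology",
    "heart failure": "cardiology",
    "breast neoplasms": "oncology",
    "diabetes mellitus, type 2": "endocrinology",
}

def classify_study(condition_terms):
    """Return the set of specialties implied by a study's condition terms."""
    return {
        SPECIALTY_TAXONOMY[t.lower()]
        for t in condition_terms
        if t.lower() in SPECIALTY_TAXONOMY
    }

# A study can map to more than one specialty; unknown terms are ignored.
print(classify_study(["Heart Failure", "Diabetes Mellitus, Type 2"]))
```

Validation against manual classification, as in the paper, would then amount to counting false positives and false negatives of `classify_study` per specialty.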

    Opinions on registering trial details: a survey of academic researchers

    Background: The World Health Organization (WHO) has established a set of items related to study design and administrative information that should constitute the minimum set of data in a study register. A more comprehensive data set for registration is currently being developed by the Ottawa Group. Since nothing is known about the attitudes of academic researchers towards prospective study registration, we surveyed academic researchers about their opinions regarding the registration of the study details proposed by the WHO and the Ottawa Group. Methods: This was a web-based survey of academic researchers currently running an investigator-initiated clinical study registered with clinicaltrials.gov. In July 2006 we contacted 1,299 principal investigators of clinical studies by e-mail, explaining the purpose of the survey and providing a link to a 52-item questionnaire based on the minimum data set proposed by the Ottawa Group. Two reminder e-mails were sent, two weeks apart. Association between willingness to disclose study details and study phase was assessed using the chi-squared test for trend. To explore the potential influence of non-response bias, we used logistic regression to assess associations between factors associated with non-response and willingness to register study details. Results: Overall response was low, as only 282/1299 (22%) principal investigators participated in the survey. Disclosing study documents, in particular the study protocol and financial agreements, was found to be most problematic, with only 31% of respondents willing to disclose these publicly. Consequently, only 34/282 (12%) agreed to disclose all details proposed by the Ottawa Group. Logistic regression indicated no association between characteristics of non-responders and willingness to disclose details. Conclusion: Principal investigators of non-industry-sponsored studies are reluctant to disclose all data items proposed by the Ottawa Group. Disclosing the study protocol and financial agreements was found to be most problematic. Future discussions on trial registration should focus not only on industry but also on academic researchers.
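The chi-squared test for trend mentioned in the Methods is commonly the Cochran-Armitage test; a self-contained sketch with hypothetical response counts (not the survey's data):

```python
import math

def trend_test(scores, totals, successes):
    """Cochran-Armitage test for trend across ordered groups.

    Returns the z statistic; z**2 is the chi-squared-for-trend
    statistic on 1 degree of freedom.
    """
    N = sum(totals)
    p = sum(successes) / N
    # Score-weighted deviation of observed successes from expectation
    T = sum(s * (r - n * p) for s, n, r in zip(scores, totals, successes))
    var = p * (1 - p) * (
        sum(s * s * n for s, n in zip(scores, totals))
        - sum(s * n for s, n in zip(scores, totals)) ** 2 / N
    )
    return T / math.sqrt(var)

# Hypothetical: willingness to disclose details by study phase I-IV
z = trend_test(scores=[1, 2, 3, 4],
               totals=[50, 60, 70, 80],
               successes=[10, 20, 35, 50])
print(f"z = {z:.2f}, chi2(1) = {z * z:.2f}")
```

A large positive z here would indicate willingness increasing with study phase.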

    Completeness and Changes in Registered Data and Reporting Bias of Randomized Controlled Trials in ICMJE Journals after Trial Registration Policy

    We assessed the adequacy of randomized controlled trial (RCT) registration, changes to registration data, and reporting completeness for articles in ICMJE journals during the 2.5 years after the registration requirement policy took effect. For a set of 149 reports of 152 RCTs with a ClinicalTrials.gov registration number, published from September 2005 to April 2008, we evaluated the completeness of 9 items from the WHO 20-item Minimum Data Set relevant for assessing trial quality. We also assessed changes to the registration elements at the Archive site of ClinicalTrials.gov and compared published and registry data. RCTs were mostly registered before the 13 September 2005 deadline (n = 101, 66.4%); 118 (77.6%) started recruitment before and 31 (20.4%) after registration. At the time of registration, the 152 RCTs had a total of 224 missing registry fields, most commonly 'Key secondary outcomes' (44.1% of RCTs) and 'Primary outcome' (38.8%). RCTs with post-registration recruitment more often had missing Minimum Data Set items than RCTs with pre-registration recruitment: 24/31 (77.4%) vs. 57/118 (48.3%) (χ²(1) = 7.255, P = 0.007). Major changes in the data entries were found for 31 (25.2%) RCTs. The number of RCTs with differences between registered and published data ranged from 21 (13.8%) for Study type to 118 (77.6%) for Target sample size. ICMJE journals published RCTs with proper registration, but the registration data were often inadequate, underwent substantial changes in the registry over time, and differed between registered and published data. Editors need to establish quality-control procedures in their journals so that they continue to contribute to the increased transparency of clinical trials.
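The reported chi-squared statistic is reproducible from the quoted counts (57 of 118 pre-registration vs. 24 of 31 post-registration) as a Yates-corrected chi-squared on the 2x2 table; a quick check:

```python
def yates_chi2(a, b, c, d):
    """Chi-squared with Yates' continuity correction for a 2x2 table."""
    n = a + b + c + d
    num = n * (abs(a * d - b * c) - n / 2) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Rows: pre-registration (57 missing, 61 not) vs.
#       post-registration (24 missing, 7 not) recruitment
chi2 = yates_chi2(57, 118 - 57, 24, 31 - 24)
print(f"chi2(1) = {chi2:.3f}")  # matches the reported 7.255
```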

    Trial Registration for Public Trust: Making the Case for Medical Devices

    Recently, several pharmaceutical companies have been shown to have withheld negative clinical trial results from the public. These incidents have resulted in a concerted global effort to register all trials at inception, so that all subsequent results can be tracked regardless of whether they are positive or negative. These trial registration policies have been driven in large part by concern about the pharmaceutical sector. The medical device industry is much smaller, and differs from the pharmaceutical industry in some fundamental ways. This paper examines the issues surrounding registration of device trials and argues that these differences from pharmaceuticals should not exempt device trials from registration.

    Registro dos ensaios clínicos [Registration of clinical trials]


    Are citations from clinical trials evidence of higher impact research? An analysis of ClinicalTrials.gov

    An important way in which medical research can translate into improved health outcomes is by motivating or influencing clinical trials that eventually lead to changes in clinical practice. Citations from clinical trial records to academic research may therefore serve as an early warning of the likely future influence of the cited articles. This paper partially assesses this hypothesis by testing whether prior articles referenced in ClinicalTrials.gov records are more highly cited than average for the publishing journal. The results from four high-profile general medical journals support the hypothesis, although there may not be a cause-and-effect relationship. Nevertheless, it is reasonable for researchers to use citations to their work from clinical trial records as partial evidence of the possible long-term impact of their research.

    Deficiencies in the transfer and availability of clinical trials evidence: A review of existing systems and standards

    Background: Decisions concerning drug safety and efficacy are generally based on pivotal evidence provided by clinical trials. Unfortunately, finding the relevant clinical trials is difficult and their results are only available in text-based reports. Systematic reviews aim to provide a comprehensive overview of the evidence in a specific area, but may not provide the data required for decision making. Methods: We review and analyze the existing information systems and standards for aggregate level clinical trials information from the perspective of systematic review and evidence-based decision making. Results: The technology currently used has major shortcomings, which cause deficiencies in the transfer, traceability and availability of clinical trials information. Specifically, data available to decision makers is insufficiently structured, and consequently the decisions cannot be properly traced back to the underlying evidence. Regulatory submission, trial publication, trial registration, and systematic review produce unstructured datasets that are insufficient for supporting evidence-based decision making. Conclusions: The current situation is a hindrance to policy decision makers as it prevents fully transparent decision making and the development of more advanced decision support systems. Addressing the identified deficiencies would enable more efficient, informed, and transparent evidence-based medical decision making

    Comparative Effectiveness Research: An Empirical Study of Trials Registered in ClinicalTrials.gov

    Background The $1.1 billion investment in comparative effectiveness research will reshape the evidence base supporting decisions about treatment effectiveness, safety, and cost. Defining the current prevalence and characteristics of comparative effectiveness (CE) research will enable future assessments of the impact of this program. Methods We conducted an observational study of clinical trials addressing priority research topics defined by the Institute of Medicine and conducted in the US between 2007 and 2010. Trials were identified in ClinicalTrials.gov. Main outcome measures were the prevalence of comparative effectiveness research, the nature of the comparators selected, funding sources, and the impact of these factors on results. Results 231 (22.3%; 95% CI, 19.8%–24.9%) studies were CE studies and 804 (77.7%; 95% CI, 75.1%–80.2%) were non-CE studies, with 379 (36.6%; 95% CI, 33.7%–39.6%) employing a placebo control and 425 (41.1%; 95% CI, 38.1%–44.1%) no control. The most common treatments examined in CE studies were drug interventions (37.2%), behavioral interventions (28.6%), and procedures (15.6%). Study findings were favorable for the experimental treatment in 34.8% of CE studies and in more than twice as many (78.6%) non-CE studies (P<0.001). CE studies were more likely to receive government funding (P = 0.003) and less likely to receive industry funding (P = 0.01), with 71.8% of CE studies primarily funded by a noncommercial source. The types of interventions studied differed by funding source, with 95.4% of industry trials studying a drug or device. In addition, industry-funded CE studies were associated with the fewest pediatric subjects (P<0.001), the largest anticipated sample size (P<0.001), and the shortest study duration (P<0.001). Conclusions In this sample of studies examining high-priority areas for CE research, less than a quarter are CE studies, and the majority are supported by government and nonprofits. The low prevalence of CE research holds across CE studies with a broad array of interventions and characteristics. National Library of Medicine (U.S.) (5G08LM009778); National Institutes of Health (U.S.)
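The reported confidence intervals are consistent with a normal (Wald) approximation for a binomial proportion; for example, the 231 CE studies out of 1,035 total:

```python
import math

def wald_ci(k, n, z=1.96):
    """95% normal-approximation CI for a binomial proportion, in percent."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return 100 * p, 100 * (p - half), 100 * (p + half)

p, lo, hi = wald_ci(231, 231 + 804)
print(f"{p:.1f}% (95% CI {lo:.1f}%-{hi:.1f}%)")  # 22.3% (95% CI 19.8%-24.9%)
```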