
    The Use of Routinely Collected Data in Clinical Trial Research

    RCTs are the gold standard for assessing the effects of medical interventions, but they also pose many challenges, including the often high cost of conducting them and a potential lack of generalizability of their findings. The recent increase in the availability of so-called routinely collected data (RCD) sources has led to great interest in their application to support RCTs and make clinical trials more efficient. We define all RCTs augmented by RCD in any form as RCD-RCTs. A major subset of RCD-RCTs are performed at the point of care using electronic health records (EHRs) and are referred to as point-of-care research (POC-R). RCD-RCTs offer several advantages over traditional trials in patient recruitment, data collection, and beyond. Highly standardized EHR and registry data allow patient characteristics to be assessed for trial eligibility and treatment effects to be examined through routinely collected endpoints or through linkage to other data sources such as mortality registries. Thus, RCD can augment traditional RCTs by providing a sampling framework for patient recruitment and by directly measuring patient-relevant outcomes. The result of these efforts is the generation of real-world evidence (RWE). Nevertheless, the use of RCD in clinical research brings novel methodological challenges, and frequently discussed issues of data quality need to be considered in RCD-RCTs. Limitations surrounding RCD use in RCTs relate to data quality, data availability, ethical and informed consent challenges, and lack of endpoint adjudication, all of which may lead to uncertainty about the validity of their results. The purpose of this thesis is to help fill the aforementioned research gaps in RCD-RCTs, encompassing tasks such as assessing their current application in clinical research and evaluating the methodological and technical challenges of performing them. Furthermore, it aims to assess the reporting quality of published reports on RCD-RCTs.

    Machine learning clinical decision support systems for surveillance: a case study on pertussis and RSV in children

    We tested the performance of a machine learning (ML) algorithm based on signs and symptoms for the diagnosis of RSV infection or pertussis in the first year of life, to support clinical decisions and provide timely data for public health surveillance. We used data from a retrospective case series of children in the first year of life investigated for acute respiratory infections in the emergency room from 2015 to 2020. We collected data from PCR laboratory tests confirming pertussis or RSV infection, clinical symptoms, and routine blood testing results, which were used for algorithm development. We used a LightGBM model to develop two sets of models for predicting pertussis and RSV infection: for each type of infection, we developed one model trained on the combination of clinical symptoms and routine blood test results (white blood cell count, lymphocyte fraction, and C-reactive protein), and one trained on symptoms only. All analyses were performed in Python 3.7.4, with Shapley additive explanations (SHAP) values used to visualize predictor contributions. The performance of the models was assessed through confusion matrices. The models were developed on a dataset of 599 children. The recall for the pertussis model combining symptoms and routine laboratory tests was 0.72, and 0.74 with clinical symptoms only. For RSV infection, recall was 0.68 with clinical symptoms and laboratory tests and 0.71 with clinical symptoms only. The F1 score was 0.72 for both pertussis models and, for RSV infection, 0.69 and 0.75, respectively. ML models can support the diagnosis and surveillance of infectious diseases such as pertussis or RSV infection in children based on common symptoms and laboratory tests. ML-based clinical decision support systems may be developed in the future in large networks to create accurate tools for clinical support and public health surveillance.
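
    The modelling workflow described above can be illustrated with a minimal sketch, assuming a tabular dataset with one row per child; the file name, feature columns, and outcome label are hypothetical placeholders rather than the authors' actual variables. Dropping the three laboratory columns from the feature list yields the symptoms-only variant.

        # Minimal sketch of the described approach; file and column names are assumed.
        import lightgbm as lgb
        import pandas as pd
        import shap
        from sklearn.metrics import confusion_matrix, f1_score, recall_score
        from sklearn.model_selection import train_test_split

        df = pd.read_csv("respiratory_cases.csv")  # hypothetical extraction of the case series
        features = ["cough", "apnoea", "fever", "wbc_count", "lymphocyte_fraction", "crp"]
        X, y = df[features], df["pertussis_pcr_positive"]  # repeat with an RSV label for the second model

        X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

        model = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
        model.fit(X_train, y_train)

        pred = model.predict(X_test)
        print(confusion_matrix(y_test, pred))
        print("recall:", recall_score(y_test, pred), "F1:", f1_score(y_test, pred))

        # SHAP values indicate which symptoms and laboratory results drive each prediction.
        explainer = shap.TreeExplainer(model)
        shap.summary_plot(explainer.shap_values(X_test), X_test)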

    Reporting transparency and completeness in trials: Paper 3 – trials conducted using administrative databases do not adequately report elements related to use of databases

    Acknowledgments: The development of CONSORT-ROUTINE and the present review were funded by grants from the Canadian Institutes of Health Research (PI Thombs, #PJT-156172; PIs Thombs and Kwakkenbos, #PCS-161863) and from the United Kingdom National Institute of Health Research (NIHR) Clinical Trials Unit Support Funding (PI Juszczak, Co-PI Gale, supported salary of SM). The views expressed are those of the author(s) and not necessarily those of the NIHR or the Department of Health and Social Care. Dr. Langan was supported by a Wellcome Senior Clinical Fellowship in Science (205039/Z/16/Z). Dr. Moher is supported by a University Research Chair (uOttawa). Dr. Gale was supported by the United Kingdom Medical Research Council through a Clinician Scientist Fellowship. Dr. Thombs was supported by a Tier 1 Canada Research Chair.

    Nonregistration, discontinuation, and nonpublication of randomized trials: A repeated metaresearch analysis

    BACKGROUND: We previously found that 25% of 1,017 randomized clinical trials (RCTs) approved between 2000 and 2003 were discontinued prematurely, and 44% remained unpublished at a median of 12 years of follow-up. We aimed to assess a decade later (1) whether rates of completion and publication have increased; (2) the extent to which nonpublished RCTs can be identified in trial registries; and (3) the association between reporting quality of protocols and premature discontinuation or nonpublication of RCTs.
    METHODS AND FINDINGS: We included 326 RCT protocols approved in 2012 by research ethics committees in Switzerland, the United Kingdom, Germany, and Canada in this metaresearch study. Pilot, feasibility, and phase 1 studies were excluded. We extracted trial characteristics from each study protocol and systematically searched for corresponding trial registrations (if not reported in the protocol) and full text publications until February 2022. For trial registrations, we searched (i) the World Health Organization International Clinical Trials Registry Platform (ICTRP); (ii) the US National Library of Medicine (ClinicalTrials.gov); (iii) the European Union Drug Regulating Authorities Clinical Trials Database (EUCTR); (iv) the ISRCTN registry; and (v) Google. For full text publications, we searched PubMed, Google Scholar, and Scopus. We recorded whether RCTs were registered, discontinued (including the reason for discontinuation), and published. The reporting quality of RCT protocols was assessed with the 33-item SPIRIT checklist. We used multivariable logistic regression to examine the association between the independent variables protocol reporting quality, planned sample size, type of control (placebo versus other), reporting of any recruitment projection, single-center versus multicenter trials, and industry versus investigator sponsoring, and the two dependent variables: (1) publication of RCT results; and (2) trial discontinuation due to poor recruitment. Of the 326 included trials, 19 (6%) were unregistered. Ninety-eight trials (30%) were discontinued prematurely, most often due to poor recruitment (37%; 36/98). One in five trials (21%; 70/326) remained unpublished at 10 years of follow-up, and 21% of unpublished trials (15/70) were unregistered. Twenty-three of 147 investigator-sponsored trials (16%) reported their results in a trial registry, in contrast to 150 of 179 industry-sponsored trials (84%). The median proportion of reported SPIRIT items in included RCT protocols was 69% (interquartile range 61% to 77%). We found no variables associated with trial discontinuation; however, lower reporting quality of trial protocols was associated with nonpublication (odds ratio 0.71 for each 10% increment in the proportion of SPIRIT items met; 95% confidence interval 0.55 to 0.92; p = 0.009). Study limitations include that the moderate sample size may have limited the ability of our regression models to identify significant associations.
    CONCLUSIONS: We have observed that rates of premature trial discontinuation have not changed in the past decade. Nonpublication of RCTs has declined but remains common; 21% of unpublished trials could not be identified in registries. Only 16% of investigator-sponsored trials reported results in a trial registry. Higher reporting quality of RCT protocols was associated with publication of results. Further efforts from all stakeholders are needed to improve the efficiency and transparency of clinical research.
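
    The multivariable logistic regression described above can be sketched briefly; the extraction file, variable names, and coding below are assumptions for illustration and not the authors' analysis code. Expressing SPIRIT adherence in 10% increments makes the fitted odds ratio directly comparable to the reported 0.71 per 10% of items met.

        # Illustrative sketch only; file, variable names, and coding are assumed.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        trials = pd.read_csv("rct_protocols_2012.csv")  # hypothetical extraction sheet
        # Express SPIRIT adherence in 10% increments so the odds ratio matches the reported scale.
        trials["spirit_per_10pct"] = trials["spirit_items_pct"] / 10

        model = smf.logit(
            "published ~ spirit_per_10pct + np.log(planned_sample_size) + placebo_control"
            " + recruitment_projection_reported + multicenter + industry_sponsor",
            data=trials,
        ).fit()

        odds_ratios = pd.DataFrame({
            "OR": np.exp(model.params),
            "CI_low": np.exp(model.conf_int()[0]),
            "CI_high": np.exp(model.conf_int()[1]),
        })
        print(odds_ratios)  # e.g. OR per 10% more SPIRIT items met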

    Routinely collected data for randomized trials: promises, barriers, and implications

    This work was supported by Stiftung Institut für klinische Epidemiologie. The Meta-Research Innovation Center at Stanford University is funded by a grant from the Laura and John Arnold Foundation. The funders had no role in design and conduct of the study; the collection, management, analysis, or interpretation of the data; or the preparation, review, or approval of the manuscript or its submission for publication.

    Nonrandomized studies using causal-modeling may give different answers than RCTs: a meta-epidemiological study

    Objectives: To evaluate how estimated treatment effects agree between nonrandomized studies using causal modeling with marginal structural models (MSM-studies) and randomized trials (RCTs). Study design: Meta-epidemiological study. Methods: MSM-studies providing effect estimates on any healthcare outcome of any treatment were eligible. We systematically sought RCTs on the same clinical question and compared the direction of treatment effects, effect sizes, and confidence intervals. Results: The main analysis included 19 MSM-studies (1,039,570 patients) and 141 RCTs (120,669 patients). MSM-studies indicated effect estimates in the opposite direction from RCTs for eight clinical questions (42%), and their 95% confidence interval (CI) did not include the RCT estimate for nine clinical questions (47%). The effect estimates deviated 1.58-fold between the study designs (median absolute deviation of odds ratios 1.58; interquartile range 1.37 to 2.16). Overall, we found no systematic disagreement regarding benefit or harm, but confidence intervals were wide (summary ratio of odds ratios [sROR] 1.04; 95% CI 0.88 to 1.23). The subset of MSM-studies focusing on healthcare decision-making tended to overestimate experimental treatment benefits (sROR 1.44; 95% CI 0.99 to 2.09). Conclusions: Nonrandomized studies using causal modeling with MSM may give different answers than RCTs. Caution is still required when nonrandomized "real world" evidence is used for healthcare decisions.
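
    A short sketch of how the three agreement measures reported above (direction of effect, confidence interval coverage, and fold deviation between designs) could be computed from a table with one row per clinical question; the input file and column names are assumptions, not the authors' data.

        # Illustrative agreement metrics; input file and column names are assumed.
        import numpy as np
        import pandas as pd

        pairs = pd.read_csv("msm_vs_rct.csv")
        # Assumed columns: or_msm, ci_low_msm, ci_high_msm (MSM-study estimate and 95% CI),
        # and or_rct (summary odds ratio from the matching RCTs).

        # 1. Direction of effect: does the MSM estimate point the opposite way from the RCTs?
        opposite = np.sign(np.log(pairs["or_msm"])) != np.sign(np.log(pairs["or_rct"]))
        print("opposite direction:", opposite.mean())

        # 2. Does the MSM 95% CI exclude the RCT estimate?
        excludes_rct = (pairs["or_rct"] < pairs["ci_low_msm"]) | (pairs["or_rct"] > pairs["ci_high_msm"])
        print("CI excludes RCT estimate:", excludes_rct.mean())

        # 3. Fold deviation between designs, irrespective of direction (cf. the 1.58-fold figure).
        fold_dev = np.exp(np.abs(np.log(pairs["or_msm"] / pairs["or_rct"])))
        print("median absolute deviation (fold):", fold_dev.median(),
              "IQR:", fold_dev.quantile([0.25, 0.75]).tolist())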

    Treatment effects in randomised trials using routinely collected data for outcome assessment versus traditional trials: meta-research study

    Objective: To compare effect estimates of randomised clinical trials that use routinely collected data (RCD-RCTs) for outcome ascertainment with those of traditional trials not using routinely collected data. Design: Meta-research study. Setting: Studies included in the same meta-analysis in a Cochrane review. Eligibility criteria: Randomised clinical trials using any type of routinely collected data for outcome ascertainment, including registries, electronic health records, and administrative databases, that were included in a meta-analysis of a Cochrane review on any clinical question and any health outcome together with traditional trials not using routinely collected data for outcome measurement. Main outcome measures: Effect estimates from trials using or not using routinely collected data were summarised in random effects meta-analyses. Agreement of (summary) treatment effect estimates from trials using routinely collected data and those not using such data was expressed as the ratio of odds ratios. Subgroup analyses explored effects in trials based on different types of routinely collected data. Two investigators independently assessed the quality of each data source. Results: 84 RCD-RCTs and 463 traditional trials on 22 clinical questions were included. Trials using routinely collected data for outcome ascertainment showed 20% less favourable treatment effect estimates than traditional trials (ratio of odds ratios 0.80, 95% confidence interval 0.70 to 0.91, I²=14%). Results were similar across types of outcomes (mortality outcomes: 0.92, 0.74 to 1.15, I²=12%; non-mortality outcomes: 0.71, 0.60 to 0.84, I²=8%), data sources (electronic health records: 0.81, 0.59 to 1.11, I²=28%; registries: 0.86, 0.75 to 0.99, I²=20%; administrative data: 0.84, 0.72 to 0.99, I²=0%), and data quality (high data quality: 0.82, 0.72 to 0.93, I²=0%). Conclusions: Randomised clinical trials using routinely collected data for outcome ascertainment show smaller treatment benefits than traditional trials not using routinely collected data. These differences could have implications for healthcare decision making and the application of real world evidence.
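
    The comparison described above can be sketched as a two-stage analysis: within each clinical question, pool RCD-RCTs and traditional trials separately with a random effects (DerSimonian-Laird) model, form the ratio of the two summary odds ratios, and then pool these ratios across questions. The data layout and column names below are assumptions for illustration, not the authors' code.

        # Two-stage ratio-of-odds-ratios sketch; data layout and column names are assumed.
        import numpy as np
        import pandas as pd

        def pool_random_effects(log_or, se):
            # DerSimonian-Laird random-effects pooled log odds ratio with I-squared.
            w = 1.0 / se**2
            mu_fixed = np.sum(w * log_or) / np.sum(w)
            q = np.sum(w * (log_or - mu_fixed) ** 2)
            dof = len(log_or) - 1
            tau2 = 0.0
            if dof > 0 and q > dof:
                tau2 = (q - dof) / (np.sum(w) - np.sum(w**2) / np.sum(w))
            w_re = 1.0 / (se**2 + tau2)
            mu = np.sum(w_re * log_or) / np.sum(w_re)
            se_mu = np.sqrt(1.0 / np.sum(w_re))
            i2 = 100.0 * (q - dof) / q if q > dof else 0.0
            return mu, se_mu, i2

        # Assumed columns: question_id, uses_rcd (bool), log_or, se (standard error of log OR).
        trials = pd.read_csv("cochrane_meta_analyses.csv")
        rors = []
        for _, group in trials.groupby("question_id"):
            rcd = group[group["uses_rcd"]]
            trad = group[~group["uses_rcd"]]
            mu_rcd, se_rcd, _ = pool_random_effects(rcd["log_or"].to_numpy(), rcd["se"].to_numpy())
            mu_trad, se_trad, _ = pool_random_effects(trad["log_or"].to_numpy(), trad["se"].to_numpy())
            # Ratio of odds ratios: RCD-RCT summary OR divided by the traditional-trial summary OR.
            rors.append({"log_ror": mu_rcd - mu_trad, "se": np.sqrt(se_rcd**2 + se_trad**2)})

        rors = pd.DataFrame(rors)
        mu, se_mu, i2 = pool_random_effects(rors["log_ror"].to_numpy(), rors["se"].to_numpy())
        print(f"summary ROR {np.exp(mu):.2f} "
              f"(95% CI {np.exp(mu - 1.96 * se_mu):.2f} to {np.exp(mu + 1.96 * se_mu):.2f}), I2 = {i2:.0f}%")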

    Current use and costs of electronic health records for clinical trial research: a descriptive study

    Background: Electronic health records (EHRs) may support randomized controlled trials (RCTs). We aimed to describe the current use and costs of EHRs in RCTs, with a focus on recruitment and outcome assessment. Methods: This descriptive study was based on a PubMed search of RCTs published since 2000 that evaluated any medical intervention with the use of EHRs. Cost information was obtained from RCT investigators who used EHR infrastructures for recruitment or outcome measurement but did not explore EHR technology itself. Results: We identified 189 RCTs, most of which (153 [81.0%]) were carried out in North America and were published recently (median year 2012 [interquartile range 2009-2014]). Seventeen RCTs (9.0%) involving a median of 732 patients (interquartile range 73-2513) explored interventions not related to EHRs, including quality improvement, screening programs, and collaborative care and disease management interventions. In these trials, EHRs were used for recruitment (14 [82%]) and outcome measurement (15 [88%]). Overall, in most of the trials (158 [83.6%]), the outcome (including many of the most patient-relevant clinical outcomes, from unscheduled hospital admission to death) was measured with the use of EHRs. The per-patient cost in the 17 EHR-supported trials varied from US$44 to US$2000, and total RCT costs from US$67 750 to US$5 026 000. In the remaining 172 RCTs (91.0%), EHRs were used as a modality of intervention. Conclusions: Randomized controlled trials are frequently and increasingly conducted with the use of EHRs, but mainly as part of the intervention. In some trials, EHRs were used successfully to support recruitment and outcome assessment. Costs may be reduced once the data infrastructure is established.