
    Variation in methods, results and reporting in electronic health record-based studies evaluating routine care in gout: A systematic review

    Objective: To perform a systematic review examining the variation in methods, results, reporting and risk of bias in electronic health record (EHR)-based studies evaluating management of a common musculoskeletal disease, gout. Methods: Two reviewers systematically searched MEDLINE, Scopus, Web of Science, CINAHL, PubMed, EMBASE and Google Scholar for all EHR-based studies published by February 2019 investigating gout pharmacological treatment. Information was extracted on study design, eligibility criteria, definitions, medication usage, effectiveness and safety data, comprehensiveness of reporting (RECORD), and Cochrane risk of bias (registered PROSPERO CRD42017065195). Results: We screened 5,603 titles/abstracts and 613 full texts, and selected 75 studies including 1.9 million gout patients. Gout diagnosis was defined in 26 ways across the studies, most commonly using a single diagnostic code (n = 31, 41.3%); 48.4% did not specify a disease-free period before ‘incident’ diagnosis. Medication use was suboptimal and varied with disease definition, while results regarding effectiveness and safety were broadly similar across studies despite variability in inclusion criteria. Comprehensiveness of reporting was variable, ranging from 73% (55/75) of studies appropriately discussing the limitations of EHR data use to 5% (4/75) reporting on key data cleaning steps. Risk of bias was generally low. Conclusion: The wide variation in case definitions and medication-related analysis among EHR-based studies has implications for reported medication use. This is amplified by variable reporting comprehensiveness and the limited consideration of EHR-relevant biases (e.g. data adequacy) in study assessment tools. We recommend accounting for these biases and performing sensitivity analyses on case definitions, and suggest changes to assessment tools to foster this.
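    The case-definition sensitivity analysis recommended above can be sketched as follows. The record layout, field names, and the three definitions are hypothetical illustrations of the idea (e.g. a single code vs. multiple codes vs. code plus urate-lowering therapy), not definitions taken from the review.

```python
# Hypothetical patient records: count of gout diagnostic codes and whether
# urate-lowering therapy (ULT) was ever prescribed.
patients = [
    {"id": 1, "gout_codes": 1, "ult_prescribed": False},
    {"id": 2, "gout_codes": 2, "ult_prescribed": True},
    {"id": 3, "gout_codes": 1, "ult_prescribed": True},
    {"id": 4, "gout_codes": 3, "ult_prescribed": False},
]

# Alternative case definitions; rerunning the same analysis under each one
# shows how sensitive cohort size (and downstream estimates) is to the choice.
definitions = {
    "single diagnostic code": lambda p: p["gout_codes"] >= 1,
    ">=2 diagnostic codes": lambda p: p["gout_codes"] >= 2,
    "code + urate-lowering therapy": lambda p: p["gout_codes"] >= 1 and p["ult_prescribed"],
}

for name, rule in definitions.items():
    cohort = [p["id"] for p in patients if rule(p)]
    print(f"{name}: n={len(cohort)}")
```

    In a real study, each definition would feed the same downstream medication-use analysis, and the spread of results across definitions would be reported alongside the primary estimate.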

    The feasibility of using electronic health records to inform clinical decision making for community-onset urinary tract infection in England

    Urinary tract infections (UTIs) are a major source of morbidity, yet differentiating UTI from other conditions and choosing the right treatment remains challenging. Using case studies from English primary and secondary care, this thesis investigates the potential use of electronic health records (EHRs) - i.e., data recorded as part of routine care - to aid the diagnosis and management of community-onset UTI. I start by introducing sources of uncertainty in diagnosing UTI (Chapter 1) and review how EHRs have previously been used to study UTIs (Chapter 2). In Chapter 3, I discuss EHR sources available to study UTIs in England. In Chapter 4, I explore how EHRs from primary care can be used to guide antibiotic prescribing for UTI by evaluating the harms of delaying treatment in key patient groups. In Chapters 5 and 6, I explore the use of EHR data as a diagnostic tool to guide antibiotic de-escalation in patients with suspected UTI in the emergency department (ED). Cases of community-onset UTI could be identified in both primary and secondary care data, but case definitions relied heavily on coarse diagnostic codes. A lack of information on patients' acute health status, clinical observations (e.g., urine dipstick tests), and reasons for antibiotic prescribing resulted in heterogeneous study cohorts, which likely confounded estimated effects of antibiotic treatment in primary care. In secondary care, early prediction of bacteriuria to guide antibiotic prescribing decisions in the ED proved promising, but model performance varied greatly by patient mix and variable definitions. Better recording of clinical information and a combination of retrospective EHR analysis with prospective cohorts and qualitative approaches will be required to derive actionable insights on UTI. Results based solely on currently available EHR data need to be interpreted carefully.

    The Use of Routinely Collected Data in Clinical Trial Research

    RCTs are the gold standard for assessing the effects of medical interventions, but they also pose many challenges, including the often-high costs of conducting them and a potential lack of generalizability of their findings. The recent increase in the availability of so-called routinely collected data (RCD) sources has led to great interest in their application to support RCTs in an effort to increase the efficiency of conducting clinical trials. We define all RCTs augmented by RCD in any form as RCD-RCTs. A major subset of RCD-RCTs are performed at the point of care using electronic health records (EHRs) and are referred to as point-of-care research (POC-R). RCD-RCTs offer several advantages over traditional trials regarding patient recruitment, data collection, and beyond. Using highly standardized EHR and registry data allows researchers to assess patient characteristics for trial eligibility and to examine treatment effects through routinely collected endpoints or by linkage to other data sources such as mortality registries. Thus, RCD can be used to augment traditional RCTs by providing a sampling framework for patient recruitment and by directly measuring patient-relevant outcomes. The result of these efforts is the generation of real-world evidence (RWE). Nevertheless, the utilization of RCD in clinical research brings novel methodological challenges, and frequently discussed issues related to data quality need to be considered for RCD-RCTs. Some of the limitations surrounding RCD use in RCTs relate to data quality, data availability, ethical and informed-consent challenges, and lack of endpoint adjudication, which may all lead to uncertainties in the validity of their results. The purpose of this thesis is to help fill the aforementioned research gaps in RCD-RCTs, encompassing tasks such as assessing their current application in clinical research and evaluating the methodological and technical challenges in performing them. Furthermore, it aims to assess the reporting quality of published reports on RCD-RCTs.

    Approaches for combining primary care electronic health record data from multiple sources: a systematic review of observational studies

    OBJECTIVE: To identify observational studies which used data from more than one primary care electronic health record (EHR) database, and summarise key characteristics including: objective and rationale for using multiple data sources; methods used to manage, analyse and (where applicable) combine data; and approaches used to assess and report heterogeneity between data sources. DESIGN: A systematic review of published studies. DATA SOURCES: PubMed and Embase databases were searched using a list of named primary care EHR databases, supplemented by hand searches of the reference lists of studies retained after initial screening. STUDY SELECTION: Observational studies published between January 2000 and May 2018 were selected, which included at least two different primary care EHR databases. RESULTS: 6,054 studies were identified from database and hand searches, and 109 were included in the final review, the majority published between 2014 and 2018. Included studies used 38 different primary care EHR data sources. Forty-seven studies (44%) were descriptive or methodological. Of 62 analytical studies, 22 (36%) presented separate results from each database, with no attempt to combine them; 29 (48%) combined individual patient data in a one-stage meta-analysis; and 21 (34%) combined estimates from each database using two-stage meta-analysis (some studies used more than one approach). Discussion and exploration of heterogeneity was inconsistent across studies. CONCLUSIONS: Comparing patterns and trends in different populations, or in different primary care EHR databases from the same populations, is important and a common objective for multi-database studies. When combining results from several databases using meta-analysis, provision of separate results from each database is helpful for interpretation. We found that these were often missing, particularly for studies using one-stage approaches, which also often lacked details of any statistical adjustment for heterogeneity and/or clustering. For two-stage meta-analysis, a clear rationale should be provided for the choice of fixed-effect, random-effects or other models.
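    The two-stage approach discussed above can be sketched in a few lines: each database contributes an estimate and a standard error, which are then pooled by inverse-variance weighting (fixed-effect) and, allowing for between-database heterogeneity, by the DerSimonian-Laird random-effects method. The numbers below are illustrative, not taken from any included study.

```python
import math

def two_stage_meta(est, se):
    """Pool per-database estimates (e.g. log hazard ratios) with standard errors:
    returns fixed-effect and DerSimonian-Laird random-effects pooled values."""
    w = [1 / s**2 for s in se]                         # inverse-variance weights
    fixed = sum(wi * e for wi, e in zip(w, est)) / sum(w)
    # Cochran's Q and the DerSimonian-Laird between-database variance tau^2
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, est))
    df = len(est) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    w_re = [1 / (s**2 + tau2) for s in se]             # random-effects weights
    random_eff = sum(wi * e for wi, e in zip(w_re, est)) / sum(w_re)
    se_re = math.sqrt(1 / sum(w_re))
    return fixed, random_eff, tau2, se_re

# Hypothetical per-database estimates (four databases)
fixed, random_eff, tau2, se_re = two_stage_meta(
    [0.25, 0.40, 0.10, 0.32], [0.10, 0.15, 0.12, 0.20])
print(f"fixed-effect: {fixed:.3f}  random-effects: {random_eff:.3f}  tau^2: {tau2:.3f}")
```

    Reporting the per-database inputs alongside the pooled value, as the review recommends, makes it possible to check how much any single database drives the combined estimate.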

    Machine-Learning Model for Mortality Prediction in Patients With Community-Acquired Pneumonia: Development and Validation Study

    Background: Artificial intelligence tools and techniques such as machine learning (ML) are increasingly seen as a suitable means of increasing the predictive capacity of currently available clinical tools, including prognostic scores. However, studies evaluating the efficacy of ML methods in enhancing the predictive capacity of existing scores for community-acquired pneumonia (CAP) are limited. We aimed to apply and validate a causal probabilistic network (CPN) model to predict mortality in patients with CAP. Research question: Is a CPN model able to predict mortality in patients with CAP better than the commonly used severity scores? Study design and methods: This was a derivation-validation retrospective study conducted in two Spanish university hospitals. The ability of a CPN designed to predict mortality in sepsis (SepsisFinder [SeF]), and adapted for CAP (SeF-ML), to predict 30-day mortality was assessed and compared with other scoring systems (Pneumonia Severity Index [PSI], Sequential Organ Failure Assessment [SOFA], quick Sequential Organ Failure Assessment [qSOFA], and CURB-65 criteria [confusion, urea, respiratory rate, BP, age ≥ 65 years]). The SeF models are proprietary software. Differences between receiver operating characteristic curves were assessed by the DeLong method for correlated receiver operating characteristic curves. Results: The derivation cohort comprised 4,531 patients, and the validation cohort consisted of 1,034 patients. In the derivation cohort, the areas under the curve (AUCs) of SeF-ML, CURB-65, SOFA, PSI, and qSOFA were 0.801, 0.759, 0.671, 0.799, and 0.642, respectively, for 30-day mortality prediction. In the validation study, the AUC of SeF-ML was 0.826, concordant with the AUC (0.801) in the derivation data (P = .51). The AUC of SeF-ML was significantly higher than those of CURB-65 (0.764; P = .03) and qSOFA (0.729; P = .005). However, it did not differ significantly from those of PSI (0.830; P = .92) and SOFA (0.771; P = .14). Interpretation: SeF-ML shows potential for improving mortality prediction among patients with CAP, using structured health data. Additional external validation studies should be conducted to support generalizability.
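    The comparison of AUCs above relies on the DeLong test for paired ROC curves; that test is not reimplemented here, but the underlying quantities can be sketched with stdlib Python: the AUROC as a Mann-Whitney statistic, and a paired bootstrap confidence interval for the difference between two models' AUROCs as a simpler alternative. All data below are toy values, not the study's.

```python
import random

def auroc(labels, scores):
    """AUROC via the Mann-Whitney U statistic (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_diff(labels, scores_a, scores_b, n_boot=500, seed=0):
    """95% bootstrap CI for the paired AUROC difference of two models
    scored on the same patients (illustrative stand-in for the DeLong test)."""
    rng = random.Random(seed)
    n = len(labels)
    diffs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        yb = [labels[i] for i in idx]
        if len(set(yb)) < 2:            # resample must contain both classes
            continue
        a = auroc(yb, [scores_a[i] for i in idx])
        b = auroc(yb, [scores_b[i] for i in idx])
        diffs.append(a - b)
    diffs.sort()
    return diffs[int(0.025 * len(diffs))], diffs[int(0.975 * len(diffs))]

# Toy cohort: 30 deaths, 70 survivors, with two competing risk scores
random.seed(1)
y = [1] * 30 + [0] * 70
model = [0.6 + 0.3 * random.random() if yi else 0.2 + 0.4 * random.random() for yi in y]
score = [0.5 * random.random() + (0.25 if yi else 0.0) for yi in y]
print("AUROC model:", round(auroc(y, model), 3), " AUROC score:", round(auroc(y, score), 3))
print("95% CI for AUROC difference:", bootstrap_auc_diff(y, model, score))
```

    If the bootstrap interval excludes zero, the two models' discrimination differs at roughly the 5% level; the DeLong method reaches a similar conclusion analytically by estimating the covariance of the paired AUC estimates.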

    Real-Time Electronic Health Record Mortality Prediction During the COVID-19 Pandemic: A Prospective Cohort Study

    Background: The SARS-CoV-2 virus has infected millions of people, overwhelming critical care resources in some regions. Many plans for rationing critical care resources during crises are based on the Sequential Organ Failure Assessment (SOFA) score. The COVID-19 pandemic created an emergent need to develop and validate a novel electronic health record (EHR)-computable tool to predict mortality. Research Question: Can a real-time mortality score that improves upon SOFA be rapidly developed, validated, and implemented for the COVID-19 pandemic? Study Design and Methods: We conducted a prospective cohort study of a regional health system with 12 hospitals in Colorado between March 2020 and July 2020. All patients >14 years old hospitalized during the study period without a do-not-resuscitate order were included. Patients were stratified by the diagnosis of COVID-19. From this cohort, we developed and validated a model using stacked generalization to predict mortality using data widely available in the EHR, by combining five previously validated scores and additional novel variables reported to be associated with COVID-19-specific mortality. We compared the area under the receiver operating characteristic curve (AUROC) for the new model to the SOFA score and the Charlson Comorbidity Index. Results: We prospectively analyzed 27,296 encounters, of which 1,358 (5.0%) were positive for SARS-CoV-2, 4,494 (16.5%) included intensive care unit (ICU)-level care, 1,480 (5.4%) included invasive mechanical ventilation, and 717 (2.6%) ended in death. The Charlson Comorbidity Index and SOFA scores predicted overall mortality with AUROCs of 0.72 and 0.90, respectively. Our novel score predicted overall mortality with an AUROC of 0.94. In the subset of patients with COVID-19, we predicted mortality with an AUROC of 0.90, whereas SOFA had an AUROC of 0.85. Interpretation: We developed and validated an accurate in-hospital mortality prediction score in a live EHR for automatic and continuous calculation, using a novel model that improved upon SOFA. Study Question: Can we improve upon the SOFA score for real-time mortality prediction during the COVID-19 pandemic by leveraging electronic health record (EHR) data? Results: We rapidly developed and implemented a novel yet SOFA-anchored mortality model across 12 hospitals and conducted a prospective cohort study of 27,296 adult hospitalizations, 1,358 (5.0%) of which were positive for SARS-CoV-2. The Charlson Comorbidity Index and SOFA scores predicted all-cause mortality with AUROCs of 0.72 and 0.90, respectively. Our novel score predicted mortality with an AUROC of 0.94. Interpretation: A novel EHR-based mortality score can be rapidly implemented to better predict patient outcomes during an evolving pandemic.
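    Stacked generalization, as used in the study above, fits several base models and then trains a final model on their out-of-fold predictions. A minimal sketch with scikit-learn (assumed available) follows; the generic base learners and synthetic imbalanced data below stand in for the five validated severity scores and the EHR cohort, which are not public.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic cohort with ~5% positive (death) rate, loosely mirroring the
# study's class imbalance
X, y = make_classification(n_samples=2000, n_features=10, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Base learners stand in for the individual severity scores; the final
# logistic regression learns how to weight their cross-validated predictions.
stack = StackingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=100, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
    stack_method="predict_proba",
    cv=5)
stack.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1])
print(f"stacked model AUROC: {auc:.3f}")
```

    Anchoring the stack on already-validated scores, as the study did, lets the meta-learner inherit their clinical face validity while the added EHR variables capture pandemic-specific signal.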