
    Prediction of Acute Kidney Injury With a Machine Learning Algorithm Using Electronic Health Record Data

    Background: A major problem in treating acute kidney injury (AKI) is that the clinical criteria for recognizing it are markers of established kidney damage or impaired function; treatment before such damage manifests is desirable. If patients with incipient AKI, and those at high risk of developing AKI, could be identified, clinicians could intervene during what may be a crucial window for preventing permanent kidney injury. Objective: In this study, we evaluate a machine learning algorithm for early detection and prediction of AKI. Design: We used a machine learning technique, boosted ensembles of decision trees, to train an AKI prediction tool on retrospective data from more than 300,000 inpatient encounters. Setting: Data were collected from inpatient wards at Stanford Medical Center and intensive care unit patients at Beth Israel Deaconess Medical Center. Patients: Patients older than 18 years of age whose hospital stays lasted between 5 and 1000 hours and who had at least one documented measurement of heart rate, respiratory rate, temperature, serum creatinine (SCr), and Glasgow Coma Scale (GCS). Measurements: We tested the algorithm’s ability to detect AKI at onset and to predict AKI 12, 24, 48, and 72 hours before onset. Methods: We tested AKI detection and prediction using the National Health Service (NHS) England AKI Algorithm as the gold standard. We additionally tested the algorithm’s ability to detect AKI as defined by the Kidney Disease: Improving Global Outcomes (KDIGO) guidelines. We compared the algorithm’s 3-fold cross-validation performance with the Sequential Organ Failure Assessment (SOFA) score for AKI identification in terms of area under the receiver operating characteristic curve (AUROC). Results: The algorithm demonstrated high AUROC for detecting and predicting NHS-defined AKI at all tested time points. It achieved an AUROC of 0.872 (95% confidence interval [CI], 0.867-0.878) for AKI detection at the time of onset. For prediction 12 hours before onset, it achieved an AUROC of 0.800 (95% CI, 0.792-0.809); for 24-hour prediction, 0.795 (95% CI, 0.785-0.804); and for 48-hour and 72-hour predictions, 0.761 (95% CI, 0.753-0.768) and 0.728 (95% CI, 0.719-0.737), respectively. Limitations: Because of the retrospective nature of this study, we cannot draw any conclusions about the impact the algorithm’s predictions would have on patient outcomes in a clinical setting. Conclusions: The results of these experiments suggest that a machine learning–based AKI prediction tool may offer important prognostic capabilities for determining which patients are likely to suffer AKI, potentially allowing clinicians to intervene before kidney damage manifests.
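
    The abstract names the method (boosted ensembles of decision trees, evaluated by 3-fold cross-validated AUROC) but not the implementation. The sketch below shows one plausible shape of that evaluation loop, using scikit-learn's GradientBoostingClassifier as a stand-in model; the feature set, labels, and synthetic data are hypothetical and are not the study's.

# Illustrative sketch only: gradient-boosted trees with 3-fold cross-validated AUROC.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

def cross_validated_auroc(X, y, n_splits=3, seed=0):
    """Return the per-fold AUROC of a boosted-tree AKI classifier."""
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    aurocs = []
    for train_idx, test_idx in cv.split(X, y):
        model = GradientBoostingClassifier(random_state=seed)
        model.fit(X[train_idx], y[train_idx])
        scores = model.predict_proba(X[test_idx])[:, 1]  # predicted P(AKI) per encounter
        aurocs.append(roc_auc_score(y[test_idx], scores))
    return np.array(aurocs)

# Toy usage with synthetic data standing in for vitals/SCr/GCS-derived features.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))                            # e.g. HR, RR, temp, SCr, GCS, age
y = (X[:, 3] + rng.normal(size=1000) > 1).astype(int)     # synthetic AKI label
print(cross_validated_auroc(X, y).mean())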

    Optimizing Clinical Decision Support in the Electronic Health Record

    Objective: Adoption of clinical decision support (CDS) tools by clinicians is often limited by workflow barriers. We sought to assess characteristics associated with clinician use of an electronic health record-embedded clinical decision support system (CDSS). Methods: In a prospective study of emergency department (ED) activation of a CDSS tool across 14 hospitals between 9/1/14 and 4/30/15, the CDSS was deployed at 10 active sites with an on-site champion, education sessions, iterative feedback, and up to 3 gift cards per clinician as an incentive. The tool was also deployed at 4 passive sites that received only an introductory educational session. Activation of the CDSS, which calculated the Pulmonary Embolism Severity Index (PESI) score and provided guidance, and associated clinical data were collected prospectively. We used multivariable logistic regression with random effects at the provider and facility levels to assess the association between activation of the CDSS tool and characteristics at: 1) the patient level (PESI score), 2) the provider level (demographics and clinical load at the time of the activation opportunity), and 3) the facility level (active vs. passive site, facility ED volume, and ED acuity at the time of the activation opportunity). Results: Of 662 eligible patient encounters, the CDSS was activated in 55%: 68% (346/512) at active sites and 13% (20/150) at passive sites. In bivariate analysis, activation rates at active sites increased with the number of gift cards the physician had already received (96% after 3 prior cards versus 60% after 0, p<0.0001). At passive sites, physicians younger than 40 had higher rates of activation (p=0.03). In multivariable analysis, active-site status, low ED volume at the time of diagnosis, and PESI class I or II (compared with III or higher) were associated with a higher likelihood of CDSS activation. Conclusions: On-site tool promotion significantly increased the odds of CDSS activation. Optimizing CDSS adoption requires active education.
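
    The CDSS's core calculation, the PESI score, follows a published point system, so a short sketch can make the tool's logic concrete. The scoring weights and class cut-offs below are the standard PESI rule; the function names and argument list are illustrative and are not taken from the study's software.

# Sketch of the standard PESI calculation (not the study's CDSS code).
def pesi_score(age, male, cancer, heart_failure, chronic_lung_disease,
               pulse, systolic_bp, respiratory_rate, temperature_c,
               altered_mental_status, spo2):
    score = age                                   # 1 point per year of age
    score += 10 if male else 0
    score += 30 if cancer else 0
    score += 10 if heart_failure else 0
    score += 10 if chronic_lung_disease else 0
    score += 20 if pulse >= 110 else 0
    score += 30 if systolic_bp < 100 else 0
    score += 20 if respiratory_rate >= 30 else 0
    score += 20 if temperature_c < 36.0 else 0
    score += 60 if altered_mental_status else 0
    score += 20 if spo2 < 90 else 0
    return score

def pesi_class(score):
    """Map a PESI score to risk class I-V (class I/II = low risk)."""
    if score <= 65:
        return "I"
    if score <= 85:
        return "II"
    if score <= 105:
        return "III"
    if score <= 125:
        return "IV"
    return "V"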

    Use of an Electronic Medical Record “Dotphrase” Data Template for a Prospective Head Injury Study

    Introduction: The adoption of electronic medical records (EMRs) in emergency departments (EDs) has changed the way that healthcare information is collected, charted, and stored. A challenge for researchers is to determine how EMRs may be leveraged to facilitate study data collection efforts. Our objective is to describe the use of a unique data collection system leveraging EMR technology and to compare data entry error rates to traditional paper data collection. Methods: This was a retrospective review of data collection methods during a multicenter study of ED, anti-coagulated, head injury patients. On-shift physicians at 4 centers enrolled patients and prospectively completed data forms. These physicians had the option of completing a paper data form or an electronic “dotphrase” (DP) data form. A feature of our Epic®-based EMR is the ability to use DPs to assist in medical information entry. A DP is a preset template that may be inserted into the EMR when the physician types a period followed by a code phrase (in this case “.ichstudy”). Once the study DP was inserted at the bottom of the electronic ED note, it prompted enrolling physicians to answer study questions. Investigators then extracted data directly from the EMR. Results: From July 2009 through December 2010, we enrolled 883 patients. DP data forms were used in 288 (32.6%; 95% confidence interval [CI] 29.5, 35.7%) cases and paper data forms in 595 (67.4%; 95% CI 64.3, 70.5%). Sixty-six (43.7%; 95% CI 35.8, 51.6%) of 151 physicians enrolling patients used DP data entry at least once. Using multivariate analysis, we found no association between physician age, gender, or tenure and DP use. Data entry errors were more likely on paper forms (234/595, 39.3%; 95% CI 35.4, 43.3%) than on DP forms (19/288, 6.6%; 95% CI 3.7, 9.5%), a difference in error rates of 32.7% (95% CI 27.9, 37.6%; P < 0.001). Conclusion: DP data collection is a feasible means of data collection. DP data forms maintain all study data within the secure EMR environment, obviating the need to maintain and collect paper data forms. This innovation was embraced by many of our emergency physicians and resulted in lower data entry error rates. [West J Emerg Med. 2013;14(2):109-113.]
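
    The headline result is the difference in error rates between paper and DP forms. As a quick arithmetic check, the reported 32.7% difference and its 95% CI can be reproduced from the counts in the abstract with a standard two-proportion normal-approximation interval; the helper below is an illustration, not the study's analysis code.

# Reproduce the paper-vs-DP error-rate difference and its 95% CI from the abstract's counts.
from math import sqrt

def diff_in_proportions(x1, n1, x2, n2, z=1.96):
    """Difference p1 - p2 with a normal-approximation 95% CI."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - z * se, diff + z * se)

diff, (lo, hi) = diff_in_proportions(234, 595, 19, 288)
print(f"difference = {diff:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
# -> difference = 32.7%, 95% CI (27.9%, 37.6%), matching the abstract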

    Development and Validation of a Risk Equation for Appendicitis in Children Presenting With Abdominal Pain

    Background: Appendicitis is a common surgical emergency in children, yet the diagnosis remains challenging. A widely used risk score, the Pediatric Appendicitis Score, is not sufficiently sensitive or specific to be used alone, and it classifies many patients as “intermediate risk.” The goals of this study were to develop and validate an improved appendicitis risk calculator for children with acute abdominal pain to aid in clinical decision-making. Methods: We developed our risk calculator using data from a multicenter cohort of children 5 to 18 years old presenting to the emergency department (ED) with acute abdominal pain. We validated the risk calculator in two independent cohorts with similar enrollment criteria. Patient history, physical examination, and laboratory data were prospectively recorded by clinicians during ED visits. Appendicitis was confirmed by pathology reports and follow-up telephone survey. Variables evaluated for inclusion in the risk calculator were: age, sex, pain duration, pain with walking, migration of pain, temperature, heart rate, guarding, maximal tenderness in the right lower quadrant, white blood cell count, and absolute neutrophil count. A stepwise regression approach was followed to select the best model, using the Akaike information criterion and the C-statistic. We forced inclusion of age and sex, including first-order interaction terms. Laboratory values were evaluated for nonlinear associations with appendicitis, and a two-step linear association was included. Validation included calibration and discrimination analyses. Results: The development sample included 2,423 children, of whom 40% had appendicitis; the validation sample included 1,426, of whom 35% had appendicitis. Our final risk calculator included sex, age, duration of pain, guarding, migration of pain, and absolute neutrophil count. In the validation sample, the calibration plot and Hosmer-Lemeshow test (P < 0.0001) showed high calibration, and discrimination was high (C-statistic = 0.86). Among 248 (17%) patients in the validation sample at < 5% predicted risk, 4% had appendicitis. Of an additional 318 (22%) patients with predicted risk of 5% to < 15%, appendicitis occurred in 8%. Of 48 (3.4%) patients at predicted risk of > 90%, 96% had appendicitis. Conclusion: Our validated pediatric appendicitis risk calculator can accurately quantify the risk of appendicitis and can identify children with acute abdominal pain at high or low risk for appendicitis.
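
    The methods amount to a logistic-regression risk calculator, selected by stepwise AIC and judged by discrimination (C-statistic) and calibration in a validation cohort. A minimal sketch of that pattern is below, assuming pandas DataFrames with hypothetical column names for the final predictors; it is not the published risk equation and does not reproduce its coefficients.

# Minimal sketch: logistic-regression risk model plus C-statistic on a validation cohort.
# dev_df and val_df are assumed to be pandas DataFrames containing an "appendicitis"
# outcome column (0/1) and the predictor columns named in the formula below.
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

# Stand-in names for the final predictors; the paper selected them via stepwise AIC.
PREDICTORS = ("sex + age + pain_duration_hr + guarding + "
              "pain_migration + abs_neutrophil_count")

def fit_risk_model(dev_df):
    # Fit the logistic regression on the development cohort.
    return smf.logit(f"appendicitis ~ {PREDICTORS}", data=dev_df).fit(disp=0)

def c_statistic(model, val_df):
    # Discrimination in the validation cohort = AUROC of the predicted risk.
    predicted_risk = model.predict(val_df)
    return roc_auc_score(val_df["appendicitis"], predicted_risk)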