3,128 research outputs found

    Deep Risk Prediction and Embedding of Patient Data: Application to Acute Gastrointestinal Bleeding

    Acute gastrointestinal bleeding is a common and costly condition, accounting for over 2.2 million hospital days and $19.2 billion in medical charges annually. Risk stratification is a critical part of the initial assessment of patients with acute gastrointestinal bleeding. Although all national and international guidelines recommend the use of risk-assessment scoring systems, they are not commonly used in practice, have sub-optimal performance, may be applied incorrectly, and are not easily updated. With the advent of widespread electronic health record adoption, longitudinal clinical data captured during the clinical encounter are now available. However, these data are often noisy, sparse, and heterogeneous. Unsupervised machine learning algorithms may be able to identify structure within electronic health record data while accounting for key issues with the data generation process: measurements missing not at random and information captured in unstructured clinical note text. Deep learning tools can create electronic health record-based models that perform better than clinical risk scores for gastrointestinal bleeding and are well suited for learning from new data. Furthermore, these models can be used to predict risk trajectories over time, leveraging the longitudinal nature of the electronic health record. The foundation of creating relevant tools is the definition of a relevant outcome measure; in acute gastrointestinal bleeding, a composite outcome of red blood cell transfusion, hemostatic intervention, and all-cause 30-day mortality is a relevant, actionable outcome that reflects the need for hospital-based intervention. However, epidemiological trends may affect the relevance and effectiveness of the outcome measure when applied across multiple settings and patient populations. Understanding the trends in practice, potential areas of disparities, and the value proposition for using risk stratification in patients presenting to the Emergency Department with acute gastrointestinal bleeding is important in understanding how to best implement a robust, generalizable risk stratification tool. Key findings include a decrease in the rate of red blood cell transfusion since 2014 and disparities in access to upper endoscopy for patients with upper gastrointestinal bleeding by race/ethnicity across urban and rural hospitals. Projected accumulated savings from consistent implementation of risk stratification tools for upper gastrointestinal bleeding total approximately $1 billion five years after implementation. Most current risk scores were designed for use based on the location of the bleeding source: upper or lower gastrointestinal tract. However, the location of the bleeding source is not always clear at presentation. I develop and validate electronic health record-based deep learning and machine learning tools for patients presenting with symptoms of acute gastrointestinal bleeding (e.g., hematemesis, melena, hematochezia), which is more relevant and useful in clinical practice. I show that they outperform the leading clinical risk scores for upper and lower gastrointestinal bleeding, the Glasgow-Blatchford score and the Oakland score. While the best-performing gradient-boosted decision tree model has overall performance equivalent to the fully connected feedforward neural network model, at the very-low-risk threshold of 99% sensitivity the deep learning model identifies more very-low-risk patients.
Using another deep learning model that can capture longitudinal risk, the long short-term memory recurrent neural network, the need for red blood cell transfusion can be predicted at every 4-hour interval during the first 24 hours of intensive care unit stay for high-risk patients with acute gastrointestinal bleeding. Finally, for implementation it is important to find patients with symptoms of acute gastrointestinal bleeding in real time and to characterize patients by risk using available data in the electronic health record. A decision rule-based electronic health record phenotype has positive predictive value equivalent to that of deep learning and natural language processing-based models, and after live implementation it appears to have increased use of the Acute Gastrointestinal Bleeding Clinical Care pathway. Patients with acute gastrointestinal bleeding can be differentiated from patients with other groups of disease concepts by directly mapping unstructured clinical text to a common ontology, treating the resulting vector of concepts as signals on a knowledge graph, and comparing patients using unbalanced diffusion earth mover’s distances on the graph. For electronic health record data with values missing not at random, MURAL, an unsupervised random forest-based method, handles missing values and generates visualizations that characterize patients with gastrointestinal bleeding. This thesis forms a basis for understanding the potential for machine learning and deep learning tools to characterize risk for patients with acute gastrointestinal bleeding. In the future, these tools may be critical in implementing integrated risk assessment to keep low-risk patients out of the hospital and to guide resuscitation and timely endoscopic procedures for patients at higher risk of clinical decompensation.
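The 99% sensitivity threshold mentioned above can be illustrated with a short sketch. This is a minimal, hypothetical example (simulated labels and risk scores, not the thesis data or models) of how a probability cutoff can be chosen so that a model retains 99% sensitivity for the composite outcome, after which patients scored below that cutoff are counted as very low risk.

```python
# Minimal sketch, assuming simulated data: pick the probability cutoff at which a risk
# model keeps 99% sensitivity, then count patients falling below it ("very low risk").
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                 # 1 = composite outcome occurred (toy labels)
y_prob = y_true * 0.4 + rng.random(1000) * 0.6         # toy model-predicted risk scores in [0, 1]

fpr, tpr, thresholds = roc_curve(y_true, y_prob)
idx = np.argmax(tpr >= 0.99)                           # first threshold index reaching 99% sensitivity
cutoff = thresholds[idx]

very_low_risk = y_prob < cutoff                        # patients predicted negative at this cutoff
print(f"Cutoff at 99% sensitivity: {cutoff:.3f}")
print(f"Patients triaged as very low risk: {very_low_risk.sum()} / {len(y_prob)}")
```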

    Using machine learning to predict individual severity estimates of alcohol withdrawal syndrome in patients with alcohol dependence

    Despite its high prevalence in diverse clinical settings, treatment of alcohol withdrawal syndrome (AWS) is mainly based on subjective clinical opinion. Without reliable predictors of potentially harmful AWS outcomes at the individual patient level, decisions such as the provision of pharmacotherapy rely on resource-intensive in-patient monitoring. By contrast, an accurate risk prognosis would enable timely preemptive treatment, open up possibilities for safe out-patient care and lead to a more efficient use of health care resources. The aim of this project was to develop such tools using clinical and patient-reported information easily attainable at the patient’s admission. To this end, a machine learning framework incorporating nested cross-validation, ensemble learning, and external validation was developed to derive accurate, generalizable prediction models for three meaningful AWS outcomes: (1) separating mild from more severe AWS as defined by the established AWS scale, and directly identifying patients at risk of (2) delirium tremens as well as (3) withdrawal seizures. Based on 121 sociodemographic, clinical and laboratory-based variables that were retrieved retrospectively from the patients’ charts, this classification paradigm was used to build predictive models in two cohorts of AWS patients at major detoxification wards in Munich (Ludwig-Maximilian-Universität München, n=389; Technische Universität München, n=805). Moderate to severe AWS cases were predicted with significant balanced accuracy (BAC) in both cohorts (LMU, BAC = 69.4%; TU, BAC = 55.9%). A post-hoc association between the models’ poor-outcome predictions and higher clomethiazole doses further added to their clinical validity. While delirium tremens cases were accurately identified in the TU cohort (BAC = 75%), the framework yielded no significant model for withdrawal seizures. Variable importance analyses revealed that predictive patterns varied considerably between treatment sites and withdrawal outcomes. Besides several previously described variables (most notably, low platelet count and cerebral lesions), several new predictors were identified (history of blood pressure abnormalities, positive urine-based benzodiazepine screening and years of schooling), emphasizing the utility of data-driven, hypothesis-free prediction approaches. Due to limitations of the datasets as well as site-specific patient characteristics, the models did not generalize across treatment sites, highlighting the need to conduct strict validation procedures before implementing prediction tools in clinical care. In conclusion, this dissertation provides evidence on the utility of machine learning methods to enable personalized risk predictions for AWS severity. More specifically, nested cross-validation and ensemble learning could be used to ensure generalizable, clinically applicable predictions in future prospective research based on multi-center collaboration.
Despite decades of scientific effort, the prognostic assessment of withdrawal symptom severity in patients with alcohol dependence still rests on subjective clinical judgment. Detoxification treatments are therefore carried out worldwide predominantly in inpatient settings to ensure close clinical monitoring. Since over 90% of withdrawal syndromes run their course with only mild vegetative symptoms, this approach ties up valuable resources.
Data-driven prediction tools could make an important contribution toward individualized, accurate and reliable prognostic assessment. Such an assessment would enable safe out-patient treatment concepts, prophylactic pharmacological treatment of at-risk patients, and innovative treatment research based on stratified risk groups. The aim of this work was to develop such predictive tools for patients with alcohol withdrawal syndrome (AWS). To this end, an innovative machine learning paradigm employing strict validation methods (nested cross-validation and out-of-sample external validation) was used to develop generalizable, accurate prediction models for three meaningful clinical endpoints of AWS: (1) the classification of mild as opposed to moderate-to-severe AWS courses, defined according to an established clinical scale (the AWS scale), as well as the direct identification of the complications (2) delirium tremens (DT) and (3) cerebral withdrawal seizures (WS). This paradigm was applied, using 121 retrospectively collected clinical, laboratory-based and sociodemographic variables, to 1,194 patients with alcohol dependence at two large detoxification wards in Munich (Ludwig-Maximilian-Universität München, n=389; Technische Universität München, n=805). Moderate to severe AWS courses were predicted with significant accuracy (balanced accuracy [BAC]) at both treatment centers (LMU, BAC = 69.4%; TU, BAC = 55.9%). In a post-hoc analysis, the prediction of moderate to severe courses was also associated with higher cumulative clomethiazole doses, which supports the clinical validity of the models. While DT could be identified with high accuracy in the TU cohort (BAC = 75%), the prediction of withdrawal seizures was not successful. An exploratory analysis showed that the predictive importance of individual variables varied considerably both between treatment centers and between endpoints. In addition to several predictively valuable variables already described in earlier scientific work (in particular, a lower average platelet count and structural cerebral lesions), several new predictors were identified (history of blood pressure abnormalities, positive urine screening for benzodiazepines, and years of schooling). These results underscore the value of data-driven, hypothesis-free prediction approaches. Owing to limitations of the retrospective dataset, such as the lack of cross-site availability of some variables, as well as clinical particularities of the two cohorts, the models could not be validated at the respective other treatment center. This result underscores the need to adequately test the generalizability of prediction results before tools based on them are recommended for clinical practice. Such methods were used in this work for the first time in a research project on AWS. In summary, the results of this dissertation demonstrate for the first time the utility of machine learning approaches for individualized risk prediction of severe AWS courses.
The cross-validated machine learning paradigm used here would be one possible analytic approach for achieving reliable prediction results with high clinical application potential in future prospective multi-center studies.
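As a rough illustration of the nested cross-validation idea described in this abstract, the sketch below tunes a classifier's hyperparameters in an inner loop and scores balanced accuracy in an outer loop. The data, the choice of a random forest, and the hyperparameter grid are all invented for the example; this is not the dissertation's framework.

```python
# Illustrative sketch only: nested cross-validation with balanced accuracy as the metric.
# Inner loop tunes hyperparameters; outer loop estimates generalization of the whole procedure.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

# Simulated feature matrix X and imbalanced binary outcome y (e.g., mild vs. more severe AWS).
X, y = make_classification(n_samples=500, n_features=30, weights=[0.8, 0.2], random_state=0)

inner_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)

# Inner loop: hyperparameter search optimizing balanced accuracy.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"max_depth": [3, 5, None], "n_estimators": [100, 300]},
    scoring="balanced_accuracy",
    cv=inner_cv,
)

# Outer loop: balanced accuracy of the full tuning-plus-fitting pipeline on held-out folds.
nested_bac = cross_val_score(search, X, y, scoring="balanced_accuracy", cv=outer_cv)
print(f"Nested CV balanced accuracy: {nested_bac.mean():.3f} ± {nested_bac.std():.3f}")
```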

    Rapid Response Teams versus Critical Care Outreach Teams: Unplanned Escalations in Care and Associated Outcomes

    The incidence of unplanned escalations during hospitalization is undocumented, but estimates may be as high as 1.2 million occurrences per year in the United States. Rapid Response Teams (RRT) were developed for the early recognition and treatment of deteriorating patients to deliver time-sensitive interventions, but evidence related to optimal activation criteria and structure is limited. The purpose of this study was to determine whether an Early Warning Score-based Critical Care Outreach (CCO) model is related to the frequency of unplanned intra-hospital escalations in care compared with an RRT system based on staff nurse identification of vital sign derangements and physical assessments. The RRT model, in which staff nurses identified vital sign derangements to activate the system, was compared with the addition of a CCO model, in which rapid response nurses activated the system based on Early Warning Score line graphs of patient condition over time. Logistic regressions were used to examine retrospective data from administrative datasets at a 237-bed community non-teaching hospital during two periods: 1) a baseline period with the RRT model (n=5,875; Phase 1: October 1, 2010 – March 31, 2011), and 2) an intervention period with the RRT/CCO model (n=6,273; Phase 2: October 1, 2011 – March 31, 2012). The strongest predictor of unplanned escalations to the Intensive Care Unit was the type of rapid response system model. Unplanned ICU transfers were 1.4 times more likely to occur during the Phase 1 RRT period. In contrast, the type of rapid response model was not a significant predictor when all unplanned escalations (any type) were grouped together (medical-surgical-to-intermediate, medical-surgical-to-ICU and intermediate-to-ICU). This is the first study to report a relationship between unplanned escalations and different rapid response models. Based on the finding of fewer unplanned ICU transfers in the setting of a CCO model, health services researchers and clinicians should consider using automated Early Warning Score graphs for hospital-wide surveillance of patient condition as a safety strategy.
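The headline result (unplanned ICU transfers roughly 1.4 times more likely under the RRT-only model) comes from logistic regression. The sketch below, run on simulated admissions rather than the study's data, shows how such an adjusted odds ratio can be read off an exponentiated logistic regression coefficient; all variable names and effect sizes are illustrative assumptions.

```python
# Hedged sketch on simulated data: adjusted odds ratio for unplanned ICU transfer by phase.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 12000
df = pd.DataFrame({
    "rrt_only_phase": rng.integers(0, 2, n),   # 1 = Phase 1 (RRT model), 0 = Phase 2 (RRT/CCO)
    "age": rng.normal(65, 15, n),              # illustrative covariate
})

# Simulate higher transfer odds during the RRT-only phase (coefficient 0.34 ≈ OR 1.4).
logit = -4 + 0.34 * df["rrt_only_phase"] + 0.01 * (df["age"] - 65)
df["unplanned_icu_transfer"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["rrt_only_phase", "age"]])
model = sm.Logit(df["unplanned_icu_transfer"], X).fit(disp=False)
odds_ratios = np.exp(model.params)             # exponentiated coefficients = adjusted ORs
print(odds_ratios)                             # OR for rrt_only_phase should be near 1.4
```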

    Walk This Way: Predicting Postoperative And Discharge Outcomes Among Ambulatory Surgical Patients

    Within the ambulatory surgical setting, existing risk prediction models focus predominantly on postoperative factors of nausea, vomiting, and pain, but do not uniformly specify preoperative predictors of outcomes across multiple surgical specialties. Identification of preoperative markers, specifically those that are reversible, is key to improving risk stratification and designing patient-specific clinical interventions. Recent work shows that preoperative gait speed is a promising marker of postoperative morbidity and mortality within the inpatient surgical population. However, it remains to be explored whether gait speed is sensitive enough to delineate discharge and postoperative outcomes within the ambulatory surgical population. We sought to determine which specific preoperative factors independently predict discharge readiness outcomes among ambulatory surgical patients. To address this aim and following Institutional Review Board (IRB) approval, we performed a cross-sectional analysis of data from a prospective observational study of 602 ambulatory surgical patients. The primary outcomes were: 1) time to home discharge readiness from the ambulatory post-anesthesia care unit (PACU), and 2) 24-h postoperative occurrence of nausea, vomiting and bleeding. We evaluated the occurrence of unanticipated admissions from the ambulatory PACU to ancillary care units (inpatient wards and critical care) as a post hoc secondary outcome. Preoperative measures were gait speed (6.1 meters divided by the average time to walk 6.1 meters), mean arterial pressure, heart rate, demographic factors and other clinical covariates. Statistical analysis was done with SAS, version 9.2 (Cary, NC), and p < 0.05 was considered statistically significant. Participants were 54.3% female, the mean gait speed was 0.90 ± 0.18 m/s, and the median home discharge readiness time was 89 minutes (interquartile range 61-126). Multivariable Cox regression analyses showed that gait speed (≥1 m/s vs. < 1 m/s) was an independent predictor of time to home discharge readiness after adjustment for covariates (adjusted hazard ratio = 1.25 (95% CI: 1.03-1.50), p = 0.02). For the primary outcomes, independent predictors of home discharge readiness ≤90 minutes were preoperative heart rate, mean arterial pressure, and gait speed (adjusted odds ratio = 2.33 (95% CI: 1.52-3.54), p < 0.0001), with all other covariates held constant. Monte Carlo cross-validation (using 2×10⁴ iterations) showed that the mean percentage of correctly classified predictions by our model was 65.6% (95% CI: 61.8-69.4). However, gait speed was not independently associated with 24-h postoperative complications, p = 0.35. Predictors of unanticipated admissions included a history of cardiac surgery, prior hospitalizations, and gait speed (adjusted odds ratio = 0.54 (95% CI: 0.38-0.82), p = 0.003). We present the first cross-validated prediction model of outcomes in the ambulatory surgical setting and identify preoperative heart rate, mean arterial pressure and gait speed as three important modifiable factors that independently associate with home discharge readiness time ≤90 minutes. Our findings underscore the importance of preoperative measures and elements of patients' history for potential risk stratification and resource allocation. We conclude that a focus on reversible clinical markers may help identify those patients at risk for delayed discharge in the ambulatory surgical setting.
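The gait speed definition and the Monte Carlo cross-validation step lend themselves to a small sketch. Everything below is simulated for illustration (602 invented patients, far fewer splits than the 2×10⁴ used above, and a plain logistic model); it simply shows how gait speed is computed from the 6.1 m walk time, dichotomized at 1 m/s, and how repeated random train/test splits yield a mean correct-classification percentage.

```python
# Minimal sketch with made-up data: gait speed feature plus Monte Carlo cross-validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import ShuffleSplit, cross_val_score

rng = np.random.default_rng(7)
n = 602
walk_time_s = rng.normal(6.8, 1.2, n)          # seconds to walk the 6.1 m course
gait_speed = 6.1 / walk_time_s                 # m/s
heart_rate = rng.normal(75, 12, n)             # preoperative heart rate (bpm)
map_mmHg = rng.normal(90, 10, n)               # mean arterial pressure (mmHg)

X = np.column_stack([gait_speed >= 1.0, heart_rate, map_mmHg]).astype(float)
# Toy outcome: home discharge readiness within 90 minutes, made more likely by faster gait.
y = (rng.random(n) < 0.4 + 0.2 * (gait_speed >= 1.0)).astype(int)

# Monte Carlo cross-validation: many random train/test splits (200 here for speed).
mccv = ShuffleSplit(n_splits=200, test_size=0.25, random_state=0)
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=mccv, scoring="accuracy")
print(f"Mean correctly classified: {100 * acc.mean():.1f}%")
```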

    Artificial intelligence and automation in valvular heart diseases

    Artificial intelligence (AI) is gradually changing every aspect of social life, and healthcare is no exception. Clinical procedures that previously could only be handled by human experts can now be carried out by machines in a more accurate and efficient way. The coming era of big data and the advent of supercomputers provide great opportunities for the development of AI technology for the enhancement of diagnosis and clinical decision-making. This review provides an introduction to AI and highlights its applications in the clinical flow of diagnosing and treating valvular heart diseases (VHDs). More specifically, this review first introduces some key concepts and subareas in AI. Secondly, it discusses the application of AI in heart sound auscultation and medical image analysis for assistance in diagnosing VHDs. Thirdly, it describes the use of AI algorithms to identify risk factors and predict mortality in cardiac surgery. This review also describes the state-of-the-art autonomous surgical robots and their roles in cardiac surgery and intervention.

    Fall prediction using behavioural modelling from sensor data in smart homes.

    The number of methods for identifying potential fall risk is growing as the rate of elderly fallers continues to rise in the UK. Assessments for identifying risk of falling are usually performed in hospitals and other laboratory environments; however, these are costly and cause inconvenience for the subject and health services. Replacing these intrusive testing methods with a passive in-home monitoring solution would provide a less time-consuming and cheaper alternative. As sensors become more readily available, machine learning models can be applied to the large amount of data they produce. This can support activity recognition, fall detection, prediction and risk determination. In this review, the growing complexity of sensor data, the required analysis, and the machine learning techniques used to determine risk of falling are explored. The current research on using passive monitoring in the home is discussed, while the viability of active monitoring using vision-based and wearable sensors is considered. Methods of fall detection, prediction and risk determination are then compared.
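As a purely illustrative sketch of the kind of modelling such a review surveys, the example below trains a classifier on simulated behavioural features that passive home sensors might provide (nightly bathroom visits, transfer time, daily activity counts). The features, labels, and model choice are all assumptions made for the example, not results from any cited study.

```python
# Illustrative only: fall-risk classification from simulated home-sensor features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 400
features = np.column_stack([
    rng.poisson(2, n),            # nightly bathroom visits (motion/door sensors)
    rng.normal(3.0, 0.8, n),      # mean sit-to-stand transfer time in seconds
    rng.normal(900, 200, n),      # daily activity counts from ambient motion sensors
])
# Toy label: elevated fall risk, made more likely by slower transfers.
risk = (rng.random(n) < 0.2 + 0.1 * (features[:, 1] > 3.5)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(features, risk, stratify=risk, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```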

    Enhancing the Identification of Rheumatoid Arthritis-Associated Lung Disease

    Rheumatoid arthritis (RA) is a systemic autoimmune disease that predisposes afflicted individuals to reduced quality of life, physical disability, and premature mortality. While joint involvement is the primary manifestation of RA, extra-articular features including lung disease are responsible for a significant portion of the excess mortality. In this dissertation I demonstrate the contribution of chronic lung diseases to premature mortality in RA, contrasting with the more widely recognized RA comorbidity of cardiovascular disease. Then, I establish that a novel serum biomarker, anti-malondialdehyde-acetaldehyde adduct (MAA) antibody, is associated with the presence of interstitial lung disease (ILD) in RA subjects. Further implicating its role in the pathogenesis of RA-ILD, I demonstrate the presence of MAA as well as the co-localization of MAA with RA autoantigens and immune effector cells in the lungs of RA-ILD subjects. Finally, I describe how biomedical informatics algorithms that incorporate multiple ILD diagnosis codes, provider specialty, and diagnostic testing can accurately classify ILD status in RA subjects. Together, these studies advance our ability to identify RA-associated lung diseases across the spectrum of clinical and translational research. These results will pave the way for future clinical and translational research studies to compose biomarker panels that aid in the screening of RA subjects for lung disease, identify pathways that could be targeted for novel therapeutics in RA-ILD, and facilitate the completion of comparative effectiveness and outcomes research studies using real-world data.
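To make the phenotyping idea concrete, here is a hypothetical rule-based sketch that combines diagnosis codes, provider specialty, and diagnostic testing into an ILD classification. The specific rule (two ILD codes, at least one from pulmonology, plus chest CT or lung biopsy evidence) is an assumption for illustration, not the validated algorithm from the dissertation.

```python
# Hypothetical rule-based ILD phenotype combining codes, specialty, and diagnostic testing.
from dataclasses import dataclass, field

@dataclass
class PatientRecord:
    ild_codes: list = field(default_factory=list)   # (diagnosis_code, provider_specialty) pairs
    has_chest_ct: bool = False                       # chest CT on record
    has_lung_biopsy: bool = False                    # lung biopsy on record

def classify_ild(rec: PatientRecord) -> bool:
    """Return True if the record meets this illustrative ILD phenotype rule."""
    enough_codes = len(rec.ild_codes) >= 2
    pulmonology_code = any(spec == "pulmonology" for _, spec in rec.ild_codes)
    diagnostic_support = rec.has_chest_ct or rec.has_lung_biopsy
    return enough_codes and pulmonology_code and diagnostic_support

example = PatientRecord(
    ild_codes=[("J84.10", "rheumatology"), ("J84.10", "pulmonology")],
    has_chest_ct=True,
)
print(classify_ild(example))  # True under this toy rule
```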

    Can routinely collected electronic health data be used to develop novel healthcare associated infection surveillance tools?

    Background: Healthcare associated infections (HCAI) pose a significant burden to health systems both within the UK and internationally. Surveillance is an essential component of any infection control programme; however, traditional surveillance systems are time-consuming and costly. Large amounts of routine electronic data are collected within the English NHS, yet these are not currently exploited for HCAI surveillance. Aim: To investigate whether routinely collected electronic hospital data can be exploited for HCAI surveillance within the NHS. Methods: This thesis made use of local linked electronic health data from Imperial College Healthcare NHS Trust, including information on patient admissions, discharges, diagnoses, procedures, laboratory tests, diagnostic imaging requests and traditional infection surveillance data. To establish the evidence base on surveillance and risks of HCAI, two literature reviews were carried out. Based on these, three types of innovative surveillance tools were generated and assessed for their utility and applicability. Results: The key findings were, firstly, the emerging importance of automated and syndromic surveillance in infection surveillance, but also the lack of investigation and application of these tools within the NHS. Syndromic surveillance of surgical site infections was successful in coronary artery bypass graft patients; however, it was an inappropriate methodology for caesarean section patients. Automated case detection of healthcare associated urinary tract infections (HCA UTI), based on electronic microbiology data, demonstrated similar rates of infection to those recorded during a point prevalence survey. Routine administrative data demonstrated mixed utility in the creation of simplified risk scores for infection, with poorly performing risk models for surgical site infections but reasonable model fit for HCA UTI. Conclusion: Whilst in principle routine administrative data can be used to generate novel surveillance tools for healthcare associated infections, in reality this is not yet practical within the IT infrastructure of the NHS.
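A simple sketch of what automated case detection from electronic microbiology data could look like is given below. The 48-hour onset rule and the 10⁵ CFU/mL growth threshold are common conventions assumed here for the example; they are not necessarily the definitions used in the thesis.

```python
# Illustrative rule, assuming a 48-hour onset window and >= 1e5 CFU/mL significant growth.
from datetime import datetime, timedelta

def is_hca_uti(admitted: datetime, specimen_taken: datetime,
               organism_isolated: bool, cfu_per_ml: float) -> bool:
    """Flag a healthcare associated UTI under a simple 48-hour, >=1e5 CFU/mL rule."""
    significant_growth = organism_isolated and cfu_per_ml >= 1e5
    onset_after_48h = specimen_taken - admitted >= timedelta(hours=48)
    return significant_growth and onset_after_48h

# Example: positive urine culture collected three days after admission.
print(is_hca_uti(datetime(2024, 3, 1, 9, 0), datetime(2024, 3, 4, 14, 0),
                 organism_isolated=True, cfu_per_ml=2e5))  # True
```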