49 research outputs found

    The Quality Application of Deep Learning in Clinical Outcome Predictions Using Electronic Health Record Data: A Systematic Review

    Introduction: The Electronic Health Record (EHR) is a significant source of medical data that can be used to develop predictive models with therapeutically useful outcomes. Predictive modelling using EHR data has been increasingly utilized in healthcare, achieving outstanding performance and improving healthcare outcomes. Objectives: The main goal of this review is to examine the different deep learning approaches and techniques applied to EHR data processing. Methods: The PubMed database was searched to find potentially relevant articles that applied deep learning to EHR data. We assessed and summarized deep learning performance in a number of clinical applications that focus on predicting specific clinical outcomes from EHR data, and we compared the results with those of conventional machine learning models. Results: A total of 57 papers were included in this study. Five categories of clinical outcome prediction were identified: illness (n=33), intervention (n=6), mortality (n=5), hospital readmission (n=7), and duration of stay (n=1). The majority of studies (39 out of 57) used structured EHR data. RNNs were the most frequently used deep learning models (LSTM: 17 studies; GRU: 6 studies). The analysis shows that deep learning models have excelled when applied to a variety of clinical outcome predictions. While the application of deep learning to EHR data has advanced rapidly, it is crucial that these models remain reliable, offering critical insights to assist clinicians in making informed decisions. Conclusions: The findings demonstrate that deep learning can outperform classic machine learning techniques because it can exploit extensive and sophisticated datasets, such as the longitudinal data found in EHRs. We expect deep learning to keep expanding because it has been quite successful in enhancing healthcare outcomes using EHR data.
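
    As a rough illustration of the RNN approaches the review tallies most often, the sketch below shows a minimal LSTM classifier over visit-level EHR feature sequences. It is not taken from any of the surveyed studies; the class name, layer sizes, and synthetic input data are all assumptions made for illustration.

# Minimal sketch (assumed, not from the review) of an LSTM that reads a sequence
# of visit-level EHR feature vectors and outputs a clinical-outcome probability.
import torch
import torch.nn as nn

class EHROutcomeLSTM(nn.Module):
    def __init__(self, n_features: int, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):            # x: (batch, visits, n_features)
        _, (h_n, _) = self.lstm(x)   # h_n: (num_layers, batch, hidden_size)
        return torch.sigmoid(self.head(h_n[-1])).squeeze(-1)

# Usage with synthetic data: 8 patients, 20 visits, 32 features per visit.
model = EHROutcomeLSTM(n_features=32)
probs = model(torch.randn(8, 20, 32))   # per-patient outcome probabilities
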

    Temporal convolution attention model for sepsis clinical assistant diagnosis prediction

    Sepsis is an organ failure disease caused by an infection acquired in an intensive care unit (ICU), and it leads to a high mortality rate. Developing intelligent monitoring and early warning systems for sepsis is a key research area in the field of smart healthcare. Early and accurate identification of patients at high risk of sepsis can help doctors make the best clinical decisions and reduce the mortality rate of patients with sepsis. However, the scientific understanding of sepsis remains inadequate, leading to slow progress in sepsis research. With the accumulation of electronic medical records (EMRs) in hospitals, data mining technologies that can identify patient risk patterns from the vast amount of sepsis-related EMRs, together with the development of smart surveillance and early warning models, show promise in reducing mortality. Based on the Medical Information Mart for Intensive Care III (MIMIC-III), a massive dataset of ICU EMRs published by MIT and Beth Israel Deaconess Medical Center, we propose a Temporal Convolution Attention Model for Sepsis Clinical Assistant Diagnosis Prediction (TCASP) to predict the incidence of sepsis infection in ICU patients. First, sepsis patient data are extracted from the EMRs. Then, the incidence of sepsis is predicted based on various physiological features of sepsis patients in the ICU. Finally, the TCASP model is utilized to predict the time of the first sepsis infection in ICU patients. The experiments show that the proposed model achieves an area under the receiver operating characteristic curve (AUROC) score of 86.9% (an improvement of 6.4%) and an area under the precision-recall curve (AUPRC) score of 63.9% (an improvement of 3.9%) compared to five state-of-the-art models.
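
    To make the kind of architecture described above more concrete, the following sketch combines a causal temporal convolution with attention pooling over ICU time series. It is not the published TCASP code; the class name, layer sizes, and input shapes are illustrative assumptions only.

# Hedged sketch of a temporal-convolution-plus-attention risk scorer over ICU
# time series; assumed structure, not the authors' TCASP implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalConvAttention(nn.Module):
    def __init__(self, n_features: int, channels: int = 64, kernel_size: int = 3):
        super().__init__()
        # Left padding keeps the convolution causal (no peeking at future hours).
        self.pad = kernel_size - 1
        self.conv = nn.Conv1d(n_features, channels, kernel_size)
        self.attn = nn.Linear(channels, 1)
        self.head = nn.Linear(channels, 1)

    def forward(self, x):                       # x: (batch, time, n_features)
        h = x.transpose(1, 2)                   # -> (batch, n_features, time)
        h = F.relu(self.conv(F.pad(h, (self.pad, 0))))
        h = h.transpose(1, 2)                   # -> (batch, time, channels)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over time steps
        ctx = (w * h).sum(dim=1)                # attention-pooled representation
        return torch.sigmoid(self.head(ctx)).squeeze(-1)

# Usage with synthetic data: 16 patients, 48 hourly steps, 40 physiological features.
risk = TemporalConvAttention(n_features=40)(torch.randn(16, 48, 40))
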

    Real-world evidence for the management of blood glucose in the intensive care unit

    Glycaemic control is a core aspect of patient management in the intensive care unit (ICU). Blood glucose has a well-known U-shaped relationship with mortality and morbidity in ICU patients, with both hypo- and hyperglycaemia associated with poor patient outcomes. As a result, 40-90% of ICU patients receive insulin, depending on illness severity and variation in clinical practice. Generally, clinical guidelines for glycaemic control are based on a series of trials that culminated in the NICE-SUGAR study in 2009, a multicentre study demonstrating that tight glycaemic control (a target of 80-110 mg/dL) did not improve patient outcomes compared to moderate control (<180 mg/dL). However, there remain open questions around the potential for more personalised blood glucose management, which real-world evidence sources such as electronic medical records (EMRs) can play a role in answering. This thesis investigates the role that EMRs can play in glycaemic control in the ICU using open-access EMR databases covering a heterogeneous cohort of patients from 208 hospitals in the USA (the eICU Collaborative Research Database, eICU-CRD) and a large tertiary medical centre in Boston, USA (MIMIC-III and MIMIC-IV). This thesis covers: i) curation and characterisation of the eICU-CRD cohort as a data resource for real-world evidence in glycaemic control; ii) investigation of whether blood lactate modifies the relationship between blood glucose and patient outcome across different subgroups; and iii) the development and comparison of machine learning and deep learning probabilistic forecasting algorithms for blood glucose. The analysis of the eICU-CRD demonstrated that there is wide variation in clinical practice around glycaemic control in the ICU. The results enable comparison with other data resources and assessment of the suitability of the eICU-CRD for addressing specific research questions related to glycaemic control and nutrition support. Informed by this descriptive analysis, the eICU-CRD was used to examine whether blood lactate modifies the relationship between blood glucose and patient outcome across different subgroups. While adjustment for blood lactate attenuated the relationship between blood glucose and patient outcome, blood glucose remained a marker of poor prognosis. Diabetic status was found to influence this relationship, in line with increasing evidence that diabetics and non-diabetics should be considered distinct populations for the purpose of glycaemic control in the ICU. The forecasting algorithms developed using MIMIC-III and MIMIC-IV were designed to account for the intrinsic statistical difficulties present in EMRs, including large numbers of potentially sparsely and irregularly measured input variables. The focus was on developing probabilistic approaches, given the measurement error in blood glucose measurements and their potential conversion into categorical forecasts if required. Two alternative approaches were proposed. The first was to use gradient boosted tree (GBT) algorithms, along with extensive feature engineering. The second was to use continuous-time recurrent neural networks (CTRNNs), which learn their own hidden features and account for irregular measurements by evolving the model hidden state using continuous-time dynamics. However, several CTRNN architectures were outperformed by an autoregressive GBT model (CatBoost), with only a long short-term memory (LSTM) and neural ODE based architecture (ODE-LSTM) achieving comparable performance on probabilistic forecasting metrics such as the continuous ranked probability score (ODE-LSTM: 0.118±0.001; CatBoost: 0.118±0.001), ignorance score (0.152±0.008; 0.149±0.002), and interval score (175±1; 176±1). Further, the GBT method was far easier and faster to train, highlighting the importance of using appropriate non-deep-learning benchmarks in the academic literature on novel statistical methodologies for the analysis of EMRs. The findings highlight that EMRs are a valuable resource for medical evidence generation and characterisation of current clinical practice. Future research should aim to continue investigating subgroup differences and utilise the forecasting algorithms as part of broader goals such as the development of personalised insulin recommendation algorithms.
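
    For reference, the continuous ranked probability score quoted above can be estimated from Monte Carlo samples of a probabilistic forecast as E|X - y| - 0.5 E|X - X'|. The sketch below is one such sample-based estimator; the thesis may rely on a library implementation, so the function name and the example values are assumptions made purely for illustration.

# Sample-based CRPS estimate (assumed helper, not the thesis's own code):
# CRPS = E|X - y| - 0.5 * E|X - X'| for forecast samples X and observation y.
import numpy as np

def crps_from_samples(samples: np.ndarray, observed: float) -> float:
    samples = np.asarray(samples, dtype=float)
    term_obs = np.mean(np.abs(samples - observed))                     # E|X - y|
    term_spread = np.mean(np.abs(samples[:, None] - samples[None, :])) # E|X - X'|
    return term_obs - 0.5 * term_spread

# Usage: a forecast distribution over the next blood glucose value (mg/dL).
forecast_samples = np.random.normal(loc=140.0, scale=20.0, size=1000)
print(crps_from_samples(forecast_samples, observed=155.0))
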

    Rethinking Human-AI Collaboration in Complex Medical Decision Making: A Case Study in Sepsis Diagnosis

    Today's AI systems for medical decision support often succeed on benchmark datasets in research papers but fail in real-world deployment. This work focuses on decision making for sepsis, an acute, life-threatening systemic infection that requires early diagnosis by clinicians under high uncertainty. Our aim is to explore the design requirements for AI systems that can support clinical experts in making better decisions for the early diagnosis of sepsis. The study begins with a formative study investigating why clinical experts abandon an existing AI-powered sepsis predictive module in their electronic health record (EHR) system. We argue that a human-centered AI system needs to support human experts in the intermediate stages of a medical decision-making process (e.g., generating hypotheses or gathering data), instead of focusing only on the final decision. Therefore, we build SepsisLab based on a state-of-the-art AI algorithm and extend it to predict the future projection of sepsis development, visualize the prediction uncertainty, and propose actionable suggestions (i.e., which additional laboratory tests can be collected) to reduce such uncertainty. Through heuristic evaluation with six clinicians using our prototype system, we demonstrate that SepsisLab enables a promising human-AI collaboration paradigm for the future of AI-assisted sepsis diagnosis and other high-stakes medical decision making.

    Utilizing Temporal Information in The EHR for Developing a Novel Continuous Prediction Model

    Type 2 diabetes mellitus (T2DM) is a highly prevalent chronic condition nationwide that incurs both direct and indirect healthcare costs. According to previous clinical research, however, T2DM is a preventable chronic condition. Many prediction models have been based on the risk factors identified by clinical trials. Because nationwide screening is not cost-effective, one of the major tasks of T2DM prediction models is to estimate risk and select patients for further testing by HbA1c or fasting plasma glucose to determine whether they have T2DM. Those models had substantial limitations related to data quality, such as missing values. In this dissertation, I tested the conventional models, which were based on the most widely used risk factors, to predict the probability of developing T2DM. The average AUC was 0.5, which implies that the conventional model cannot be used to screen for T2DM risk. Based on this result, I further implemented three types of representation for building the T2DM prediction model: non-temporal, interval-temporal, and continuous-temporal. According to the results, the continuous-temporal representation, which was based on deep learning methods, had the best performance. This result implies that deep learning methods can overcome the data quality issues and achieve better performance. This dissertation also contributes a continuous risk output model based on the seq2seq model. This model can generate a monotonically increasing function for a given patient to predict the future probability of developing T2DM. The model is workable but still has many limitations to overcome. Finally, this dissertation identifies some risk factors that are underestimated and worthy of further research to revise the current T2DM screening guideline. These results are still preliminary, and I will need to collaborate with epidemiologists and experts in other fields to verify the findings. In the future, the methods for building a T2DM prediction model can also be applied to prediction models for other chronic conditions.
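
    One way such a monotonically increasing risk output can be produced, sketched below as an assumption rather than the dissertation's actual seq2seq model, is to have a decoder emit non-negative hazard increments that are accumulated into a cumulative risk curve; all class names, layer sizes, and inputs are illustrative.

# Hedged sketch: a GRU encoder over the patient's EHR history and a decoder that
# outputs non-negative hazard increments, accumulated into a monotone risk curve
# via 1 - exp(-cumulative hazard). Assumed design, not the dissertation's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotonicRiskDecoder(nn.Module):
    def __init__(self, n_features: int, hidden: int = 32, horizon: int = 10):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, horizon)  # one raw increment per future step

    def forward(self, history):                    # history: (batch, visits, n_features)
        _, h_n = self.encoder(history)
        increments = F.softplus(self.decoder(h_n[-1]))  # non-negative hazard increments
        cumulative = torch.cumsum(increments, dim=-1)
        return 1.0 - torch.exp(-cumulative)        # monotonically increasing risk in [0, 1)

# Usage with synthetic data: 4 patients, 15 visits, 20 features; 10-step risk curve each.
risk_curve = MonotonicRiskDecoder(n_features=20)(torch.randn(4, 15, 20))
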