1,485 research outputs found

    Blood Glucose Forecasting using LSTM Variants under the Context of Open Source Artificial Pancreas System

    High long-term accuracy of blood glucose prediction is essential for preventative diabetes management. Emerging closed-loop insulin delivery systems such as the artificial pancreas system (APS) provide opportunities for improved glycaemic control in patients with type 1 diabetes. Existing blood glucose prediction methods have proven effective only within a 30-minute horizon; accuracy deteriorates drastically as the prediction horizon increases to 45 and 60 minutes. Deep learning, especially long short-term memory (LSTM) networks and their variants, has recently achieved state-of-the-art results on tasks with complex time-series data. In this study, we present deep LSTM-based models capable of forecasting long-term blood glucose levels with improved prediction and clinical accuracy. We evaluate our approach on 20 cases (878,000 glucose values) from the Open Source Artificial Pancreas System (OpenAPS). For 30-minute and 45-minute prediction, our Stacked-LSTM achieved the best performance, with root-mean-square errors (RMSE) of 11.96 and 15.81 and Clarke Error Grid Zone A scores of 0.887 and 0.784. For 60-minute prediction, our ConvLSTM performed best, with RMSE = 19.6 and Zone A = 0.714. Our models outperform existing methods in both prediction and clinical accuracy. This research can support patients with type 1 diabetes in managing their behavior more preventatively and can be used in a future real APS context.
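    The two metrics reported above can be sketched as follows. This is a minimal, self-contained illustration of the evaluation, not the authors' code; the simplified Zone A rule (prediction within 20% of the reference, or both values below 70 mg/dL) is the standard Clarke Error Grid definition of Zone A.

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error between reference and predicted glucose (mg/dL)."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def clarke_zone_a_fraction(y_true, y_pred):
    """Fraction of points in Clarke Error Grid Zone A: prediction within
    20% of the reference value, or both values below 70 mg/dL."""
    in_a = sum(1 for t, p in zip(y_true, y_pred)
               if (t < 70 and p < 70) or abs(p - t) <= 0.2 * t)
    return in_a / len(y_true)
```

    For example, `rmse([100, 150], [110, 140])` returns 10.0, and a prediction of 119 mg/dL against a reference of 100 mg/dL falls in Zone A while 130 mg/dL does not.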

    Automatic Detection of Hypoglycemic Events From the Electronic Health Record Notes of Diabetes Patients: Empirical Study

    BACKGROUND: Hypoglycemic events are common and potentially dangerous conditions among patients being treated for diabetes. Automatic detection of such events could improve patient care and is valuable in population studies. Electronic health records (EHRs) are valuable resources for the detection of such events. OBJECTIVE: In this study, we aim to develop a deep-learning-based natural language processing (NLP) system to automatically detect hypoglycemic events from EHR notes. Our model is called the High-Performing System for Automatically Detecting Hypoglycemic Events (HYPE). METHODS: Domain experts reviewed 500 EHR notes of diabetes patients to determine whether each sentence contained a hypoglycemic event. We used this annotated corpus to train and evaluate HYPE, the high-performance NLP system for hypoglycemia detection. We built and evaluated both a classical machine learning model (ie, support vector machines [SVMs]) and state-of-the-art neural network models. RESULTS: We found that the neural network models outperformed the SVM model. The convolutional neural network (CNN) model yielded the highest performance in a 10-fold cross-validation setting: mean precision=0.96 (SD 0.03), mean recall=0.86 (SD 0.03), and mean F1=0.91 (SD 0.03). CONCLUSIONS: Despite the challenges posed by small and highly imbalanced data, our CNN-based HYPE system still achieved high performance for hypoglycemia detection. HYPE can be used for EHR-based hypoglycemia surveillance and population studies in diabetes patients.
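    The precision, recall, and F1 figures reported above can be computed from per-sentence labels as below; a minimal sketch of the evaluation metrics only, not of the HYPE system itself, where label 1 marks a sentence containing a hypoglycemic event.

```python
def precision_recall_f1(y_true, y_pred):
    """Binary classification metrics from gold and predicted labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

    In a k-fold setting, these would be computed per fold and then averaged to give the mean and standard deviation reported in the abstract.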

    Basal Glucose Control in Type 1 Diabetes using Deep Reinforcement Learning: An In Silico Validation

    People with Type 1 diabetes (T1D) require regular exogenous infusion of insulin to maintain their blood glucose concentration within a therapeutically adequate target range. Although the artificial pancreas and continuous glucose monitoring have proven effective in achieving closed-loop control, significant challenges remain due to the high complexity of glucose dynamics and limitations of the technology. In this work, we propose a novel deep reinforcement learning model for single-hormone (insulin) and dual-hormone (insulin and glucagon) delivery. In particular, the delivery strategies are developed by double Q-learning with dilated recurrent neural networks. For design and testing, the FDA-accepted UVA/Padova Type 1 simulator was employed. First, we performed long-term generalized training to obtain a population model; this model was then personalized with a small amount of subject-specific data. In silico results show that the single- and dual-hormone delivery strategies achieve good glucose control compared with a standard basal-bolus therapy with low-glucose insulin suspension. Specifically, in the adult cohort (n=10), percentage time in the target range [70, 180] mg/dL improved from 77.6% to 80.9% with single-hormone control, and to 85.6% with dual-hormone control. In the adolescent cohort (n=10), percentage time in target range improved from 55.5% to 65.9% with single-hormone control, and to 78.8% with dual-hormone control. In all scenarios, a significant decrease in hypoglycemia was observed. These results show that deep reinforcement learning is a viable approach for closed-loop glucose control in T1D.

    Machine Learning for Physiological Time Series: Representing and Controlling Blood Glucose for Diabetes Management

    Type 1 diabetes is a chronic health condition affecting over one million patients in the US, in which blood glucose (sugar) levels are not well regulated by the body. Researchers have sought to use physiological data (e.g., blood glucose measurements) collected from wearable devices to manage this disease, either by forecasting future blood glucose levels for predictive alarms, or by automating insulin delivery for blood glucose management. However, the application of machine learning (ML) to these data is hampered by latent context, limited supervision, and complex temporal dependencies. To address these challenges, we develop and evaluate novel ML approaches in the context of i) representing physiological time series, particularly for forecasting blood glucose values, and ii) decision making for when and how much insulin to deliver. When learning representations, we leverage the structure of the physiological sequence as an implicit information stream. In particular, we a) incorporate latent context when predicting adverse events by jointly modeling patterns in the data and the context those patterns occurred under, b) propose novel types of self-supervision to handle limited data, and c) propose deep models that predict functions underlying trajectories to encode temporal dependencies. In the context of decision making, we use reinforcement learning (RL) for blood glucose management. Through the use of an FDA-approved simulator of the glucoregulatory system, we achieve strong performance using deep RL with and without human intervention. However, the success of RL typically depends on realistic simulators or experimental real-world deployment, neither of which is currently practical for problems in health. Thus, we propose techniques for leveraging imperfect simulators and observational data. Beyond diabetes, representing and managing physiological signals is an important problem. By adapting techniques to better leverage the structure inherent in the data, we can help overcome these challenges.
    PhD thesis, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163134/1/ifox_1.pd

    Ahead of time prediction of nocturnal hypoglycemic events from Continuous Glucose Monitoring data in people with type I diabetes by Machine Learning-based approaches

    Type 1 diabetes can lead to short- and long-term complications when blood glucose is not kept within safe limits, the so-called euglycemic range. Continuous Glucose Monitoring (CGM) systems allow people with diabetes to manage their glycaemia better and encourage the development of prediction algorithms. This thesis work aimed to develop a method to predict nocturnal hypoglycemic events using only the CGM data collected during the previous day, exploiting machine-learning approaches.
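    A minimal sketch of the kind of daytime feature extraction such a prediction pipeline might start from; the sampling interval, window length, and feature set here are illustrative assumptions, not the thesis method.

```python
def cgm_day_features(readings):
    """Summarize one day of CGM readings (mg/dL, assumed every 5 minutes)
    into features a nocturnal-hypoglycemia classifier could consume."""
    mean = sum(readings) / len(readings)
    low = min(readings)                   # daytime nadir
    last_hour = readings[-12:]            # 12 samples of 5 min ~= 1 hour
    slope = (last_hour[-1] - last_hour[0]) / len(last_hour)
    return [mean, low, slope]
```

    A downstream classifier (e.g., logistic regression or a tree ensemble) would then map such feature vectors to a probability of a hypoglycemic event during the coming night.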

    Multi-modal Predictive Models of Diabetes Progression

    With the increasing availability of wearable devices, continuous monitoring of individuals' physiological and behavioral patterns has become significantly more accessible. Access to these continuous signals offers an unprecedented opportunity for studying complex diseases and health conditions such as type 2 diabetes (T2D). T2D is a common chronic disease whose roots and progression patterns are not fully understood. Predicting the progression of T2D can inform timely and more effective interventions to prevent or manage the disease. In this study, we used a dataset of 63 patients with T2D that includes data from two types of wearable devices worn by the patients: continuous glucose monitoring (CGM) devices and activity trackers (ActiGraphs). Using this dataset, we created a model for predicting the levels of four major T2D-related biomarkers after a one-year period. We developed a wide and deep neural network using demographic information, lab tests, and wearable sensor data. The deep part of our method is based on the long short-term memory (LSTM) structure to process the time-series data collected by the wearables. In predicting the four biomarkers, we obtained root-mean-square errors of 1.67% for HbA1c, 6.22 mg/dl for HDL cholesterol, 10.46 mg/dl for LDL cholesterol, and 18.38 mg/dl for triglycerides. Compared with existing models for studying T2D, our model offers a more comprehensive tool for combining the large variety of factors that contribute to the disease.
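    The wide-and-deep idea can be sketched as two parallel paths whose outputs are summed. In this toy forward pass the "deep" sequence path is replaced by a simple mean summary as a stand-in for the paper's LSTM, and all weights are hypothetical.

```python
def wide_and_deep_forward(tabular, w_wide, sequence, w_deep, bias=0.0):
    """Toy forward pass: a linear 'wide' path over tabular features
    (demographics, labs) plus a 'deep' path that summarizes the
    wearable time series before a final weighted sum."""
    wide = sum(x * w for x, w in zip(tabular, w_wide))
    seq_summary = sum(sequence) / len(sequence)   # LSTM stand-in
    return wide + seq_summary * w_deep + bias
```

    The design point is that memorization-friendly tabular features and a learned sequence representation contribute additively to the same prediction, so each biomarker's output head can draw on both.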