
    Artificial neural network algorithm for online glucose prediction from continuous glucose monitoring.

    Background and Aims: Continuous glucose monitoring (CGM) devices could be useful for real-time management of diabetes therapy. In particular, CGM information could be used in real time to predict future glucose levels in order to prevent hypo-/hyperglycemic events. This article proposes a new online method for predicting future glucose concentration levels from CGM data. Methods: The predictor is implemented with an artificial neural network model (NNM). The inputs of the NNM are the values provided by the CGM sensor during the preceding 20 min, while the output is the prediction of glucose concentration at the chosen prediction horizon (PH) time. The method performance is assessed using datasets from two different CGM systems (nine subjects using the Medtronic [Northridge, CA] Guardian® and six subjects using the Abbott [Abbott Park, IL] Navigator®). Three different PHs are used: 15, 30, and 45 min. The NNM accuracy has been estimated by using the root mean square error (RMSE) and prediction delay. Results: The RMSE is around 10, 18, and 27 mg/dL for 15, 30, and 45 min of PH, respectively. The prediction delay is around 4, 9, and 14 min for upward trends and 5, 15, and 26 min for downward trends, respectively. A comparison with a previously published technique, based on an autoregressive model (ARM), has been performed. The comparison shows that the proposed NNM is more accurate than the ARM, with no significant deterioration in the prediction delay.
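
    As a rough illustration of this kind of neural-network predictor (not the authors' exact architecture), the sketch below assumes 5-minute CGM sampling, so the preceding 20 minutes correspond to 4 input samples; the hidden-layer size, training settings, and synthetic data are placeholders.

    # Minimal sketch of a feed-forward glucose predictor, assuming 5-min CGM
    # sampling (4 samples cover the preceding 20 min) and a 30-min horizon.
    # Layer sizes, optimizer settings, and the synthetic trace are illustrative only.
    import torch
    import torch.nn as nn

    N_PAST = 4          # 20 min of history at 5-min sampling (assumption)
    PH_STEPS = 6        # 30-min prediction horizon = 6 steps ahead (assumption)

    class GlucoseMLP(nn.Module):
        def __init__(self, n_past=N_PAST, hidden=10):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_past, hidden),
                nn.Tanh(),
                nn.Linear(hidden, 1),   # predicted glucose at t + PH (mg/dL)
            )

        def forward(self, x):
            return self.net(x)

    def make_windows(cgm, n_past=N_PAST, ph=PH_STEPS):
        """Slice a 1-D CGM series (mg/dL) into (history, target) pairs."""
        X, y = [], []
        for t in range(n_past, len(cgm) - ph):
            X.append(cgm[t - n_past:t])
            y.append(cgm[t + ph])
        return (torch.tensor(X, dtype=torch.float32),
                torch.tensor(y, dtype=torch.float32))

    if __name__ == "__main__":
        cgm = 120 + 40 * torch.sin(torch.linspace(0, 12, 500))  # synthetic mg/dL trace
        X, y = make_windows(cgm.tolist())
        model = GlucoseMLP()
        opt = torch.optim.Adam(model.parameters(), lr=1e-2)
        for epoch in range(200):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(X).squeeze(-1), y)
            loss.backward()
            opt.step()
        rmse = torch.sqrt(nn.functional.mse_loss(model(X).squeeze(-1), y))
        print(f"in-sample RMSE: {rmse.item():.1f} mg/dL")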

    “Smart” Continuous Glucose Monitoring Sensors: On-Line Signal Processing Issues

    The availability of continuous glucose monitoring (CGM) sensors allows development of new strategies for the treatment of diabetes. In particular, from an on-line perspective, CGM sensors can become “smart” by providing them with algorithms able to generate alerts when glucose concentration is predicted to exceed the normal range thresholds. To do so, at least four important aspects have to be considered and dealt with on-line. First, the CGM data must be accurately calibrated. Then, CGM data need to be filtered in order to enhance their signal-to-noise ratio (SNR). Thirdly, predictions of future glucose concentration should be generated with suitable modeling methodologies. Finally, generation of alerts should be done by minimizing the risk of detecting false and missing true events. For these four challenges, several techniques, with various degrees of sophistication, have been proposed in the literature and are critically reviewed in this paper.
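
    The four on-line steps above can be pictured with a deliberately simple pipeline; the linear calibration, exponential smoothing filter, linear-trend extrapolation, and fixed 70-180 mg/dL thresholds below are generic placeholders, not the specific techniques reviewed in the paper.

    # Illustrative pipeline for the four on-line processing steps:
    # calibration, denoising, short-term prediction, and alert generation.
    import numpy as np

    def calibrate(raw_signal, slope, offset):
        """Map the raw sensor signal to mg/dL via a linear calibration (assumed)."""
        return slope * raw_signal + offset

    def denoise(cgm, alpha=0.3):
        """First-order exponential smoothing to improve the SNR."""
        out = np.empty_like(cgm, dtype=float)
        out[0] = cgm[0]
        for k in range(1, len(cgm)):
            out[k] = alpha * cgm[k] + (1 - alpha) * out[k - 1]
        return out

    def predict_ahead(cgm, steps_ahead, sample_min=5.0):
        """Extrapolate a linear trend fitted on the last six samples."""
        recent = cgm[-6:]
        t = np.arange(len(recent)) * sample_min
        slope, intercept = np.polyfit(t, recent, 1)
        return intercept + slope * (t[-1] + steps_ahead * sample_min)

    def alert(predicted, hypo=70.0, hyper=180.0):
        """Raise an alert when the prediction leaves the 70-180 mg/dL range."""
        if predicted < hypo:
            return "hypoglycemia alert"
        if predicted > hyper:
            return "hyperglycemia alert"
        return "no alert"

    raw = np.array([1.1, 1.2, 1.15, 1.1, 1.0, 0.9, 0.8, 0.75])  # arbitrary raw units
    glucose = denoise(calibrate(raw, slope=100.0, offset=10.0))
    print(alert(predict_ahead(glucose, steps_ahead=6)))          # 30 min ahead at 5-min sampling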

    Blood Glucose Prediction Algorithms for Hypoglycemic and/or Hyperglycemic Alerts

    Continuous glucose monitoring (CGM) sensors, able to monitor blood glucose concentration continuously (i.e., with a reading every 1-5 min) for several days (up to 7 consecutive days), have entered clinical research. The availability of CGM sensors allows development of new strategies for the treatment of diabetes. CGM sensors are of two types, noninvasive (NI-CGM) or minimally invasive (MI-CGM). Irrespective of the type, CGM sensors can become smart by providing them with algorithms able to generate alerts, say, 20-30 min ahead of time, when glucose concentration is predicted to exceed the normal range thresholds (70-180 mg/dL). Such alerts would allow diabetes patients to take precautionary measures to prevent hypo/hyperglycemia. In this paper we review blood glucose prediction algorithms such as the first-order autoregressive model (AR(1)), Kalman filtering, and feed-forward neural networks. All these algorithms have demonstrated that blood glucose can be predicted ahead in time.
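
    As a minimal sketch of one of the reviewed families, the snippet below fits a first-order autoregressive model AR(1) by least squares and iterates it forward to the prediction horizon; the sampling period and the data are assumptions for illustration.

    # AR(1) glucose predictor: fit g[k] = a*g[k-1] + e[k] on mean-removed data,
    # then iterate the model steps_ahead samples into the future.
    import numpy as np

    def fit_ar1(glucose):
        """Least-squares estimate of the AR(1) coefficient and the series mean."""
        g = np.asarray(glucose, dtype=float)
        mu = g.mean()
        x, y = g[:-1] - mu, g[1:] - mu
        a = float(np.dot(x, y) / np.dot(x, x))
        return a, mu

    def predict_ar1(glucose, steps_ahead):
        """Iterate the fitted AR(1) model to the chosen prediction horizon."""
        a, mu = fit_ar1(glucose)
        pred = glucose[-1] - mu
        for _ in range(steps_ahead):
            pred = a * pred
        return pred + mu

    cgm = [150, 148, 144, 139, 133, 128, 122, 117]   # mg/dL, 5-min sampling (assumed)
    pred_30min = predict_ar1(cgm, steps_ahead=6)
    print("30-min-ahead prediction:", round(pred_30min, 1), "mg/dL")
    print("outside 70-180 mg/dL range:", pred_30min < 70 or pred_30min > 180)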

    Continuous glucose monitoring sensors: Past, present and future algorithmic challenges

    Continuous glucose monitoring (CGM) sensors are portable devices that allow measuring and visualizing the glucose concentration in real time, almost continuously, for several days, and are provided with hypo/hyperglycemic alerts and glucose trend information. CGM sensors have revolutionized Type 1 diabetes (T1D) management, improving glucose control when used adjunctively to self-monitoring blood glucose systems. Furthermore, CGM devices have stimulated the development of applications that were impossible to create without a continuous-time glucose signal, e.g., real-time predictive alerts of hypo/hyperglycemic episodes based on the prediction of future glucose concentration, automatic basal insulin attenuation methods for hypoglycemia prevention, and the artificial pancreas. However, CGM sensors’ lack of accuracy and reliability has limited their usability in clinical practice, calling upon the academic community to develop suitable signal processing methods to improve CGM performance. The aim of this paper is to review the past and present algorithmic challenges of CGM sensors, to show how they have been tackled by our research group, and to identify possible future ones.

    A deep learning approach to diabetic blood glucose prediction

    We consider the question of 30-minute prediction of blood glucose levels measured by continuous glucose monitoring devices, using clinical data. While most studies of this nature deal with one patient at a time, we take a certain percentage of patients in the data set as training data, and test on the remainder of the patients; i.e., the machine need not re-calibrate on the new patients in the data set. We demonstrate how deep learning can outperform shallow networks in this example. One novelty is to demonstrate how a parsimonious deep representation can be constructed using domain knowledge.
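
    The population-level evaluation described here, training on a fraction of the patients and testing on the remaining, unseen patients, can be sketched as a patient-level split; the data layout (a dictionary of per-patient CGM arrays) and the 80/20 fraction are assumptions for illustration, not taken from the paper.

    # Patient-level train/test split: entire patients are held out, so the
    # model is never re-calibrated on the test subjects.
    import numpy as np

    def split_by_patient(patients, train_fraction=0.8, seed=0):
        """Split patient IDs (not individual samples) into train and test sets."""
        rng = np.random.default_rng(seed)
        ids = list(patients)
        rng.shuffle(ids)
        n_train = int(round(train_fraction * len(ids)))
        return ids[:n_train], ids[n_train:]

    # patients: {patient_id: 1-D array of CGM readings in mg/dL} (assumed layout)
    patients = {f"p{i:02d}": np.random.default_rng(i).normal(140, 30, 300)
                for i in range(10)}
    train_ids, test_ids = split_by_patient(patients)
    print("train on:", train_ids)
    print("test  on:", test_ids)   # the model is never fitted on these subjects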

    Predicting Blood Glucose with an LSTM and Bi-LSTM Based Deep Neural Network

    A deep learning network was used to predict future blood glucose levels, as this can permit diabetes patients to take action before imminent hyperglycaemia and hypoglycaemia. A sequential model with one long short-term memory (LSTM) layer, one bidirectional LSTM layer and several fully connected layers was used to predict blood glucose levels for different prediction horizons. The method was trained and tested on 26 datasets from 20 real patients. The proposed network outperforms the baseline methods in terms of all evaluation criteria. Comment: 5 pages; submitted to the 2018 14th Symposium on Neural Networks and Applications (NEUREL).
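
    A rough PyTorch sketch of this architecture class (one LSTM layer, one bidirectional LSTM layer, then fully connected layers) is given below; hidden sizes, sequence length, and horizon are placeholders rather than the paper's actual configuration.

    # LSTM + bidirectional-LSTM regressor: maps a window of past CGM readings
    # to the glucose value at the chosen prediction horizon.
    import torch
    import torch.nn as nn

    class LSTMBiLSTMPredictor(nn.Module):
        def __init__(self, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
            self.bilstm = nn.LSTM(input_size=hidden, hidden_size=hidden,
                                  batch_first=True, bidirectional=True)
            self.head = nn.Sequential(
                nn.Linear(2 * hidden, 32),
                nn.ReLU(),
                nn.Linear(32, 1),          # glucose at the chosen horizon (mg/dL)
            )

        def forward(self, x):              # x: (batch, seq_len, 1) past CGM values
            out, _ = self.lstm(x)
            out, _ = self.bilstm(out)
            return self.head(out[:, -1, :])  # use the last time step's features

    # Example forward pass on 8 sequences of 12 past readings
    # (60 min at 5-min sampling, assumed).
    model = LSTMBiLSTMPredictor()
    x = torch.randn(8, 12, 1) * 20 + 140
    print(model(x).shape)                   # torch.Size([8, 1])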

    Modeling and Prediction in Diabetes Physiology

    Diabetes is a group of metabolic diseases characterized by the inability of the organism to autonomously regulate blood glucose levels. It requires continuing medical care to prevent acute complications and to reduce the risk of long-term complications. Inadequate glucose control is associated with damage, dysfunction and failure of various organs. The management of the disease is nontrivial and demanding. With today’s standards of diabetes care, good glucose regulation needs constant attention and decision-making by the individuals with diabetes. Empowering the patients with a decision support system would therefore improve their quality of life without adding burdens or replacing human expertise. This thesis investigates the use of data-driven techniques for the purpose of glucose metabolism modeling and short-term blood glucose prediction in Type I Diabetes Mellitus (T1DM). The goal was to use models and predictors in an advisory tool able to produce personalized short-term blood glucose predictions and on-the-spot decision making concerning the most adequate choice of insulin delivery, meal intake and exercise, to help diabetic subjects maintain glycemia as close to normal as possible. The approaches taken to describe the glucose metabolism were discrete-time and continuous-time models in input-output form and state-space form, while the blood glucose short-term predictors, i.e., up to 120 minutes ahead, used ARX-, ARMAX- and subspace-based prediction methods.
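
    As an illustration of the ARX family mentioned at the end of the abstract, the sketch below regresses future glucose on past glucose and insulin values via ordinary least squares; the model orders, the zero-future-insulin assumption, and the data are illustrative only.

    # ARX predictor: g[k] = sum_i a_i*g[k-i] + sum_j b_j*u[k-j], fitted by
    # least squares, then iterated one step at a time to the horizon.
    import numpy as np

    def fit_arx(glucose, insulin, na=3, nb=2):
        """Fit the ARX coefficients by ordinary least squares."""
        g, u = np.asarray(glucose, float), np.asarray(insulin, float)
        start = max(na, nb)
        rows, targets = [], []
        for k in range(start, len(g)):
            rows.append(np.concatenate([g[k - na:k][::-1], u[k - nb:k][::-1]]))
            targets.append(g[k])
        theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
        return theta, na, nb

    def predict_arx(theta, na, nb, glucose, insulin, steps_ahead):
        """Iterate the one-step ARX model forward, holding future insulin at zero."""
        g, u = list(glucose), list(insulin)
        for _ in range(steps_ahead):
            reg = np.concatenate([np.array(g[-na:])[::-1], np.array(u[-nb:])[::-1]])
            g.append(float(reg @ theta))
            u.append(0.0)                   # assumption: no future insulin input
        return g[len(glucose):]

    glucose = [160, 158, 154, 150, 147, 143, 140, 138, 136, 133]  # mg/dL, 5-min grid
    insulin = [0, 0, 2, 0, 0, 0, 0, 0, 0, 0]                      # units (illustrative)
    theta, na, nb = fit_arx(glucose, insulin)
    print(predict_arx(theta, na, nb, glucose, insulin, steps_ahead=6))  # next 30 min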

    Prediction-Coherent LSTM-based Recurrent Neural Network for Safer Glucose Predictions in Diabetic People

    In the context of time-series forecasting, we propose an LSTM-based recurrent neural network architecture and loss function that enhance the stability of the predictions. In particular, the loss function penalizes the model not only on the prediction error (mean squared error) but also on the predicted variation error. We apply this idea to the prediction of future glucose values in diabetes, which is a delicate task, as unstable predictions can leave the patient in doubt and lead him/her to take the wrong action, threatening his/her life. The study is conducted on people with type 1 and type 2 diabetes, with a focus on predictions made 30 minutes ahead of time. First, we confirm the superiority, in the context of glucose prediction, of the LSTM model by comparing it to other state-of-the-art models (Extreme Learning Machine, Gaussian Process regressor, Support Vector Regressor). Then, we show the importance of making stable predictions by smoothing the predictions made by the models, resulting in an overall improvement of the clinical acceptability of the models at the cost of a slight loss in prediction accuracy. Finally, we show that the proposed approach outperforms all baseline results. More precisely, it trades a loss of 4.3% in prediction accuracy for an improvement of 27.1% in clinical acceptability. When compared to the moving-average post-processing method, we show that the trade-off is more efficient with our approach.
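
    One plausible reading of the two-term loss described above is sketched below: the usual mean squared error on the predicted glucose plus a penalty on the error of the predicted variation, taken here as successive differences along the horizon. The exact formulation and the weighting coefficient are assumptions, not the paper's definition.

    # Two-term loss: MSE on values plus MSE on first differences of the
    # predicted trajectory; `coherence_weight` is an assumed hyperparameter.
    import torch

    def prediction_coherent_loss(pred, target, coherence_weight=0.5):
        """pred, target: tensors of shape (batch, horizon), glucose in mg/dL."""
        mse = torch.mean((pred - target) ** 2)
        pred_var = pred[:, 1:] - pred[:, :-1]        # predicted glucose variations
        true_var = target[:, 1:] - target[:, :-1]    # observed glucose variations
        variation_mse = torch.mean((pred_var - true_var) ** 2)
        return mse + coherence_weight * variation_mse

    # Example: a smooth and a jittery prediction of the same 30-min trajectory.
    target = torch.tensor([[120., 118., 116., 114., 112., 110.]])
    smooth = torch.tensor([[121., 119., 117., 115., 113., 111.]])
    jitter = torch.tensor([[121., 115., 121., 109., 117., 105.]])
    print(prediction_coherent_loss(smooth, target).item())   # small
    print(prediction_coherent_loss(jitter, target).item())   # much larger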

    Utility of big data in predicting short-term blood glucose levels in type 1 diabetes mellitus through machine learning techniques

    Machine learning techniques combined with wearable electronics can deliver accurate short-term blood glucose level prediction models. These models can learn personalized glucose–insulin dynamics based on the sensor data collected by monitoring several aspects of the physiological condition and daily activity of an individual. Until now, the prevalent approach for developing data-driven prediction models was to collect as much data as possible to help physicians and patients optimally adjust therapy. The objective of this work was to investigate the minimum data variety, volume, and velocity required to create accurate person-centric short-term prediction models. We developed a series of these models using different machine learning time series forecasting techniques suitable for execution within a wearable processor. We conducted an extensive passive patient monitoring study in real-world conditions to build an appropriate data set. The study involved a subset of type 1 diabetic subjects wearing a flash glucose monitoring system. We comparatively and quantitatively evaluated the performance of the developed data-driven prediction models and the corresponding machine learning techniques. Our results indicate that very accurate short-term prediction can be achieved by only monitoring interstitial glucose data over a very short time period and using a low sampling frequency. The models developed can predict glucose levels within a 15-min horizon with an average error as low as 15.43 mg/dL using only 24 historic values collected within a period of six hours, and by increasing the sampling frequency to include 72 values, the average error is reduced to 10.15 mg/dL. Our prediction models are suitable for execution within a wearable device, imposing minimal hardware requirements while simultaneously achieving very high prediction accuracy. The authors would like to thank the Endocrinology Department of the Morales Meseguer and Virgen de la Arrixaca hospitals of the city of Murcia (Spain). This work was sponsored by the Spanish Ministry of Economy and Competitiveness through the PERSEIDES (ref. TIN2017-86885-R) and CHIST-ERA (ref. PCIN-2016-010) projects; by MINECO grant BES-2015-071956; and by the European Commission through the H2020-ENTROPY-649849 EU Project.
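
    The history windows implied by these numbers, six hours of interstitial glucose represented by either 24 values (15-min sampling) or 72 values (5-min sampling), can be built as sketched below; the resampling scheme and the synthetic trace are assumptions for illustration.

    # Build a six-hour feature window at two different sampling frequencies
    # from a 5-min CGM stream.
    import numpy as np

    def history_features(cgm_5min, n_values):
        """Take the most recent six hours and subsample it to n_values features."""
        window = np.asarray(cgm_5min[-72:], dtype=float)     # 6 h at 5-min sampling
        idx = np.linspace(0, len(window) - 1, n_values).round().astype(int)
        return window[idx]

    cgm_5min = list(140 + 30 * np.sin(np.linspace(0, 3, 100)))  # synthetic mg/dL trace
    x24 = history_features(cgm_5min, n_values=24)   # 15-min-sampling variant
    x72 = history_features(cgm_5min, n_values=72)   # full 5-min variant
    print(x24.shape, x72.shape)                     # (24,) (72,)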

    Diabetes Mellitus Glucose Prediction by Linear and Bayesian Ensemble Modeling

    Diabetes Mellitus is a chronic disease of impaired blood glucose control due to degraded or absent insulin production or utilization by the body. For the affected, this in many cases implies relying on insulin injections and blood glucose measurements in order to keep the blood glucose level within acceptable limits. The risks of developing short- and long-term complications, due to both too high and too low blood glucose concentrations, are severalfold, and, generally, the glucose dynamics are not easy for the affected individual to fully comprehend, resulting in poor glucose control. To reduce the burden this implies for the patient and society, in terms of physiological and monetary costs, different technical solutions, based on closed-loop or semi-closed-loop blood glucose control, have been suggested. To this end, this thesis investigates simplified linear and merged models of glucose dynamics for the purpose of short-term prediction, developed within the EU FP7 DIAdvisor project. These models could, e.g., be used in a decision support system to alert the user of future low and high glucose levels, and, when implemented in a control framework, to suggest proactive actions. The simplified models were evaluated on 47 patient data records from the first DIAdvisor trial. Qualitatively correct physiological responses were imposed, and model-based prediction, up to two hours ahead, and specifically for low blood glucose detection, was evaluated. The glucose-raising and glucose-lowering effects of meals and insulin were estimated, together with the clinically relevant carbohydrate-to-insulin ratio. The model was further expanded to include the blood-to-interstitial lag and tested on one patient data set. Finally, a novel algorithm for merging multiple prediction models was developed and validated on both artificial data and 12 datasets from the second DIAdvisor trial.
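
    As a generic illustration of merging several short-term predictors into one output, the sketch below weights each model by the inverse of its recent mean squared error; this is not the specific merging algorithm developed in the thesis.

    # Inverse-error weighting of multiple glucose predictions.
    import numpy as np

    def merge_predictions(predictions, recent_errors, eps=1e-6):
        """Combine model predictions with weights inversely proportional to
        each model's recent mean squared error."""
        preds = np.asarray(predictions, dtype=float)
        mse = np.asarray(recent_errors, dtype=float)
        weights = 1.0 / (mse + eps)
        weights /= weights.sum()
        return float(np.dot(weights, preds)), weights

    # Three hypothetical models predicting glucose 60 min ahead (mg/dL), with
    # their mean squared errors over the last hour of data.
    merged, w = merge_predictions(predictions=[95.0, 110.0, 102.0],
                                  recent_errors=[400.0, 150.0, 250.0])
    print("merged prediction:", round(merged, 1), "mg/dL; weights:", np.round(w, 2))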