3,424 research outputs found

    Deep learning methods for improving diabetes management tools

    Diabetes is a chronic disease characterised by a lack of regulation of blood glucose concentration in the body, and thus elevated blood glucose levels. Consequently, affected individuals can experience extreme variations in their blood glucose levels, even with exogenous insulin treatment. This has associated debilitating short-term and long-term complications that affect quality of life and can, in the worst instance, result in death. The development of technologies such as glucose meters and, more recently, continuous glucose monitors has offered the opportunity to develop systems towards improving clinical outcomes for individuals with diabetes through better glucose control. Data-driven methods can enable the development of the next generation of diabetes management tools focused on i) informativeness, ii) safety, and iii) easing the burden of management. This thesis aims to propose deep learning methods for improving the functionality of the variety of diabetes technology tools available for self-management. In pursuit of these goals, a number of deep learning methods are developed and geared towards improving the functionality of existing diabetes technology tools, generally classified as i) self-monitoring of blood glucose, ii) decision support systems, and iii) the artificial pancreas. These frameworks are primarily based on the prediction of glucose concentration levels. The first deep learning framework we propose is geared towards improving the artificial pancreas and decision support systems that rely on continuous glucose monitors. We first propose a convolutional recurrent neural network (CRNN) to forecast glucose concentration levels over both short-term and long-term horizons. The predictive accuracy of this model outperforms that of traditional data-driven approaches.
The feasibility of this proposed approach for ambulatory use is then demonstrated with the implementation of a decision support system on a smartphone application. We further extend CRNNs to the multitask setting to explore the effectiveness of leveraging population data for developing personalised models with limited individual data. We show that this enables earlier deployment of applications without significantly compromising performance and safety. The next challenge focuses on easing the burden of management by proposing a deep learning framework for automatic meal detection and estimation. The deep learning framework presented employs multitask learning and quantile regression to safely detect and estimate the size of unannounced meals with high precision. We also demonstrate that this facilitates automated insulin delivery for the artificial pancreas system, improving glycaemic control without significantly increasing the risk or incidence of hypoglycaemia. Finally, the focus shifts to improving self-monitoring of blood glucose (SMBG) with glucose meters. We propose an uncertainty-aware deep learning model based on a joint Gaussian Process and deep learning framework to provide end users with more dynamic and continuous information similar to continuous glucose sensors. Consequently, we show significant improvement in hyperglycaemia detection compared to the standard SMBG. We hope that through these methods, we can achieve a more equitable improvement in usability and clinical outcomes for individuals with diabetes.
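The quantile-regression idea mentioned above can be illustrated with the pinball loss, which penalises under- and over-estimation asymmetrically; a low quantile biases a meal-size estimator towards under-estimation, the safer direction for insulin dosing. A minimal sketch in plain Python — the function name and the carbohydrate values are illustrative, not taken from the thesis:

```python
def pinball_loss(y_true, y_pred, tau):
    """Quantile (pinball) loss for quantile tau in (0, 1):
    under-prediction (y_true > y_pred) is weighted by tau,
    over-prediction by (1 - tau)."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        diff = t - p
        total += max(tau * diff, (tau - 1.0) * diff)
    return total / len(y_true)

# Hypothetical carbohydrate estimates (grams) for detected meals.
true_carbs = [40.0, 60.0, 25.0]
pred_carbs = [35.0, 55.0, 25.0]   # consistently under-predicting

# tau = 0.1 barely penalises this under-prediction; tau = 0.9
# penalises it heavily, pushing the estimator in opposite directions.
loss_low = pinball_loss(true_carbs, pred_carbs, tau=0.1)
loss_high = pinball_loss(true_carbs, pred_carbs, tau=0.9)
```

Training the same network head against several values of tau yields a prediction interval for meal size rather than a single point estimate.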

    A deep learning approach to diabetic blood glucose prediction

    We consider the question of 30-minute prediction of blood glucose levels measured by continuous glucose monitoring devices, using clinical data. While most studies of this nature deal with one patient at a time, we take a certain percentage of patients in the data set as training data and test on the remaining patients; i.e., the machine need not re-calibrate on new patients in the data set. We demonstrate how deep learning can outperform shallow networks in this example. One novelty is to demonstrate how a parsimonious deep representation can be constructed using domain knowledge.
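The evaluation protocol described above — train on a subset of patients, test on the held-out rest, with no per-patient re-calibration — amounts to a patient-wise split. A dependency-free sketch, where the record layout and helper name are illustrative, not from the paper:

```python
import random

def patient_wise_split(records, train_frac=0.7, seed=0):
    """Split CGM records by patient ID so that no patient appears in
    both the training and the test set (no re-calibration on new
    patients)."""
    patients = sorted({r["patient_id"] for r in records})
    rng = random.Random(seed)
    rng.shuffle(patients)
    n_train = int(len(patients) * train_frac)
    train_ids = set(patients[:n_train])
    train = [r for r in records if r["patient_id"] in train_ids]
    test = [r for r in records if r["patient_id"] not in train_ids]
    return train, test

# Toy records: 10 patients, 3 glucose readings (mg/dL) each.
records = [{"patient_id": p, "glucose": g}
           for p in range(10) for g in (90, 110, 130)]
train, test = patient_wise_split(records)
```

Splitting by patient rather than by sample is what makes the reported accuracy a statement about generalisation to unseen individuals.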

    A machine-learning approach to predict postprandial hypoglycemia

    Background For an effective artificial pancreas (AP) system and improved therapeutic intervention with continuous glucose monitoring (CGM), accurately predicting the occurrence of hypoglycemia is very important. While there have been many studies reporting successful algorithms for predicting nocturnal hypoglycemia, predicting postprandial hypoglycemia remains a challenge due to the extreme glucose fluctuations that occur around mealtimes. The goal of this study is to evaluate the feasibility of an easy-to-use, computationally efficient machine-learning algorithm to predict postprandial hypoglycemia with a unique feature set. Methods We use retrospective CGM datasets of 104 people who had experienced at least one hypoglycemia alert value during a three-day CGM session. The algorithms were developed based on four machine learning models with a unique data-driven feature set: a random forest (RF), a support vector machine using a linear function or a radial basis function, a K-nearest neighbor, and a logistic regression. With 5-fold cross-subject validation, the average performance of each model was calculated to compare and contrast their individual performance. The area under the receiver operating characteristic curve (AUC) and the F1 score were used as the main criteria for evaluating performance. Results In predicting a hypoglycemia alert value with a 30-min prediction horizon, the RF model showed the best performance, with an average AUC of 0.966, an average sensitivity of 89.6%, an average specificity of 91.3%, and an average F1 score of 0.543. In addition, the RF showed better predictive performance for postprandial hypoglycemic events than the other models.
Conclusion We showed that machine-learning algorithms have potential in predicting postprandial hypoglycemia, and that the RF model is a strong candidate for the further development of postprandial hypoglycemia prediction algorithms to advance CGM and AP technology.
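The classifiers above operate on features derived from a CGM window; the paper's exact feature set is not reproduced here, but the kind of input involved can be sketched with a few common summary statistics (mean level, latest reading, rate of change, overall trend) over a 5-minute-sampled window:

```python
def cgm_features(window, sample_min=5.0):
    """Illustrative summary features from a CGM window (mg/dL) used
    as inputs to a hypoglycemia classifier. Not the feature set of
    the cited study."""
    n = len(window)
    mean = sum(window) / n
    # Instantaneous rate of change from the last two samples
    # (mg/dL per minute).
    roc = (window[-1] - window[-2]) / sample_min
    # Average trend across the whole window.
    trend = (window[-1] - window[0]) / (sample_min * (n - 1))
    return {"mean": mean, "last": window[-1], "roc": roc, "trend": trend}

window = [140, 132, 120, 110, 98, 90]   # glucose falling post-meal
feats = cgm_features(window)
```

A falling trend combined with a low latest reading is exactly the pattern a 30-minute-horizon hypoglycemia predictor must flag.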

    A stacked long short-term memory approach for predictive blood glucose monitoring in women with gestational diabetes mellitus

    Gestational diabetes mellitus (GDM) is a subtype of diabetes that develops during pregnancy. Managing blood glucose (BG) within the healthy physiological range can reduce clinical complications for women with gestational diabetes. The objectives of this study are to (1) develop benchmark glucose prediction models with long short-term memory (LSTM) recurrent neural network models using time-series data collected from the GDm-Health platform, (2) compare the prediction accuracy with published results, and (3) suggest an optimized clinical review schedule with the potential to reduce the overall number of blood tests for mothers with stable and within-range glucose measurements. A total of 190,396 BG readings from 1110 patients were used for model development, validation and testing under three different prediction schemes: 7 days of BG readings to predict the next 7 or 14 days, and 14 days to predict the next 14 days. Our results show that the optimized BG schedule based on a 7-day observational window to predict the BG of the next 14 days achieved root mean square errors (RMSE) of 0.958 ± 0.007, 0.876 ± 0.003, 0.898 ± 0.003, 0.622 ± 0.003, 0.814 ± 0.009 and 0.845 ± 0.005 for the after-breakfast, after-lunch, after-dinner, before-breakfast, before-lunch and before-dinner predictions, respectively. To our knowledge, this is the first machine learning study to suggest an optimized blood glucose monitoring frequency (7 days of readings to predict the next 14 days) based on the accuracy of blood glucose prediction. Moreover, the accuracy of our proposed model based on the fingerstick blood glucose test is on par with the benchmark performance of one-hour prediction models using continuous glucose monitoring (CGM) readings. In conclusion, the stacked LSTM model is a promising approach for capturing the patterns in time-series data, resulting in accurate predictions of BG levels.
Using a deep learning model with routine fingerstick glucose collection is a promising, predictable and low-cost solution for BG monitoring for women with gestational diabetes.
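The study above reports a separate RMSE for each measurement slot (before/after each meal). Grouping errors by slot before computing the RMSE can be sketched as follows; the sample values are invented for illustration:

```python
import math
from collections import defaultdict

def rmse_by_slot(samples):
    """Group (slot, y_true, y_pred) samples by measurement slot and
    compute the RMSE for each slot, mirroring per-slot reporting such
    as before-breakfast vs after-dinner accuracy."""
    grouped = defaultdict(list)
    for slot, y_true, y_pred in samples:
        grouped[slot].append((y_true - y_pred) ** 2)
    return {slot: math.sqrt(sum(errs) / len(errs))
            for slot, errs in grouped.items()}

# Toy fingerstick readings and predictions (arbitrary glucose units).
samples = [
    ("before-breakfast", 5.0, 5.5),
    ("before-breakfast", 5.2, 4.8),
    ("after-dinner", 7.1, 6.1),
]
scores = rmse_by_slot(samples)
```

Per-slot reporting matters clinically because pre-meal and post-meal readings have different dynamics and different error tolerances.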

    Continuous glucose monitoring sensors: Past, present and future algorithmic challenges

    Continuous glucose monitoring (CGM) sensors are portable devices that allow measuring and visualizing the glucose concentration in real time, almost continuously, for several days, and are provided with hypo/hyperglycemic alerts and glucose trend information. CGM sensors have revolutionized Type 1 diabetes (T1D) management, improving glucose control when used adjunctively to self-monitoring blood glucose systems. Furthermore, CGM devices have stimulated the development of applications that were impossible to create without a continuous-time glucose signal, e.g., real-time predictive alerts of hypo/hyperglycemic episodes based on the prediction of future glucose concentration, automatic basal insulin attenuation methods for hypoglycemia prevention, and the artificial pancreas. However, CGM sensors’ lack of accuracy and reliability has limited their usability in clinical practice, calling upon the academic community to develop suitable signal processing methods to improve CGM performance. The aim of this paper is to review the past and present algorithmic challenges of CGM sensors, to show how they have been tackled by our research group, and to identify possible future ones.
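One of the CGM-enabled applications named above, the real-time predictive hypoglycemia alert, can be reduced to its simplest form: extrapolate the recent glucose trend forward and alert if the projection crosses the threshold within the horizon. A deliberately minimal sketch — real systems use far more robust predictors, and the threshold and horizon here are common but illustrative choices:

```python
def predictive_hypo_alert(readings, horizon_min=30, threshold=70.0,
                          sample_min=5):
    """Linear-extrapolation hypoglycemia alert: project the latest
    CGM reading (mg/dL, sampled every sample_min minutes) forward by
    horizon_min minutes along the most recent trend, and alert if the
    projection is at or below the threshold."""
    # Trend from the last two samples (mg/dL per minute).
    rate = (readings[-1] - readings[-2]) / sample_min
    projected = readings[-1] + rate * horizon_min
    return projected <= threshold, projected

falling = [120, 112, 104, 96]      # dropping ~1.6 mg/dL per minute
stable = [110, 111, 110, 111]
alert_falling, proj_falling = predictive_hypo_alert(falling)
alert_stable, _ = predictive_hypo_alert(stable)
```

The same projected value can drive basal insulin attenuation instead of (or in addition to) an audible alert.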

    GluGAN: Generating Personalized Glucose Time Series Using Generative Adversarial Networks

    Time series data generated by continuous glucose monitoring sensors offer unparalleled opportunities for developing data-driven approaches, especially deep learning-based models, in diabetes management. Although these approaches have achieved state-of-the-art performance in various fields such as glucose prediction in type 1 diabetes (T1D), challenges remain in the acquisition of large-scale individual data for personalized modeling due to the elevated cost of clinical trials and data privacy regulations. In this work, we introduce GluGAN, a framework specifically designed for generating personalized glucose time series based on generative adversarial networks (GANs). Employing recurrent neural network (RNN) modules, the proposed framework uses a combination of unsupervised and supervised training to learn temporal dynamics in latent spaces. To assess the quality of synthetic data, we apply clinical metrics, distance scores, and discriminative and predictive scores computed by post-hoc RNNs in evaluation. Across three clinical datasets with 47 T1D subjects (including one publicly available and two proprietary datasets), GluGAN achieved better performance on all the considered metrics when compared with four baseline GAN models. The performance of data augmentation is evaluated by three machine learning-based glucose predictors. Using the training sets augmented by GluGAN significantly reduced the root mean square error for the predictors over 30- and 60-minute horizons. The results suggest that GluGAN is an effective method for generating high-quality synthetic glucose time series and has the potential to be used for evaluating the effectiveness of automated insulin delivery algorithms and as a digital twin to substitute for pre-clinical trials.
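The discriminative score mentioned above measures how well a post-hoc classifier can tell real sequences from synthetic ones: accuracy near chance (score near 0) indicates realistic synthetic data. GluGAN uses a post-hoc RNN for this; the sketch below substitutes a trivial mean-threshold classifier purely to make the scoring idea concrete, and all data are invented:

```python
def discriminative_score(real, synthetic):
    """Simplified discriminative score: a threshold classifier on the
    sequence mean tries to separate real from synthetic series.
    Returns |accuracy - 0.5|; near 0 means the classifier cannot tell
    the two apart (good synthetic data)."""
    mean = lambda s: sum(s) / len(s)
    mu_real = mean([mean(s) for s in real])
    mu_syn = mean([mean(s) for s in synthetic])
    thr = (mu_real + mu_syn) / 2.0
    real_high = mu_real >= mu_syn
    correct = sum((mean(s) >= thr) == real_high for s in real)
    correct += sum((mean(s) >= thr) != real_high for s in synthetic)
    acc = correct / (len(real) + len(synthetic))
    return abs(acc - 0.5)

real = [[100, 110, 120], [105, 115, 125]]        # toy glucose traces
good_syn = [[101, 111, 119], [104, 116, 126]]    # close to real
bad_syn = [[200, 210, 220], [205, 215, 225]]     # obviously off
```

Replacing the threshold rule with a trained RNN classifier recovers the form of evaluation used in the paper.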

    Dilated Recurrent Neural Networks for Glucose Forecasting in Type 1 Diabetes

    Diabetes is a chronic disease affecting 415 million people worldwide. People with type 1 diabetes mellitus (T1DM) need to self-administer insulin to maintain blood glucose (BG) levels in a normal range, which is usually a very challenging task. Developing a reliable glucose forecasting model would have a profound impact on diabetes management, since it could provide predictive glucose alarms or low-glucose insulin suspension for hypoglycemia minimisation. Recently, deep learning has shown great potential in healthcare and medical research for diagnosis, forecasting and decision-making. In this work, we introduce a deep learning model based on a dilated recurrent neural network (DRNN) to provide 30-min forecasts of future glucose levels. Using dilation, the DRNN model gains a much larger receptive field in terms of neurons, aiming to capture long-term dependencies. A transfer learning technique is also applied to make use of the data from multiple subjects. The proposed approach outperforms existing glucose forecasting algorithms, including autoregressive models (ARX), support vector regression (SVR) and conventional neural networks for predicting glucose (NNPG) (e.g. RMSE: NNPG, 22.9 mg/dL; SVR, 21.7 mg/dL; ARX, 20.1 mg/dL; DRNN, 18.9 mg/dL on the OhioT1DM dataset). The results suggest that dilated connections can efficiently improve glucose forecasting performance.
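The dilation mechanism above replaces the usual step-to-step recurrence (state at t from state at t-1) with a skip connection from t-d, so each layer sees further back with the same number of parameters. A toy scalar recurrence makes the indexing concrete — this is a sketch of the connectivity pattern, not the DRNN architecture of the paper:

```python
def dilated_recurrence(series, dilation=2, alpha=0.5):
    """Toy dilated recurrence over a glucose series: the state at
    step t is updated from the state at step t - dilation (a skip
    connection) rather than t - 1, widening the receptive field.
    alpha mixes the skipped state with the current input."""
    states = [0.0] * len(series)
    for t, x in enumerate(series):
        prev = states[t - dilation] if t >= dilation else 0.0
        states[t] = alpha * prev + (1.0 - alpha) * x
    return states

series = [100.0, 104.0, 108.0, 112.0]   # toy CGM readings (mg/dL)
states = dilated_recurrence(series, dilation=2)
```

Stacking layers with increasing dilations (1, 2, 4, ...) gives an exponentially growing receptive field, which is what lets the model capture long-term glucose dynamics.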