Dilated Recurrent Neural Networks for Glucose Forecasting in Type 1 Diabetes
Diabetes is a chronic disease affecting 415 million people worldwide. People with type 1 diabetes mellitus (T1DM) need to self-administer insulin to maintain blood glucose (BG) levels in a normal range, which is usually a very challenging task. Developing a reliable glucose forecasting model would have a profound impact on diabetes management, since it could provide predictive glucose alarms or low-glucose insulin suspension for hypoglycemia minimisation. Recently, deep learning has shown great potential in healthcare and medical research for diagnosis, forecasting and decision-making. In this work, we introduce a deep learning model based on a dilated recurrent neural network (DRNN) to provide 30-min forecasts of future glucose levels. Through dilation, the DRNN model gains a much larger receptive field, aiming to capture long-term dependencies. A transfer learning technique is also applied to make use of data from multiple subjects. The proposed approach outperforms existing glucose forecasting algorithms, including autoregressive models (ARX), support vector regression (SVR) and conventional neural networks for predicting glucose (NNPG) (e.g. RMSE on the OhioT1DM dataset: NNPG, 22.9 mg/dL; SVR, 21.7 mg/dL; ARX, 20.1 mg/dL; DRNN, 18.9 mg/dL). The results suggest that dilated connections can efficiently improve glucose forecasting performance.
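The dilation idea described above can be sketched in a few lines: instead of the hidden state at step t depending on step t-1, layer l connects step t to step t-2^l, so the receptive field grows exponentially with depth. A minimal illustrative sketch (the toy update rule and the stacking scheme are assumptions for illustration, not the paper's architecture):

```python
def dilated_recurrence(xs, dilation, f):
    """Run a recurrent update where each hidden state depends on the
    state `dilation` steps back, rather than the immediately previous one."""
    hs = []
    for t, x in enumerate(xs):
        h_prev = hs[t - dilation] if t >= dilation else 0.0
        hs.append(f(h_prev, x))
    return hs

def dilated_rnn(xs, num_layers, f):
    """Stack layers with exponentially increasing dilations (1, 2, 4, ...),
    so the top layer can reach far back into the input sequence."""
    out = xs
    for layer in range(num_layers):
        out = dilated_recurrence(out, 2 ** layer, f)
    return out

# Toy update: leaky accumulation of the input.
hidden = dilated_rnn([1.0] * 8, num_layers=3, f=lambda h, x: 0.5 * h + x)
```

With three layers the last output depends on inputs up to 1 + 2 + 4 = 7 steps back, which is the mechanism the abstract credits for capturing long-term glucose dynamics.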
Mutual Information Decay Curves and Hyper-Parameter Grid Search Design for Recurrent Neural Architectures
We present an approach to design the grid searches for hyper-parameter
optimization for recurrent neural architectures. The basis for this approach is
the use of mutual information to analyze long distance dependencies (LDDs)
within a dataset. We also report a set of experiments that demonstrate how
using this approach, we obtain state-of-the-art results for DilatedRNNs across
a range of benchmark datasets.
Comment: Published at the 27th International Conference on Neural Information Processing, ICONIP 2020, Bangkok, Thailand, November 18-22, 2020. arXiv admin note: text overlap with arXiv:1810.0296
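Mutual information decay curves of the kind used above can be estimated directly from symbol co-occurrence counts at a given distance; plotting the estimate against distance shows how far dependencies reach in a dataset. A plain-Python sketch (the plug-in estimator and names are illustrative, not the paper's exact procedure):

```python
from collections import Counter
from math import log2

def mi_at_distance(seq, d):
    """Plug-in estimate of I(X_t; X_{t+d}) from pair frequencies."""
    pairs = list(zip(seq, seq[d:]))
    n = len(pairs)
    joint = Counter(pairs)                    # joint counts of (x_t, x_{t+d})
    left = Counter(x for x, _ in pairs)       # marginal counts of x_t
    right = Counter(y for _, y in pairs)      # marginal counts of x_{t+d}
    return sum((c / n) * log2(c * n / (left[x] * right[y]))
               for (x, y), c in joint.items())

# Decay curve: MI as a function of distance d; a periodic sequence
# keeps high MI at every distance, signalling strong dependencies.
curve = [mi_at_distance("abababababababab", d) for d in range(1, 5)]
```

The shape of such a curve (how quickly it decays to zero) is the signal used above to decide which dilations and hyper-parameter ranges are worth including in the grid search.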
Basal Glucose Control in Type 1 Diabetes using Deep Reinforcement Learning: An In Silico Validation
People with Type 1 diabetes (T1D) require regular exogenous infusion of
insulin to maintain their blood glucose concentration in a therapeutically
adequate target range. Although the artificial pancreas and continuous glucose
monitoring have been proven to be effective in achieving closed-loop control,
significant challenges still remain due to the high complexity of glucose
dynamics and limitations in the technology. In this work, we propose a novel
deep reinforcement learning model for single-hormone (insulin) and dual-hormone
(insulin and glucagon) delivery. In particular, the delivery strategies are
developed by double Q-learning with dilated recurrent neural networks. For
designing and testing purposes, the FDA-accepted UVA/Padova Type 1 simulator
was employed. First, we performed long-term generalized training to obtain a
population model. Then, this model was personalized with a small data-set of
subject-specific data. In silico results show that the single and dual-hormone
delivery strategies achieve good glucose control when compared to a standard
basal-bolus therapy with low-glucose insulin suspension. Specifically, in the
adult cohort (n=10), percentage time in target range [70, 180] mg/dL improved
from 77.6% to 80.9% with single-hormone control, and improved further with
dual-hormone control. In the adolescent cohort (n=10), percentage time in
target range improved from 55.5% to 65.9% with single-hormone control, and to
78.8% with dual-hormone control. In all scenarios, a significant decrease in
hypoglycemia was observed. These results show that the use of deep
reinforcement learning is a viable approach for closed-loop glucose control in
T1D.
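The core update behind the delivery strategies above is double Q-learning; the paper approximates the value functions with dilated RNNs, but the bias-reduction mechanism is easiest to see in tabular form. A minimal sketch (the states, actions and toy tables are assumptions for illustration):

```python
import random

def double_q_update(Q_A, Q_B, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular double Q-learning step: one table selects the greedy
    action at s_next, the other evaluates it, which reduces the
    maximisation bias of standard Q-learning."""
    if random.random() < 0.5:
        Q_A, Q_B = Q_B, Q_A                       # update each table on ~half the steps
    a_star = max(Q_A[s_next], key=Q_A[s_next].get)  # select with one table
    target = r + gamma * Q_B[s_next][a_star]        # evaluate with the other
    Q_A[s][a] += alpha * (target - Q_A[s][a])
```

In the paper's setting, the tables would be dilated-RNN value networks over glucose histories, the actions would be insulin (and glucagon) doses, and the reward would encode time spent in the target glucose range.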
Using Regular Languages to Explore the Representational Capacity of Recurrent Neural Architectures
The presence of Long Distance Dependencies (LDDs) in sequential data poses
significant challenges for computational models. Various recurrent neural
architectures have been designed to mitigate this issue. In order to test these
state-of-the-art architectures, there is growing need for rich benchmarking
datasets. However, one of the drawbacks of existing datasets is the lack of
experimental control with regards to the presence and/or degree of LDDs. This
lack of control limits the analysis of model performance in relation to the
specific challenge posed by LDDs. One way to address this is to use synthetic
data having the properties of subregular languages. The degree of LDDs within
the generated data can be controlled through the k parameter, length of the
generated strings, and by choosing appropriate forbidden strings. In this
paper, we explore the capacity of different RNN extensions to model LDDs, by
evaluating these models on a sequence of SPk synthesized datasets, where each
subsequent dataset exhibits a greater degree of LDD. Even though SPk languages
are simple, the presence of LDDs has a significant impact on the performance
of recurrent neural architectures, making them prime candidates for
benchmarking tasks.
Comment: International Conference on Artificial Neural Networks (ICANN) 201
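The SPk datasets above are built from Strictly Piecewise languages, where membership reduces to checking that a string avoids a set of forbidden length-k subsequences; the gap between the symbols of a forbidden subsequence is exactly the long-distance dependency a model must track. A small sketch (the SP2 example over {a, b} is an assumption for illustration):

```python
def contains_subsequence(s, sub):
    """True if `sub` occurs in `s` as a (not necessarily contiguous)
    subsequence."""
    it = iter(s)
    return all(ch in it for ch in sub)  # `in` advances the shared iterator

def in_sp_language(s, forbidden):
    """SPk membership: the string must avoid every forbidden subsequence."""
    return not any(contains_subsequence(s, f) for f in forbidden)

# SP2 example: forbid the subsequence "ab".
accepted = in_sp_language("bba", ["ab"])   # no 'a' ever followed by 'b'
rejected = in_sp_language("bab", ["ab"])   # 'a' later followed by 'b'
```

Lengthening the strings while keeping the forbidden subsequences fixed stretches the distance between the dependent symbols, which is how the degree of LDD is controlled in the generated benchmarks.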
Blood glucose prediction for type 1 diabetes using generative adversarial networks
Maintaining blood glucose in a target range is essential for people living with Type 1 diabetes in order to avoid excessive periods of hypoglycemia and hyperglycemia, which can result in severe complications. Accurate blood glucose prediction can reduce this risk and enable early interventions to improve diabetes management. However, due to the complex nature of glucose metabolism and the various lifestyle-related factors which can disrupt it, diabetes management remains challenging. In this work we propose a novel deep learning model to predict future BG levels based on historical continuous glucose monitoring measurements, meal ingestion, and insulin delivery. We adopt a modified generative adversarial network architecture that comprises a generator and a discriminator. The generator computes the BG predictions with a recurrent neural network with gated recurrent units, and the auxiliary discriminator employs a one-dimensional convolutional neural network to distinguish between predicted and real BG values. The two modules are trained in an adversarial process with a combination of losses. The experiments were conducted using the OhioT1DM dataset, which contains the data of six T1D contributors over 40 days. The proposed algorithm achieves an average root mean square error (RMSE) of 18.34 ± 0.17 mg/dL with a mean absolute error (MAE) of 13.37 ± 0.18 mg/dL for the 30-minute prediction horizon (PH), and an average RMSE of 32.31 ± 0.46 mg/dL with a MAE of 24.20 ± 0.42 mg/dL for the 60-minute PH. The results are compared for clinical relevance using the Clarke error grid, which confirms the promising performance of the proposed model.
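The "combination of losses" mentioned above pairs an accuracy term for the generator with an adversarial term from the discriminator. A minimal sketch of one such combined objective (the MSE-plus-non-saturating-adversarial form and the weight `lam` are assumptions; the paper's exact loss may differ):

```python
import math

def generator_loss(pred, target, d_scores, lam=0.5):
    """Combined generator objective: MSE against the real glucose trace
    plus an adversarial term rewarding predictions that the discriminator
    scores as real (each score in d_scores lies in (0, 1])."""
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    # Non-saturating adversarial term: -log D(G(x)), clipped for stability.
    adv = -sum(math.log(max(s, 1e-12)) for s in d_scores) / len(d_scores)
    return mse + lam * adv
```

Perfect predictions that fully fool the discriminator drive both terms to zero; the weight trades off numerical accuracy against producing traces the discriminator finds realistic.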