Forecasting Response to Treatment with Deep Learning and Pharmacokinetic Priors
Forecasting healthcare time series is crucial for early detection of adverse
outcomes and for patient monitoring. Forecasting, however, can be difficult in
practice due to noisy and intermittent data. The challenges are often
exacerbated by change points induced via extrinsic factors, such as the
administration of medication. We propose a novel encoder that informs deep
learning models of the pharmacokinetic effects of drugs to allow for accurate
forecasting of time series affected by treatment. We showcase the effectiveness
of our approach in a task to forecast blood glucose using both realistically
simulated and real-world data. Our pharmacokinetic encoder helps deep learning
models surpass baselines by approximately 11% on simulated data and 8% on
real-world data. The proposed approach can have multiple beneficial
applications in clinical practice, such as issuing early warnings about
unexpected treatment responses, or helping to characterize patient-specific
treatment effects in terms of drug absorption and elimination characteristics.
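The pharmacokinetic effect such an encoder captures can be illustrated with the classic one-compartment oral-absorption (Bateman) curve. The sketch below is illustrative only — `dose`, `ka`, `ke`, and `volume` are hypothetical parameters, not values from the paper — and shows how a treatment event could be turned into a feature channel for a forecaster:

```python
import math

def pk_concentration(t, dose=1.0, ka=1.2, ke=0.3, volume=1.0):
    """One-compartment oral-absorption (Bateman) curve: concentration
    rises as the drug is absorbed (rate ka) and decays as it is
    eliminated (rate ke)."""
    if ka == ke:
        raise ValueError("ka and ke must differ for this closed form")
    scale = dose * ka / (volume * (ka - ke))
    return scale * (math.exp(-ke * t) - math.exp(-ka * t))

def pk_features(times, **params):
    """Encode a treatment event as a feature channel sampled at `times`,
    e.g. to concatenate with a glucose series before a deep forecaster."""
    return [pk_concentration(t, **params) for t in times]
```

With the defaults above, the curve is zero at administration, peaks at t = ln(ka/ke)/(ka − ke) ≈ 1.54, and decays afterwards, mirroring the absorption-then-elimination shape the abstract describes.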
Blood Glucose Level Prediction: A Graph-based Explainable Method with Federated Learning
In the UK, approximately 400,000 people with type 1 diabetes (T1D) rely on
insulin delivery due to insufficient pancreatic insulin production. Managing
blood glucose (BG) levels is crucial, with continuous glucose monitoring (CGM)
playing a key role. CGM, tracking BG every 5 minutes, enables effective blood
glucose level prediction (BGLP) by considering factors like carbohydrate intake
and insulin delivery.
Recent research has focused on developing sequential models for BGLP using
historical BG data, incorporating additional attributes such as carbohydrate
intake, insulin delivery, and time. These methods have shown notable success in
BGLP, with some providing temporal explanations. However, they often lack clear
correlations between attributes and their impact on BGLP. Additionally, some
methods raise privacy concerns by aggregating participant data to learn
population patterns.
Addressing these limitations, we introduced a graph attentive memory (GAM)
model, combining a graph attention network (GAT) with a gated recurrent unit
(GRU). GAT applies graph attention to model attribute correlations, offering
transparent, dynamic attribute relationships. Attention weights dynamically
gauge attribute significance over time. To ensure privacy, we employed
federated learning (FL), facilitating secure population pattern analysis.
Our method was validated using the OhioT1DM'18 and OhioT1DM'20 datasets from
12 participants, focusing on 6 key attributes. We demonstrated our model's
stability and effectiveness through a hyperparameter impact analysis.
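The way attention weights can expose per-attribute importance can be sketched roughly as follows. This toy version scores scalar features; the paper's GAT operates on learned node embeddings with shared linear maps, so this is a simplified illustration, not the GAM model itself:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attribute_attention(values, weights):
    """Single-head attention over attribute nodes (toy, scalar features):
    scores each attribute, then returns a weighted combination plus the
    attention weights, which act as a per-step explanation of how much
    each attribute (BG, carbs, insulin, ...) contributed."""
    scores = [v * w for v, w in zip(values, weights)]
    alphas = softmax(scores)
    context = sum(a * v for a, v in zip(alphas, values))
    return context, alphas
```

The returned `alphas` sum to one, so they can be read directly as a transparent, time-varying attribution over the input attributes.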
Dilated Recurrent Neural Networks for Glucose Forecasting in Type 1 Diabetes
Diabetes is a chronic disease affecting 415 million people worldwide. People with type 1 diabetes mellitus (T1DM) need to self-administer insulin to maintain blood glucose (BG) levels in a normal range, which is usually a very challenging task. Developing a reliable glucose forecasting model would have a profound impact on diabetes management, since it could provide predictive glucose alarms or low-glucose insulin suspension for hypoglycaemia minimisation. Recently, deep learning has shown great potential in healthcare and medical research for diagnosis, forecasting and decision-making. In this work, we introduce a deep learning model based on a dilated recurrent neural network (DRNN) to provide 30-min forecasts of future glucose levels. Through dilation, the DRNN model gains a much larger receptive field, aiming to capture long-term dependencies. A transfer learning technique is also applied to make use of data from multiple subjects. The proposed approach outperforms existing glucose forecasting algorithms, including autoregressive models (ARX), support vector regression (SVR) and conventional neural networks for predicting glucose (NNPG) (e.g. RMSE on the OhioT1DM dataset: NNPG, 22.9 mg/dL; SVR, 21.7 mg/dL; ARX, 20.1 mg/dL; DRNN, 18.9 mg/dL). The results suggest that dilated connections can efficiently improve glucose forecasting performance.
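The core idea of dilation — step t reads the hidden state from t − d rather than t − 1, enlarging the receptive field when layers with growing dilations are stacked — can be sketched in a few lines. The `cell` here is a hypothetical leaky accumulator standing in for the GRU/LSTM cells an actual DRNN would use:

```python
def dilated_rnn_layer(inputs, dilation, cell):
    """Run a recurrence where step t reads the hidden state from
    t - dilation instead of t - 1, so stacked layers with growing
    dilations cover exponentially long histories."""
    hidden = [0.0] * len(inputs)
    for t, x in enumerate(inputs):
        prev = hidden[t - dilation] if t >= dilation else 0.0
        hidden[t] = cell(x, prev)
    return hidden

# Toy cell (stand-in for a GRU/LSTM): a leaky accumulator.
cell = lambda x, h: 0.5 * x + 0.5 * h

layer1 = dilated_rnn_layer([1, 2, 3, 4, 5, 6, 7, 8], dilation=1, cell=cell)
layer2 = dilated_rnn_layer(layer1, dilation=2, cell=cell)
```

Stacking layers with dilations 1, 2, 4, ... lets the top layer see far into the past without the gradient having to flow through every intermediate step.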
Transform Diabetes - Harnessing Transformer-Based Machine Learning and Layered Ensemble with Enhanced Training for Improved Glucose Prediction.
Type 1 diabetes is a common chronic disease characterized by the body’s inability to regulate the blood glucose level, leading to severe health consequences if not properly managed. Accurate blood glucose level predictions can enable better disease management and inform subsequent treatment decisions. However, predicting future blood glucose levels is a complex problem due to the inherent complexity and variability of the human body.
This thesis investigates whether a Transformer model can outperform a state-of-the-art Convolutional Recurrent Neural Network model at forecasting blood glucose levels on the same dataset. The problem is framed, and the data preprocessed, as a multivariate multi-step time series. A unique Layered Ensemble technique that Enhances the Training of the final model is introduced. This technique manages missing data and counters potential issues from other techniques by employing a Long Short-Term Memory model and a Transformer model together. The experimental results show that this novel ensemble technique reduces the root mean squared error by approximately 14.28% when predicting the blood glucose level 30 minutes in the future, compared to the state-of-the-art model. This improvement highlights the potential of this approach to assist diabetes patients with effective disease management.
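The layered idea — one model handling missing data, a second model forecasting from the completed series — can be sketched as a two-stage pipeline. `forward_fill` and the persistence forecaster below are deliberately trivial stand-ins for the thesis's LSTM and Transformer:

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error, the metric the thesis reports."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y_true, y_pred))
                     / len(y_true))

def forward_fill(series):
    """Layer 1 stand-in: fill missing CGM samples (None) with the last
    observed value. The thesis uses an LSTM for this role."""
    out, last = [], 0.0
    for v in series:
        last = last if v is None else v
        out.append(last)
    return out

def layered_ensemble(series, imputer, forecaster):
    """Hypothetical two-layer pipeline: the imputer completes the series,
    then the forecaster (a Transformer in the thesis) predicts from it."""
    completed = imputer(series)
    return forecaster(completed)

# Persistence forecaster: predict the last completed value.
prediction = layered_ensemble([100.0, None, 110.0],
                              forward_fill, lambda s: s[-1])
```

The point of the layering is that the forecaster never sees gaps, so its architecture does not need to handle missingness itself.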
GluGAN: Generating Personalized Glucose Time Series Using Generative Adversarial Networks
Time series data generated by continuous glucose monitoring sensors offer unparalleled opportunities for developing data-driven approaches, especially deep learning-based models, in diabetes management. Although these approaches have achieved state-of-the-art performance in various fields such as glucose prediction in type 1 diabetes (T1D), challenges remain in the acquisition of large-scale individual data for personalized modeling due to the elevated cost of clinical trials and data privacy regulations. In this work, we introduce GluGAN, a framework specifically designed for generating personalized glucose time series based on generative adversarial networks (GANs). Employing recurrent neural network (RNN) modules, the proposed framework uses a combination of unsupervised and supervised training to learn temporal dynamics in latent spaces. To assess the quality of synthetic data, we apply clinical metrics, distance scores, and discriminative and predictive scores computed by post-hoc RNNs in evaluation. Across three clinical datasets with 47 T1D subjects (one publicly available and two proprietary), GluGAN achieved better performance on all the considered metrics when compared with four baseline GAN models. The performance of data augmentation is evaluated by three machine learning-based glucose predictors. Training sets augmented by GluGAN significantly reduced the root mean square error for the predictors over 30- and 60-minute horizons. The results suggest that GluGAN is an effective method for generating high-quality synthetic glucose time series and has the potential to be used for evaluating the effectiveness of automated insulin delivery algorithms and as a digital twin to substitute for pre-clinical trials.
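The discriminative score used in this kind of evaluation can be sketched simply: let a post-hoc classifier try to separate real windows from synthetic ones and measure how far its accuracy sits from chance. The classifier below is a toy threshold rule, not the paper's post-hoc RNN:

```python
def discriminative_score(real, synthetic, classify):
    """Post-hoc discriminative score: a classifier labels each window as
    real (1) or synthetic (0); the score is |accuracy - 0.5|, so a value
    near 0 means the synthetic data is hard to tell apart from real data."""
    labeled = [(w, 1) for w in real] + [(w, 0) for w in synthetic]
    correct = sum(1 for w, y in labeled if classify(w) == y)
    accuracy = correct / len(labeled)
    return abs(accuracy - 0.5)
```

A perfect discriminator yields 0.5 (poor synthetic data); a discriminator stuck at chance yields 0.0 (synthetic data indistinguishable from real), which is the regime a good generator aims for.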