2,823 research outputs found

    Multi-lag stacking for blood glucose level prediction

    This work investigates blood glucose level prediction for type 1 diabetes over two horizons of 30 and 60 minutes. Initially, three conventional regression tools—partial least squares regression (PLSR), multilayer perceptron, and long short-term memory—are deployed to create predictive models. They are trained once on 30 minutes and once on 60 minutes of historical data, resulting in six basic models for each prediction horizon. A collection of these models is then set as base-learners to develop three stacking systems: two uni-lag and one multi-lag. One of the uni-lag systems uses the three basic models trained on 30 minutes of lag data; the other uses those trained on 60 minutes. The multi-lag system, on the other hand, leverages the basic models trained on both lags. All three stacking systems deploy a PLSR as meta-learner. The results show that: i) the stacking systems outperform the basic models; ii) among the stacking systems, the multi-lag system shows the best predictive performance, with a root mean square error of 19.01 mg/dl and 33.37 mg/dl for the prediction horizons of 30 and 60 minutes, respectively.
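
    As a rough illustration of the multi-lag stacking idea, the sketch below trains PLSR and MLP base learners on 30-minute and 60-minute lag windows and feeds their predictions to a PLSR meta-learner. The LSTM base learner is omitted, the data are synthetic, and the 5-minute sampling interval, window sizes, and component counts are assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPRegressor

def make_windows(series, n_lags, horizon):
    """Build (lag window, future value) pairs from a CGM series."""
    X, y = [], []
    for t in range(n_lags, len(series) - horizon):
        X.append(series[t - n_lags:t])
        y.append(series[t + horizon])
    return np.array(X), np.array(y)

# Synthetic CGM trace in mg/dl; 5-minute sampling is assumed, so a 30-minute
# horizon is 6 steps, a 30-minute lag is 6 samples, and a 60-minute lag is 12.
cgm = 120 + np.cumsum(np.random.randn(2000))
horizon = 6
X30, y = make_windows(cgm, 6, horizon)
X60, _ = make_windows(cgm, 12, horizon)
X30, y = X30[-len(X60):], y[-len(X60):]   # align the two lag datasets in time

base = {
    "pls30": PLSRegression(n_components=3).fit(X30, y),
    "mlp30": MLPRegressor(hidden_layer_sizes=(32,), max_iter=500).fit(X30, y),
    "pls60": PLSRegression(n_components=5).fit(X60, y),
    "mlp60": MLPRegressor(hidden_layer_sizes=(32,), max_iter=500).fit(X60, y),
}

# Multi-lag stacking: predictions from both lag windows become meta-features.
# (For brevity the meta-learner is fit in-sample; proper stacking would use
# out-of-fold base predictions.)
meta_X = np.column_stack([
    base["pls30"].predict(X30).ravel(), base["mlp30"].predict(X30),
    base["pls60"].predict(X60).ravel(), base["mlp60"].predict(X60),
])
meta = PLSRegression(n_components=2).fit(meta_X, y)
rmse = np.sqrt(np.mean((meta.predict(meta_X).ravel() - y) ** 2))
print(f"in-sample RMSE: {rmse:.2f} mg/dl")
```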

    Data fusion of activity and CGM for predicting blood glucose levels

    This work suggests two methods—both relying on stacked regression and data fusion of CGM and activity data—to predict the blood glucose level of patients with type 1 diabetes. Method 1 uses histories of CGM data appended with the average of the activity data over the same histories to train three base regressions: a multilayer perceptron, a long short-term memory, and a partial least squares regression. In Method 2, histories of CGM and activity data are used separately to train the same base regressions. In both methods, the predictions from the base regressions are used as features to create a combined model, which is then used to make the final predictions. The results show the effectiveness of both methods, with Method 1 providing slightly better results.
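
    A minimal sketch of the Method 1 style feature construction, assuming evenly sampled CGM and activity signals; the function and variable names are illustrative, not the paper's, and Method 2 would instead keep the CGM and activity histories as separate inputs.

```python
import numpy as np

def fuse_features(cgm, activity, n_lags, horizon):
    """Append the mean activity over each CGM history window to the CGM lags."""
    X, y = [], []
    for t in range(n_lags, len(cgm) - horizon):
        cgm_hist = cgm[t - n_lags:t]                 # CGM history window
        act_mean = activity[t - n_lags:t].mean()     # average activity over the same window
        X.append(np.append(cgm_hist, act_mean))
        y.append(cgm[t + horizon])                   # glucose at the prediction horizon
    return np.array(X), np.array(y)

# Example: 60 minutes of history (12 samples at 5-minute sampling), 30-minute horizon.
cgm = 120 + np.cumsum(np.random.randn(1000))
activity = np.abs(np.random.randn(1000))
X, y = fuse_features(cgm, activity, n_lags=12, horizon=6)
print(X.shape)   # (n_samples, 13): 12 CGM lags + 1 fused activity feature
```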

    Blood Glucose Forecasting using LSTM Variants under the Context of Open Source Artificial Pancreas System

    High accuracy of blood glucose prediction over the long term is essential for preventative diabetes management. Emerging closed-loop insulin delivery systems such as the artificial pancreas system (APS) provide opportunities for improved glycaemic control for patients with type 1 diabetes. Existing blood glucose prediction studies have proven effective only within 30 minutes, and accuracy deteriorates drastically when the prediction horizon increases to 45 or 60 minutes. Deep learning methods, especially long short-term memory (LSTM) and its variants, have recently been applied in various areas to achieve state-of-the-art results on tasks with complex time series data. In this study, we present deep LSTM-based models capable of forecasting long-term blood glucose levels with improved prediction and clinical accuracy. We evaluate our approach using 20 cases (878,000 glucose values) from the Open Source Artificial Pancreas System (OpenAPS). For 30-minute and 45-minute prediction, our Stacked-LSTM achieves the best performance, with root mean square error (RMSE) values of 11.96 and 15.81 and Clarke Error Grid Zone A rates of 0.887 and 0.784. For 60-minute prediction, our ConvLSTM performs best, with RMSE = 19.6 and Clarke Error Grid Zone A = 0.714. Our models outperform existing methods in both prediction and clinical accuracy. This research can support patients with type 1 diabetes in better managing their behavior in a more preventative way and can be used in future real-world APS contexts.
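
    As a hedged sketch of the stacked-LSTM member, the Keras snippet below stacks two LSTM layers over a univariate CGM window; the layer sizes, window length, and training settings are assumptions, not the architecture reported above.

```python
import numpy as np
import tensorflow as tf

n_lags, horizon = 12, 6   # 60 minutes of history, 30-minute horizon at 5-minute sampling

# Two LSTM layers stacked: the first returns its full sequence so the second can consume it.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_lags, 1)),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),   # predicted glucose value at the horizon
])
model.compile(optimizer="adam", loss="mse")

# Synthetic CGM windows of shape (samples, n_lags, 1) mapped to the value `horizon` steps ahead.
cgm = (120 + np.cumsum(np.random.randn(3000))).astype("float32")
X = np.stack([cgm[t - n_lags:t] for t in range(n_lags, len(cgm) - horizon)])[..., None]
y = cgm[n_lags + horizon:]
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
```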

    Glycemic-aware metrics and oversampling techniques for predicting blood glucose levels using machine learning.

    Techniques using machine learning for short-term blood glucose level prediction in patients with Type 1 Diabetes are investigated. This problem is significant for the development of effective artificial pancreas technology, so that accurate alerts (e.g. hypoglycemia alarms) and other forecasts can be generated. It is shown that two factors must be considered when selecting the best machine learning technique for blood glucose level regression: (i) the regression model performance metrics used to select the model, and (ii) the preprocessing techniques required to account for the imbalanced time spent by patients in different portions of the glycemic range. Using standard benchmark data, it is demonstrated that different regression model/preprocessing technique combinations exhibit different accuracies depending on the glycemic subrange under consideration. Technique selection therefore depends on the type of alert required. Specific findings are that a linear Support Vector Regression-based model, trained with normal as well as polynomial features, is best for blood glucose level forecasting in the normal and hyperglycemic ranges, while a Multilayer Perceptron trained on oversampled data is ideal for predictions in the hypoglycemic range.
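
    A brief scikit-learn sketch of the two ingredients highlighted above; the hypoglycemia threshold, oversampling factor, and model hyperparameters are assumptions rather than the paper's choices.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.svm import LinearSVR
from sklearn.neural_network import MLPRegressor

def oversample_hypo(X, y, threshold=70.0, factor=5):
    """Duplicate training windows whose target is hypoglycemic (< threshold mg/dl)."""
    hypo = y < threshold
    X_aug = np.vstack([X] + [X[hypo]] * factor)
    y_aug = np.concatenate([y] + [y[hypo]] * factor)
    return X_aug, y_aug

# Linear SVR on the lag features plus their polynomial expansion
# (suited to the normal and hyperglycemic ranges in the findings above).
svr = make_pipeline(PolynomialFeatures(degree=2), StandardScaler(), LinearSVR(max_iter=5000))

# MLP trained on oversampled data (suited to the hypoglycemic range).
mlp = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500)

# Usage, given lagged CGM windows X and glucose targets y (mg/dl):
#   svr.fit(X, y)
#   mlp.fit(*oversample_hypo(X, y))
```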

    Transform Diabetes - Harnessing Transformer-Based Machine Learning and Layered Ensemble with Enhanced Training for Improved Glucose Prediction.

    Type 1 diabetes is a common chronic disease characterized by the body’s inability to regulate the blood glucose level, leading to severe health consequences if not managed manually. Accurate blood glucose level predictions can enable better disease management and inform subsequent treatment decisions. However, predicting future blood glucose levels is a complex problem due to the inherent complexity and variability of the human body. This thesis investigates using a Transformer model to outperform a state-of-the-art Convolutional Recurrent Neural Network model when forecasting blood glucose levels on the same dataset. The problem is structured, and the data preprocessed, as a multivariate multi-step time series. A unique Layered Ensemble technique that Enhances the Training of the final model is introduced. This technique manages missing data and counters potential issues from other techniques by employing a Long Short-Term Memory model and a Transformer model together. The experimental results show that this novel ensemble technique reduces the root mean squared error by approximately 14.28% when predicting the blood glucose level 30 minutes into the future, compared to the state-of-the-art model. This improvement highlights the potential of this approach to assist patients with diabetes in effective disease management.
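
    The sketch below compresses the ensemble idea into its simplest form: an LSTM and a small Transformer encoder read the same multivariate window and their 30-minute-ahead predictions are averaged. The layer sizes, input features, and fusion rule are assumptions; the thesis's actual layered technique additionally handles missing data, which is not shown here.

```python
import tensorflow as tf

n_lags, n_features = 12, 4   # e.g. CGM, insulin, carbohydrates, activity (assumed inputs)

def build_lstm():
    inp = tf.keras.Input(shape=(n_lags, n_features))
    x = tf.keras.layers.LSTM(64)(inp)
    return tf.keras.Model(inp, tf.keras.layers.Dense(1)(x))

def build_transformer(d_model=32, heads=4):
    inp = tf.keras.Input(shape=(n_lags, n_features))
    x = tf.keras.layers.Dense(d_model)(inp)                              # project to model width
    attn = tf.keras.layers.MultiHeadAttention(num_heads=heads, key_dim=d_model)(x, x)
    x = tf.keras.layers.LayerNormalization()(x + attn)                   # residual + norm
    x = tf.keras.layers.GlobalAveragePooling1D()(x)
    return tf.keras.Model(inp, tf.keras.layers.Dense(1)(x))

# Ensemble: average the two members' predictions for the 30-minute horizon.
lstm, transformer = build_lstm(), build_transformer()
inp = tf.keras.Input(shape=(n_lags, n_features))
outputs = tf.keras.layers.Average()([lstm(inp), transformer(inp)])
ensemble = tf.keras.Model(inp, outputs)
ensemble.compile(optimizer="adam", loss="mse")
```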

    IoMT innovations in diabetes management: Predictive models using wearable data

    Diabetes Mellitus (DM) is a metabolic disorder characterized by consistently elevated blood glucose levels due to inadequate pancreatic insulin production. Type 1 DM (DM1) is the insulin-dependent manifestation from disease onset. Effective DM1 management requires daily blood glucose monitoring, pattern recognition, and cognitive prediction of future glycemic levels to determine the requisite exogenous insulin dosage. Nevertheless, this methodology can be imprecise and perilous. Groundbreaking developments in information and communication technologies (ICT), encompassing Big Data, the Internet of Medical Things (IoMT), Cloud Computing, and Machine Learning (ML) algorithms, have facilitated continuous monitoring of DM1 management. This investigation concentrates on IoMT-based methodologies for such continuous monitoring, thereby enabling comprehensive characterization of diabetic individuals. Integrating machine learning techniques with wearable technology may yield dependable models for forecasting short-term blood glucose concentrations. The objective of this research is to devise precise, person-specific, short-term prediction models utilizing an array of features. To accomplish this, inventive modeling strategies were employed on an extensive dataset of glycaemia-related biological attributes gathered from a large-scale passive monitoring initiative involving 40 DM1 patients. The models produced via the Random Forest approach can predict glucose levels within a 30-minute horizon with an average error of 18.60 mg/dL for six-hour data, and 26.21 mg/dL for a 45-minute prediction horizon. These findings have also been corroborated with data from 10 Type 2 DM patients as a proof of concept, demonstrating the potential of IoMT-based methodologies for continuous DM monitoring and management. Funding for open access charge: Universidad de Málaga / CBUA. Plan Andaluz de Investigación, Desarrollo e Innovación (PAIDI), Junta de Andalucía, Spain. María Campo-Valera is grateful for the postdoctoral program Margarita Salas – Spanish Ministry of Universities (financed by European Union – NextGenerationEU). The authors would like to acknowledge project PID2022-137461NB-C32 financed by MCIN/AEI/10.13039/501100011033/FEDER (“Una manera de hacer Europa”), EU.
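
    A minimal sketch of a Random Forest setup for the 30-minute horizon, with synthetic data standing in for the study's wearable-derived features; the feature count, hyperparameters, and train/test split are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Placeholder features: recent CGM lags plus wearable signals such as heart rate and steps.
X = rng.normal(size=(5000, 12))
y = 120 + 30 * X[:, 0] + rng.normal(scale=10, size=5000)   # synthetic glucose target (mg/dL)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, rf.predict(X_te)) ** 0.5
print(f"test RMSE: {rmse:.2f} mg/dL")
```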

    Disentangling causal webs in the brain using functional Magnetic Resonance Imaging: A review of current approaches

    In the past two decades, functional Magnetic Resonance Imaging (fMRI) has been used to relate neuronal network activity to cognitive processing and behaviour. Recently, this approach has been augmented by algorithms that allow us to infer causal links between component populations of neuronal networks. Multiple inference procedures have been proposed to approach this research question, but so far each method has limitations when it comes to establishing whole-brain connectivity patterns. In this work, we discuss eight ways to infer causality in fMRI research: Bayesian Nets, Dynamical Causal Modelling, Granger Causality, Likelihood Ratios, LiNGAM, Patel's Tau, Structural Equation Modelling, and Transfer Entropy. We conclude by formulating recommendations for future directions in this area.
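
    As an illustration of one of the eight approaches, Granger causality between two time series can be tested in a few lines with statsmodels; the data here are synthetic and the lag order is an arbitrary choice.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
x = rng.normal(size=300)                         # putative cause
y = np.roll(x, 2) + 0.5 * rng.normal(size=300)   # effect: follows x with a two-sample delay

# The test asks whether the second column helps predict the first beyond its own past.
grangercausalitytests(np.column_stack([y, x]), maxlag=4)
```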

    Study on multi-SVM systems and their applications to pattern recognition

    Degree system: new; Report number: Kou 3136; Degree type: Doctor of Engineering; Date conferred: 2010/7/12; Waseda University diploma number: Shin 541