136 research outputs found
The exact C-function in integrable λ-deformed theories
By employing CFT techniques, we show how to compute, in the context of
λ-deformations of current algebras and coset CFTs, the C-function, exact in the
deformation parameters, for a wide class of integrable theories that
interpolate between a UV and an IR point. We explicitly consider RG flows for
integrable deformations of left-right asymmetric current algebras and coset
CFTs. In all cases, the derived exact C-functions obey all the properties
asserted by Zamolodchikov's c-theorem in two dimensions.
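For context, the properties that the derived C-functions are checked against can be stated schematically (the standard form of Zamolodchikov's c-theorem; sign conventions for the RG time vary between references):

```latex
\mu \frac{dC}{d\mu} \;=\; \beta^{i}(g)\,\partial_{i} C(g) \;\geqslant\; 0,
\qquad
\partial_{i} C \big|_{g=g_{*}} = 0,
\qquad
C(g_{*}) = c_{*},
```

so that C decreases monotonically along the flow toward the IR, is stationary at fixed points, and reduces there to the central charge c_* of the corresponding CFT.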
Offline Deep Reinforcement Learning and Off-Policy Evaluation for Personalized Basal Insulin Control in Type 1 Diabetes
Recent advancements in hybrid closed-loop systems, also known as the artificial pancreas (AP), have been shown to optimize glucose control and reduce the self-management burdens for people living with type 1 diabetes (T1D). AP systems can adjust the basal infusion rates of insulin pumps, facilitated by real-time communication with continuous glucose monitoring. Empowered by deep neural networks, deep reinforcement learning (DRL) has introduced new paradigms of basal insulin control algorithms. However, all the existing DRL-based AP controllers require a large number of random online interactions between the agent and environment. While this can be validated in T1D simulators, it becomes impractical in real-world clinical settings. To this end, we propose an offline DRL framework that can develop and validate models for basal insulin control entirely offline. It comprises a DRL model based on the twin delayed deep deterministic policy gradient and behavior cloning, as well as off-policy evaluation (OPE) using fitted Q evaluation. We evaluated the proposed framework on an in silico dataset containing 10 virtual adults and 10 virtual adolescents, generated by the UVA/Padova T1D simulator, and the OhioT1DM dataset, a clinical dataset with 12 real T1D subjects. The performance on the in silico dataset shows that the offline DRL algorithm significantly increased time in range while reducing time below range and time above range for both adult and adolescent groups. The high Spearman's rank correlation coefficients between actual and estimated policy values indicate the accurate estimation made by the OPE. Then, we used the OPE to estimate model performance on the clinical dataset, where a notable increase in policy values was observed for each subject. The results demonstrate that the proposed framework is a viable and safe method for improving personalized basal insulin control in T1D.
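The combination of twin delayed deep deterministic policy gradient and behavior cloning mentioned above can be illustrated with a minimal sketch of the TD3+BC actor objective (numpy; function and variable names are illustrative, following Fujimoto and Gu's published TD3+BC formulation rather than this paper's exact implementation, with `alpha` controlling the behavior-cloning weight):

```python
import numpy as np

def td3_bc_actor_loss(q_values, policy_actions, dataset_actions, alpha=2.5):
    """Actor loss for TD3+BC: maximize the critic's Q-value while staying
    close to the logged actions via a behavior-cloning penalty, which keeps
    the offline policy inside the distribution of the dataset."""
    # Normalize the Q term so the BC penalty has a consistent relative scale
    lam = alpha / (np.abs(q_values).mean() + 1e-8)
    bc_penalty = np.mean((policy_actions - dataset_actions) ** 2)
    return float(-lam * q_values.mean() + bc_penalty)
```

In an AP setting, `dataset_actions` would be the basal insulin rates recorded in the offline dataset and `policy_actions` the rates proposed by the actor network for the same states.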
Basal Glucose Control in Type 1 Diabetes using Deep Reinforcement Learning: An In Silico Validation
People with Type 1 diabetes (T1D) require regular exogenous infusion of
insulin to maintain their blood glucose concentration in a therapeutically
adequate target range. Although the artificial pancreas and continuous glucose
monitoring have been proven to be effective in achieving closed-loop control,
significant challenges still remain due to the high complexity of glucose
dynamics and limitations in the technology. In this work, we propose a novel
deep reinforcement learning model for single-hormone (insulin) and dual-hormone
(insulin and glucagon) delivery. In particular, the delivery strategies are
developed by double Q-learning with dilated recurrent neural networks. For
designing and testing purposes, the FDA-accepted UVA/Padova Type 1 simulator
was employed. First, we performed long-term generalized training to obtain a
population model. Then, this model was personalized with a small data-set of
subject-specific data. In silico results show that the single and dual-hormone
delivery strategies achieve good glucose control when compared to a standard
basal-bolus therapy with low-glucose insulin suspension. Specifically, in the
adult cohort (n=10), percentage time in target range [70, 180] mg/dL improved
from 77.6% to 80.9% with single-hormone control, and to with
dual-hormone control. In the adolescent cohort (n=10), percentage time in
target range improved from 55.5% to 65.9% with single-hormone control, and to
78.8% with dual-hormone control. In all scenarios, a significant decrease in
hypoglycemia was observed. These results show that the use of deep
reinforcement learning is a viable approach for closed-loop glucose control in
T1D.
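The double Q-learning update at the core of the delivery strategies can be sketched as follows (numpy; the dilated recurrent Q-networks are abstracted away as precomputed Q-value arrays, and all names are illustrative rather than taken from the paper's code):

```python
import numpy as np

def double_q_target(rewards, next_q_online, next_q_target, dones, gamma=0.99):
    """Double Q-learning target: the online network *selects* the next
    action, while the target network *evaluates* it, which reduces the
    overestimation bias of vanilla Q-learning."""
    best_actions = np.argmax(next_q_online, axis=1)                   # selection
    evaluated = next_q_target[np.arange(len(rewards)), best_actions]  # evaluation
    return rewards + gamma * (1.0 - dones) * evaluated
```

Here each discrete action would map to an insulin (or, in the dual-hormone case, glucagon) dose adjustment, and the reward would encode time spent in the glucose target range.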
GluGAN: Generating Personalized Glucose Time Series Using Generative Adversarial Networks
Time series data generated by continuous glucose monitoring sensors offer unparalleled opportunities for developing data-driven approaches, especially deep learning-based models, in diabetes management. Although these approaches have achieved state-of-the-art performance in various fields such as glucose prediction in type 1 diabetes (T1D), challenges remain in the acquisition of large-scale individual data for personalized modeling due to the elevated cost of clinical trials and data privacy regulations. In this work, we introduce GluGAN, a framework specifically designed for generating personalized glucose time series based on generative adversarial networks (GANs). Employing recurrent neural network (RNN) modules, the proposed framework uses a combination of unsupervised and supervised training to learn temporal dynamics in latent spaces. Aiming to assess the quality of synthetic data, we apply clinical metrics, distance scores, and discriminative and predictive scores computed by post-hoc RNNs in evaluation. Across three clinical datasets with 47 T1D subjects (including one publicly available and two proprietary datasets), GluGAN achieved better performance for all the considered metrics when compared with four baseline GAN models. The performance of data augmentation is evaluated by three machine learning-based glucose predictors. Using the training sets augmented by GluGAN significantly reduced the root mean square error for the predictors over 30- and 60-minute horizons. The results suggest that GluGAN is an effective method in generating high-quality synthetic glucose time series and has the potential to be used for evaluating the effectiveness of automated insulin delivery algorithms and as a digital twin to substitute for pre-clinical trials.
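The abstract does not spell out GluGAN's training losses; as a hedged illustration, the "combination of unsupervised and supervised training" can be sketched in a TimeGAN-like form, where the generator is trained both adversarially and on one-step-ahead prediction in the latent space (all names and the weighting `eta` are assumptions, not the paper's definitions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def generator_loss(d_fake_logits, pred_latents, true_latents, eta=10.0):
    """Combined generator objective: an unsupervised adversarial term
    (fool the discriminator on synthetic sequences) plus a supervised
    term (match the next-step latent dynamics of real CGM traces)."""
    adv = -np.mean(np.log(sigmoid(d_fake_logits) + 1e-8))  # unsupervised
    sup = np.mean((pred_latents - true_latents) ** 2)      # supervised
    return float(adv + eta * sup)
```

The supervised term is what lets the generator capture short-horizon glucose dynamics rather than only the marginal distribution of CGM values.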
GluNet: A Deep Learning Framework For Accurate Glucose Forecasting
For people with Type 1 diabetes (T1D), forecasting of blood glucose (BG) can be used to effectively avoid hyperglycemia, hypoglycemia and associated complications. The latest continuous glucose monitoring (CGM) technology allows people to observe glucose in real-time. However, an accurate glucose forecast remains a challenge. In this work, we introduce GluNet, a framework that leverages a personalized deep neural network to predict the probabilistic distribution of short-term (30-60 minutes) future CGM measurements for subjects with T1D based on their historical data including glucose measurements, meal information, insulin doses, and other factors. It adopts the latest deep learning techniques consisting of four components: data pre-processing, label transform/recover, a multi-layer dilated convolutional neural network (CNN), and post-processing. The method is evaluated in silico for both adult and adolescent subjects. The results show significant improvements over existing methods in the literature through a comprehensive comparison in terms of root mean square error (RMSE) (8.88 ± 0.77 mg/dL) with short time lag (0.83 ± 0.40 minutes) for prediction horizons (PH) = 30 mins (minutes), and RMSE (19.90 ± 3.17 mg/dL) with time lag (16.43 ± 4.07 mins) for PH = 60 mins for virtual adult subjects. In addition, GluNet is also tested on two clinical data sets. Results show that it achieves an RMSE (19.28 ± 2.76 mg/dL) with time lag (8.03 ± 4.07 mins) for PH = 30 mins and an RMSE (31.83 ± 3.49 mg/dL) with time lag (17.78 ± 8.00 mins) for PH = 60 mins. These are the best reported results for glucose forecasting when compared with other methods including the neural network for predicting glucose (NNPG), the support vector regression (SVR), the latent variable with exogenous input (LVX), and the autoregression with exogenous input (ARX) algorithm.
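The two headline metrics above, RMSE and time lag, can be reproduced with short helper functions (numpy sketch; the paper's exact time-lag definition is not given in the abstract, so the cross-correlation variant below is an assumption, with CGM samples taken every 5 minutes):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error, in mg/dL for CGM traces."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def time_lag(y_true, y_pred, sample_minutes=5, max_shift=12):
    """Effective prediction delay: the shift (in minutes) that maximizes
    the correlation between the forecast and the reference trace."""
    n = len(y_true)
    corrs = [np.corrcoef(y_true[: n - s], y_pred[s:])[0, 1]
             for s in range(max_shift)]
    return int(np.argmax(corrs)) * sample_minutes
```

A forecast that merely echoes the most recent CGM reading can score a deceptively low RMSE, which is why the time-lag metric is reported alongside it.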
An electroplated Ag/AgCl quasi-reference electrode based on CMOS top-metal for electrochemical sensing
The integration and mass-production of reference electrodes for CMOS-based electrochemical sensing systems
pose a challenge for the accessibility and commercial-viability of such devices. In this paper, a method of
electroplating an Ag/AgCl quasi-reference electrode using CMOS top-metal as a base is presented for the first
time. The aluminium bond-pad of a CMOS microchip was zincated, an electroless nickel immersion gold layer
applied, and a thick silver layer electroplated and chemically chlorinated. The resulting reference electrode was
able to provide a stable potential with a drift rate of 0.3 mV/h for up to 18 h. This validates the approach of a
fully electroplated bond-pad reference electrode, which offers simplified post-processing and greater scalability
of production. Further work towards an entirely electroless process is envisaged.