
    The exact C-function in integrable λ-deformed theories

    By employing CFT techniques, we show how to compute, in the context of λ-deformations of current algebras and coset CFTs, the C-function exact in the deformation parameters for a wide class of integrable theories that interpolate between a UV and an IR point. We explicitly consider RG flows for integrable deformations of left-right asymmetric current algebras and coset CFTs. In all cases, the derived exact C-functions obey all the properties asserted by Zamolodchikov's c-theorem in two dimensions.
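    For orientation, the properties referred to are those of Zamolodchikov's c-theorem; a schematic statement is given below, with conventions assumed here rather than taken from the paper.

```latex
% Schematic statement of Zamolodchikov's c-theorem (conventions assumed, not the paper's):
% there exists a function C(\lambda) of the couplings \lambda^i such that
\frac{dC}{d\log\mu} \;\ge\; 0,                       % C decreases monotonically flowing towards the IR
\qquad
\partial_i C\big|_{\lambda_*} = 0 \;\Longleftrightarrow\; \beta^i(\lambda_*) = 0,   % stationary exactly at fixed points
\qquad
C(\lambda_*) = c_* ,                                 % where it equals the central charge of the CFT
% so that a flow connecting two fixed points obeys
c_{\mathrm{UV}} \;\ge\; C(\lambda) \;\ge\; c_{\mathrm{IR}} .
```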

    Offline Deep Reinforcement Learning and Off-Policy Evaluation for Personalized Basal Insulin Control in Type 1 Diabetes

    Recent advancements in hybrid closed-loop systems, also known as the artificial pancreas (AP), have been shown to optimize glucose control and reduce the self-management burden for people living with type 1 diabetes (T1D). AP systems can adjust the basal infusion rates of insulin pumps, facilitated by real-time communication with continuous glucose monitoring. Empowered by deep neural networks, deep reinforcement learning (DRL) has introduced new paradigms of basal insulin control algorithms. However, all existing DRL-based AP controllers require a large number of random online interactions between the agent and the environment. While this is feasible in T1D simulators, it becomes impractical in real-world clinical settings. To this end, we propose an offline DRL framework that can develop and validate models for basal insulin control entirely offline. It comprises a DRL model based on the twin delayed deep deterministic policy gradient and behavior cloning, as well as off-policy evaluation (OPE) using fitted Q evaluation. We evaluated the proposed framework on an in silico dataset containing 10 virtual adults and 10 virtual adolescents, generated by the UVA/Padova T1D simulator, and on the OhioT1DM dataset, a clinical dataset with 12 real T1D subjects. On the in silico dataset, the offline DRL algorithm significantly increased time in range while reducing time below range and time above range for both the adult and adolescent groups. The high Spearman's rank correlation coefficients between actual and estimated policy values indicate that the OPE estimates are accurate. We then used the OPE to estimate model performance on the clinical dataset, where a notable increase in policy values was observed for each subject. The results demonstrate that the proposed framework is a viable and safe method for improving personalized basal insulin control in T1D.
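    As a rough illustration of the model described above, the sketch below shows a TD3+BC-style actor update (deterministic policy gradient regularized by behavior cloning) in PyTorch. The critic training (twin Q-networks, delayed updates) and the fitted Q evaluation stage are omitted, and all network sizes, dimensions, and the weight alpha are illustrative assumptions rather than the paper's settings.

```python
# Minimal sketch of a TD3+BC-style actor update: maximize the critic's value of the
# policy action while staying close to the logged (behavior) action. Illustrative only;
# critic training and OPE are omitted, and all sizes/hyperparameters are assumed.
import torch
import torch.nn as nn

state_dim, action_dim, alpha = 16, 1, 2.5        # assumed dimensions and BC weight

actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, action_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                       nn.Linear(64, 1))
opt = torch.optim.Adam(actor.parameters(), lr=3e-4)

# One offline batch: states and logged basal actions from the dataset (random placeholders here).
s = torch.randn(256, state_dim)
a_logged = torch.rand(256, action_dim) * 2 - 1

pi = actor(s)
q = critic(torch.cat([s, pi], dim=1))
lam = alpha / q.abs().mean().detach()                         # scale the RL term by the Q magnitude
actor_loss = -lam * q.mean() + ((pi - a_logged) ** 2).mean()  # RL objective + behavior-cloning penalty

opt.zero_grad()
actor_loss.backward()
opt.step()
```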

    Basal Glucose Control in Type 1 Diabetes using Deep Reinforcement Learning: An In Silico Validation

    People with Type 1 diabetes (T1D) require regular exogenous infusion of insulin to maintain their blood glucose concentration in a therapeutically adequate target range. Although the artificial pancreas and continuous glucose monitoring have been proven to be effective in achieving closed-loop control, significant challenges remain due to the high complexity of glucose dynamics and limitations in the technology. In this work, we propose a novel deep reinforcement learning model for single-hormone (insulin) and dual-hormone (insulin and glucagon) delivery. In particular, the delivery strategies are developed by double Q-learning with dilated recurrent neural networks. For designing and testing purposes, the FDA-accepted UVA/Padova Type 1 simulator was employed. First, we performed long-term generalized training to obtain a population model. Then, this model was personalized with a small dataset of subject-specific data. In silico results show that the single- and dual-hormone delivery strategies achieve good glucose control when compared to a standard basal-bolus therapy with low-glucose insulin suspension. Specifically, in the adult cohort (n=10), the percentage time in the target range [70, 180] mg/dL improved from 77.6% to 80.9% with single-hormone control, and to 85.6% with dual-hormone control. In the adolescent cohort (n=10), the percentage time in target range improved from 55.5% to 65.9% with single-hormone control, and to 78.8% with dual-hormone control. In all scenarios, a significant decrease in hypoglycemia was observed. These results show that the use of deep reinforcement learning is a viable approach for closed-loop glucose control in T1D.
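    As a rough illustration of the double Q-learning update mentioned above, the sketch below computes a double-DQN target for a discrete set of dosing actions: the online network selects the next action and the target network evaluates it. The paper's dilated recurrent backbone is replaced by a plain MLP stand-in, and all dimensions are illustrative assumptions.

```python
# Minimal sketch of a double Q-learning (double-DQN) target for discrete dosing actions.
# The dilated-RNN backbone of the paper is replaced by an MLP stand-in; sizes are assumed.
import torch
import torch.nn as nn

n_features, n_actions, gamma = 12, 5, 0.99       # assumed feature size, action count, discount

q_online = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, n_actions))
q_target = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, n_actions))
q_target.load_state_dict(q_online.state_dict())  # target net starts as a copy, updated slowly

# One batch of transitions (random placeholders for state, action, reward, next state, done).
s = torch.randn(32, n_features)
a = torch.randint(0, n_actions, (32,))
r, s_next, done = torch.randn(32), torch.randn(32, n_features), torch.zeros(32)

with torch.no_grad():
    a_star = q_online(s_next).argmax(dim=1)                               # online net selects
    q_next = q_target(s_next).gather(1, a_star.unsqueeze(1)).squeeze(1)   # target net evaluates
    target = r + gamma * (1 - done) * q_next

q_sa = q_online(s).gather(1, a.unsqueeze(1)).squeeze(1)
loss = nn.functional.smooth_l1_loss(q_sa, target)   # TD loss minimized by the online network
```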

    GluGAN: Generating Personalized Glucose Time Series Using Generative Adversarial Networks

    Time series data generated by continuous glucose monitoring sensors offer unparalleled opportunities for developing data-driven approaches, especially deep learning-based models, in diabetes management. Although these approaches have achieved state-of-the-art performance in various fields such as glucose prediction in type 1 diabetes (T1D), challenges remain in the acquisition of large-scale individual data for personalized modeling due to the elevated cost of clinical trials and data privacy regulations. In this work, we introduce GluGAN, a framework specifically designed for generating personalized glucose time series based on generative adversarial networks (GANs). Employing recurrent neural network (RNN) modules, the proposed framework uses a combination of unsupervised and supervised training to learn temporal dynamics in latent spaces. To assess the quality of the synthetic data, we evaluate it with clinical metrics, distance scores, and discriminative and predictive scores computed by post-hoc RNNs. Across three clinical datasets with 47 T1D subjects (one publicly available and two proprietary), GluGAN achieved better performance than four baseline GAN models on all the considered metrics. The performance of data augmentation is evaluated with three machine learning-based glucose predictors: training sets augmented by GluGAN significantly reduced the root mean square error for the predictors over 30- and 60-minute prediction horizons. The results suggest that GluGAN is an effective method for generating high-quality synthetic glucose time series and has the potential to be used for evaluating the effectiveness of automated insulin delivery algorithms and as a digital twin to substitute for pre-clinical trials.
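    A minimal sketch of a recurrent GAN of this kind is shown below: a GRU-based generator maps noise sequences to glucose traces and a GRU-based discriminator scores them, with adversarial losses plus a simple supervised term standing in for the latent-space supervised loss described above. Module sizes, noise dimension, and the exact form of the supervised term are illustrative assumptions.

```python
# Minimal sketch of a recurrent GAN for glucose time series. The supervised next-step
# term below is a crude stand-in for the latent-space supervised loss the abstract
# describes; all sizes and the loss weighting are illustrative assumptions.
import torch
import torch.nn as nn

seq_len, noise_dim, hidden, batch = 48, 8, 32, 16   # e.g. 48 CGM samples per sequence (assumed)

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(noise_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)              # one glucose value per time step
    def forward(self, z):
        h, _ = self.rnn(z)
        return self.out(h)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)              # real/fake logit per sequence
    def forward(self, x):
        h, _ = self.rnn(x)
        return self.out(h[:, -1])

G, D, bce = Generator(), Discriminator(), nn.BCEWithLogitsLoss()

real = torch.randn(batch, seq_len, 1)                # placeholder for real (normalized) CGM sequences
fake = G(torch.randn(batch, seq_len, noise_dim))

d_loss = bce(D(real), torch.ones(batch, 1)) + bce(D(fake.detach()), torch.zeros(batch, 1))
g_adv = bce(D(fake), torch.ones(batch, 1))                    # fool the discriminator
g_sup = nn.functional.mse_loss(fake[:, :-1], real[:, 1:])     # simplified supervised (next-step) term
g_loss = g_adv + g_sup
```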

    GluNet: A Deep Learning Framework For Accurate Glucose Forecasting

    For people with Type 1 diabetes (T1D), forecasting of blood glucose (BG) can be used to effectively avoid hyperglycemia, hypoglycemia and associated complications. The latest continuous glucose monitoring (CGM) technology allows people to observe glucose in real time. However, accurate glucose forecasting remains a challenge. In this work, we introduce GluNet, a framework that leverages a personalized deep neural network to predict the probabilistic distribution of short-term (30-60 minutes) future CGM measurements for subjects with T1D based on their historical data, including glucose measurements, meal information, insulin doses, and other factors. It adopts the latest deep learning techniques and consists of four components: data pre-processing, label transform/recover, multiple layers of dilated convolutional neural networks (CNN), and post-processing. The method is evaluated in silico for both adult and adolescent subjects. The results show significant improvements over existing methods in the literature through a comprehensive comparison: a root mean square error (RMSE) of 8.88 ± 0.77 mg/dL with a short time lag of 0.83 ± 0.40 minutes for a prediction horizon (PH) of 30 minutes, and an RMSE of 19.90 ± 3.17 mg/dL with a time lag of 16.43 ± 4.07 minutes for PH = 60 minutes for virtual adult subjects. In addition, GluNet is also tested on two clinical datasets. Results show that it achieves an RMSE of 19.28 ± 2.76 mg/dL with a time lag of 8.03 ± 4.07 minutes for PH = 30 minutes and an RMSE of 31.83 ± 3.49 mg/dL with a time lag of 17.78 ± 8.00 minutes for PH = 60 minutes. These are the best reported results for glucose forecasting when compared with other methods, including the neural network for predicting glucose (NNPG), support vector regression (SVR), the latent variable with exogenous input (LVX) model, and the autoregression with exogenous input (ARX) algorithm.
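    The dilated convolutional component can be sketched as a stack of causal 1-D convolutions with exponentially increasing dilation, so the receptive field covers several hours of CGM history. The channel counts, dilation schedule, input features, and discretized output head below are illustrative assumptions rather than GluNet's exact configuration.

```python
# Minimal sketch of a dilated causal 1-D CNN forecaster. Channels, dilations, the
# assumed input features (glucose, carbs, insulin, time of day) and the 256-bin
# output head are illustrative, not the paper's exact design.
import torch
import torch.nn as nn

class CausalConv1d(nn.Conv1d):
    """1-D convolution padded on the left only, so outputs never see future samples."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation):
        super().__init__(in_ch, out_ch, kernel_size, dilation=dilation)
        self.left_pad = (kernel_size - 1) * dilation
    def forward(self, x):
        return super().forward(nn.functional.pad(x, (self.left_pad, 0)))

class DilatedForecaster(nn.Module):
    def __init__(self, in_ch=4, hidden=32, n_bins=256):
        super().__init__()
        layers, ch = [], in_ch
        for d in [1, 2, 4, 8, 16]:                    # exponentially growing dilations
            layers += [CausalConv1d(ch, hidden, kernel_size=2, dilation=d), nn.ReLU()]
            ch = hidden
        self.net = nn.Sequential(*layers)
        self.head = nn.Linear(hidden, n_bins)         # distribution over discretized future glucose
    def forward(self, x):                             # x: (batch, channels, time)
        h = self.net(x)[:, :, -1]                     # features at the most recent time step
        return self.head(h)                           # logits over glucose bins at the horizon

model = DilatedForecaster()
logits = model(torch.randn(8, 4, 180))                # 180 past samples per subject (assumed)
```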

    Deep domain adaptation enhances Amplification Curve Analysis for single-channel multiplexing in real-time PCR

    Data-driven approaches for molecular diagnostics are emerging as an alternative to perform accurate and inexpensive multi-pathogen detection. A novel technique called Amplification Curve Analysis (ACA) has recently been developed by coupling machine learning and real-time Polymerase Chain Reaction (qPCR) to enable the simultaneous detection of multiple targets in a single reaction well. However, target classification relying purely on the amplification curve shapes currently faces several challenges, such as distribution discrepancies between different data sources of synthetic DNA and clinical samples (i.e., training vs. testing). Optimisation of the computational models is required to achieve higher ACA classification performance in multiplex qPCR by reducing those discrepancies. Here, we propose a novel transformer-based conditional domain adversarial network (T-CDAN) to eliminate data distribution differences between the source domain (synthetic DNA data) and the target domain (clinical isolate data). Labelled training data from the source domain and unlabelled testing data from the target domain are fed into the T-CDAN, which learns both domains' information simultaneously. After mapping the inputs into a domain-irrelevant space, T-CDAN removes the feature distribution differences and provides a clearer decision boundary for the classifier, resulting in more accurate pathogen identification. Evaluation on 198 clinical isolates containing three types of carbapenem resistance genes (blaNDM, blaIMP and blaOXA-48) shows a curve-level accuracy of 93.1% and a sample-level accuracy of 97.0% using T-CDAN, an improvement of 20.9% and 4.9% respectively compared with previous methods. This research emphasises the importance of deep domain adaptation for high-level multiplexing in a single qPCR reaction, providing a solid approach to extend qPCR instruments' capabilities without hardware modification in real-world clinical applications.
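    The domain-adversarial idea can be sketched with a gradient-reversal layer feeding a domain classifier: the label classifier is trained on labelled source (synthetic DNA) curves, while reversed gradients from the domain head push the feature extractor toward domain-invariant features. The transformer backbone and the conditional (class-aware) coupling of T-CDAN are simplified away here, and all shapes and networks are illustrative assumptions.

```python
# Minimal sketch of domain-adversarial training with a gradient-reversal layer (GRL).
# The transformer feature extractor and the conditional coupling of T-CDAN are replaced
# by a small MLP; curve length, batch size, and class count are assumptions (3 classes
# mirroring the three resistance genes in the abstract).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None               # reverse gradients flowing into the extractor

feat = nn.Sequential(nn.Linear(80, 64), nn.ReLU())  # assumed 80-point amplification curves
label_clf = nn.Linear(64, 3)                        # target-gene classifier (source labels only)
domain_clf = nn.Linear(64, 2)                       # source (synthetic) vs target (clinical)

x_src, y_src = torch.randn(32, 80), torch.randint(0, 3, (32,))   # labelled synthetic curves
x_tgt = torch.randn(32, 80)                                      # unlabelled clinical curves

f_src, f_tgt = feat(x_src), feat(x_tgt)
cls_loss = nn.functional.cross_entropy(label_clf(f_src), y_src)

f_all = torch.cat([f_src, f_tgt])
d_labels = torch.cat([torch.zeros(32, dtype=torch.long), torch.ones(32, dtype=torch.long)])
dom_loss = nn.functional.cross_entropy(domain_clf(GradReverse.apply(f_all, 1.0)), d_labels)

loss = cls_loss + dom_loss    # one objective: classify targets while confusing the domain head
```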