
    Calibration of continuous glucose monitoring sensors by time-varying models and Bayesian estimation

    Minimally invasive continuous glucose monitoring (CGM) sensors are wearable medical devices that provide frequent (e.g., 1-5 min sampling rate) real-time measurements of glucose concentration for several consecutive days, which can be of great help in the daily management of diabetes. Most CGM systems commercially available today have a wire-based electrochemical sensor, usually placed in the subcutaneous tissue, which measures a "raw" electrical current signal via a glucose-oxidase electrochemical reaction. The sensor reports the raw electrical signal on a fine, uniformly spaced time grid, and these electrical samples are converted in real time to interstitial glucose (IG) concentration levels through a calibration process that fits the conversion model to a few blood glucose (BG) concentration measurements sparsely collected by the patient through fingerprick. To perform this calibration, CGM sensor manufacturers usually employ linear models that approximate, albeit over limited time intervals, the nonlinear relationship between electrical signal and glucose concentration. On the one hand, frequent calibrations (e.g., two per day) are therefore required to guarantee good sensor accuracy; on the other, each calibration adds an uncomfortable extra action to the many already needed in the routine of diabetes management. The aim of this thesis is to develop new calibration algorithms for minimally invasive CGM sensors able to ensure good sensor accuracy with the minimum number of calibrations. In particular, we propose i) to replace the time-invariant gain and offset conventionally used in linear calibration models with more sophisticated time-varying functions, valid over multiple-day periods, whose unknown parameters have an a priori statistical description available from independent training sets; and ii) to numerically estimate the calibration model parameters by means of a Bayesian estimation procedure that exploits the a priori information on the model parameters in addition to some BG samples sparsely collected by the patient. The thesis is organized in 6 chapters. In Chapter 1, after a background introduction on CGM sensor technologies, the calibration problem is illustrated. Then, some state-of-the-art calibration techniques are briefly discussed together with their open problems, which lead to the aims of the thesis illustrated at the end of the chapter. In Chapter 2, the datasets used for the implementation of the calibration techniques are described, together with the performance metrics and the statistical analysis tools employed to assess the quality of the results. In Chapter 3, we illustrate a recently proposed calibration algorithm (Vettoretti et al., IEEE Trans Biomed Eng 2016), which represents the starting point of the study proposed in this thesis. In particular, we demonstrate that, thanks to the development of a time-varying, day-specific Bayesian prior, the algorithm can reduce the calibration frequency from two to one per day. However, the linear calibration model used by the algorithm has a domain of validity limited to certain time intervals, which prevents reducing calibrations below one per day and calls for a new calibration model valid over multiple-day periods, such as the one developed in the remainder of this thesis.
In Chapter 4, a novel Bayesian calibration algorithm working in a multi-day framework (referred to as the Bayesian multi-day, BMD, calibration algorithm) is presented. It is based on a multiple-day model of sensor time-variability with second-order statistical priors on its unknown parameters. In each patient-sensor realization, the numerical values of the calibration model parameters are determined by a Bayesian estimation procedure exploiting the BG samples sparsely collected by the patient. In addition, the distortion introduced by the BG-to-IG kinetics is compensated for during parameter identification via nonparametric deconvolution. The BMD calibration algorithm is applied to two datasets acquired with the "present-generation" Dexcom (Dexcom Inc., San Diego, CA) G4 Platinum (DG4P) CGM sensor and a "next-generation" Dexcom CGM sensor prototype (NGD). In the DG4P dataset, results show that, despite the reduction of calibration frequency (on average from 2 per day to 0.25 per day), the BMD calibration algorithm significantly improves sensor accuracy compared to the manufacturer's calibration algorithm. In the NGD dataset, performance is even better than that of the present-generation sensor, allowing calibrations to be reduced further toward zero. In Chapter 5, we analyze the potential margins for improvement of the BMD calibration algorithm and propose a further extension of the method. In particular, to cope with inter-sensor and inter-subject variability, we propose a multi-model approach and a Bayesian model selection framework (referred to as the multi-model Bayesian framework, MMBF) in which the most likely calibration model is chosen among a finite set of candidates. A preliminary assessment of the MMBF is conducted on synthetic data generated by a well-established type 1 diabetes simulation model. Results show a statistically significant accuracy improvement compared to the use of a single calibration model. Finally, the major findings of the work carried out in this thesis, possible applications and margins for improvement are summarized in Chapter 6.
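    A minimal sketch of the Bayesian estimation idea may help fix ideas: the code below computes a maximum a posteriori (MAP) estimate of a hypothetical time-varying linear calibration model, raw(t) = s(t)·IG(t) + q(t) with linearly drifting gain s(t) and offset q(t), from a few fingerprick BG samples and a Gaussian prior on the parameters. The model form, prior values, noise level and data are all illustrative assumptions rather than the thesis's model, and the compensation of the BG-to-IG kinetics via deconvolution is omitted.

# MAP estimation of a hypothetical time-varying linear calibration model:
# raw(t) = s(t) * IG(t) + q(t), with s(t) = s0 + s1*t and q(t) = q0 + q1*t.
# Priors, noise level and data below are illustrative assumptions only.
import numpy as np
from scipy.optimize import minimize

theta_prior = np.array([0.4, -1e-3, 5.0, 0.0])   # prior mean of [s0, s1, q0, q1]
P_prior = np.diag([1e-2, 1e-6, 4.0, 1e-4])       # prior covariance (assumed)
sigma_bg = 9.0                                   # assumed BG noise SD (mg/dL)

t_bg = np.array([12.0, 36.0, 60.0, 84.0])        # calibration times (h)
raw_bg = np.array([49.0, 42.5, 60.0, 52.0])      # raw current at those times (nA)
bg = np.array([110.0, 95.0, 140.0, 120.0])       # fingerprick BG (mg/dL)

P_inv = np.linalg.inv(P_prior)

def neg_log_posterior(theta):
    s0, s1, q0, q1 = theta
    glucose_pred = (raw_bg - (q0 + q1 * t_bg)) / (s0 + s1 * t_bg)
    fit = np.sum((bg - glucose_pred) ** 2) / (2 * sigma_bg ** 2)  # likelihood term
    d = theta - theta_prior
    return fit + 0.5 * d @ P_inv @ d                              # Gaussian prior term

theta_map = minimize(neg_log_posterior, theta_prior, method="Nelder-Mead").x
print("MAP estimate of [s0, s1, q0, q1]:", theta_map)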

    Development of a measurement error model of new factory-calibrated continuous glucose monitoring sensors used in type 1 diabetes therapy

    After an overview of type 1 diabetes (T1D) therapy, the techniques used to measure glucose concentration and a description of the Dexcom G6 sensor are presented. Then, the issue of CGM sensor inaccuracy and the principal sources of error are discussed. A complete description of the available data and of the data pre-processing is provided. Two-step and single-step identification methods are proposed, the model parameters are identified, and the results are analyzed.
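    As a rough illustration of what a "two-step" identification can look like (an assumption for illustration, not the thesis pipeline and not the Dexcom G6 error model), the sketch below first regresses simulated CGM readings on a reference glucose profile with a gain/offset calibration-error model, and then fits the residual sensor noise as a first-order autoregressive process.

# Two-step identification of a toy CGM measurement error model (hypothetical).
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 24, 5 / 60)                     # 5-min grid over 24 h
ref = 120 + 40 * np.sin(2 * np.pi * t / 6)       # surrogate reference glucose (mg/dL)
true_a, true_b = 1.05, -8.0                      # assumed gain/offset error
noise = np.zeros_like(t)
for k in range(1, len(t)):                       # AR(1) sensor noise
    noise[k] = 0.9 * noise[k - 1] + rng.normal(0, 2.0)
cgm = true_a * ref + true_b + noise

# Step 1: least-squares fit of the gain/offset calibration-error model.
X = np.column_stack([ref, np.ones_like(ref)])
a_hat, b_hat = np.linalg.lstsq(X, cgm, rcond=None)[0]

# Step 2: AR(1) fit of the residual noise (lag-1 regression, no intercept).
res = cgm - (a_hat * ref + b_hat)
phi_hat = np.dot(res[1:], res[:-1]) / np.dot(res[:-1], res[:-1])
sigma_hat = np.std(res[1:] - phi_hat * res[:-1])
print(a_hat, b_hat, phi_hat, sigma_hat)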

    GluGAN: Generating Personalized Glucose Time Series Using Generative Adversarial Networks

    Time series data generated by continuous glucose monitoring sensors offer unparalleled opportunities for developing data-driven approaches, especially deep learning-based models, in diabetes management. Although these approaches have achieved state-of-the-art performance in various fields such as glucose prediction in type 1 diabetes (T1D), challenges remain in the acquisition of large-scale individual data for personalized modeling due to the high cost of clinical trials and data privacy regulations. In this work, we introduce GluGAN, a framework specifically designed for generating personalized glucose time series based on generative adversarial networks (GANs). Employing recurrent neural network (RNN) modules, the proposed framework uses a combination of unsupervised and supervised training to learn temporal dynamics in latent spaces. To assess the quality of the synthetic data, we apply clinical metrics, distance scores, and discriminative and predictive scores computed by post-hoc RNNs. Across three clinical datasets with 47 T1D subjects (one publicly available and two proprietary), GluGAN achieved better performance than four baseline GAN models on all the considered metrics. The performance of data augmentation is evaluated with three machine learning-based glucose predictors. Using training sets augmented by GluGAN significantly reduced the root mean square error of the predictors over 30- and 60-minute horizons. The results suggest that GluGAN is an effective method for generating high-quality synthetic glucose time series and has the potential to be used for evaluating the effectiveness of automated insulin delivery algorithms and as a digital twin to substitute for pre-clinical trials.
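    To make the idea concrete, the following is a minimal sketch of an RNN-based GAN for glucose traces in PyTorch; the layer sizes, sequence length, placeholder data and single adversarial update are illustrative assumptions, and the supervised latent-dynamics losses and personalisation steps that GluGAN adds are not reproduced here.

# Minimal RNN-based GAN sketch for glucose time series (assumed architecture).
import torch
import torch.nn as nn

SEQ_LEN, NOISE_DIM, HIDDEN = 72, 8, 32      # e.g., 72 x 5-min samples = 6 h

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(NOISE_DIM, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, 1)
    def forward(self, z):                   # z: (batch, seq, NOISE_DIM)
        h, _ = self.rnn(z)
        return self.out(h)                  # (batch, seq, 1) synthetic glucose

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(1, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, 1)
    def forward(self, x):                   # x: (batch, seq, 1)
        h, _ = self.rnn(x)
        return self.out(h[:, -1])           # realness logit from last hidden state

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(16, SEQ_LEN, 1) * 20 + 120    # placeholder "real" CGM batch (mg/dL)

# One adversarial update: discriminator step, then generator step.
z = torch.randn(16, SEQ_LEN, NOISE_DIM)
fake = G(z)
d_loss = bce(D(real), torch.ones(16, 1)) + bce(D(fake.detach()), torch.zeros(16, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()
g_loss = bce(D(fake), torch.ones(16, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()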

    Personalised antimicrobial management in secondary care

    Background: The growing threat of antimicrobial resistance (AMR) requires innovative methods to promote the sustainable effectiveness of antimicrobial agents. Hypothesis: This thesis aimed to explore the hypothesis that personalised decision support interventions can enhance antimicrobial management across secondary care. Methods: Different research methods were used to investigate this hypothesis. Individual physician decision making was mapped and patient experiences of engagement with decision making were explored using semi-structured interviews. Cross-specialty engagement with antimicrobial management was investigated through cross-sectional analysis of conference abstracts and educational training curricula. Artificial intelligence tools were developed to explore their ability to predict the likelihood of infection and provide individualised prescribing recommendations using routine patient data. Dynamic, individualised dose optimisation was explored through: (i) development of a microneedle-based electrochemical biosensor for minimally invasive monitoring of beta-lactams; and (ii) pharmacokinetic (PK)-pharmacodynamic (PD) modelling of a new PK-PD index using C-reactive protein (CRP) to predict the pharmacodynamics of vancomycin. Ethics approval was granted for all aspects of the work explored within this thesis. Results: Mapping of individual physician decision making during infection management demonstrated several areas where personalised, technological interventions could enhance antimicrobial management. At specialty level, non-infection specialties have little engagement with antimicrobial management. The importance of engaging surgical specialties, which have relatively high rates of antimicrobial usage and healthcare-associated infections, was observed. An individualised information leaflet, co-designed with patients to provide personalised infection information to in-patients receiving antibiotics, significantly improved knowledge and reported engagement with decision making. Artificial intelligence was able to enhance the prediction of infection and the prescribing of antimicrobials using routinely available clinical data. Real-time, continuous penicillin monitoring was demonstrated using a microneedle-based electrochemical sensor in vivo. A new PK-PD index, using C-reactive protein, was able to predict individual patient response to vancomycin therapy at 96-120 hours of treatment. Conclusion: Through co-design and the application of specific technologies it is possible to provide personalised antimicrobial management within secondary care.
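    Since the dose-optimisation work builds on standard pharmacokinetic components, a minimal sketch of such a component may help orient the reader: the code below simulates a one-compartment, intermittent-infusion vancomycin model with assumed population parameters and computes the conventional AUC24/MIC exposure index. The CRP-based PK-PD index developed in the thesis is a different, response-driven quantity and is not reproduced here; all parameter values are hypothetical.

# One-compartment intermittent-infusion PK sketch (hypothetical parameters).
import numpy as np

CL, V = 4.0, 50.0                       # clearance (L/h) and volume (L), assumed
dose, tau, t_inf = 1000.0, 12.0, 1.0    # 1 g every 12 h, infused over 1 h
k = CL / V
t = np.arange(0, 24.0, 0.01)
conc = np.zeros_like(t)
for start in np.arange(0, 24.0, tau):   # superpose the contribution of each infusion
    rate = dose / t_inf / V             # infusion rate scaled by volume (mg/L/h)
    dt = t - start
    during = (dt >= 0) & (dt < t_inf)
    after = dt >= t_inf
    conc[during] += rate / k * (1 - np.exp(-k * dt[during]))
    conc[after] += rate / k * (1 - np.exp(-k * t_inf)) * np.exp(-k * (dt[after] - t_inf))

auc24 = np.trapz(conc, t)               # exposure over the first 24 h (mg*h/L)
mic = 1.0                               # assumed MIC (mg/L)
print(f"AUC24/MIC = {auc24 / mic:.0f}") # commonly cited target range: 400-600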

    Deep learning methods for improving diabetes management tools

    Diabetes is a chronic disease characterised by a lack of regulation of blood glucose concentration in the body, and thus elevated blood glucose levels. Even with exogenous insulin treatment, affected individuals can experience extreme variations in their blood glucose levels. These variations are associated with debilitating short- and long-term complications that affect quality of life and, in the worst instance, can result in death. The development of technologies such as glucose meters and, more recently, continuous glucose monitors has offered the opportunity to develop systems that improve clinical outcomes for individuals with diabetes through better glucose control. Data-driven methods can enable the development of the next generation of diabetes management tools focused on i) informativeness, ii) safety, and iii) easing the burden of management. This thesis aims to propose deep learning methods for improving the functionality of the variety of diabetes technology tools available for self-management. In pursuit of these goals, a number of deep learning methods are developed to improve the functionality of existing diabetes technology tools, generally classified as i) self-monitoring of blood glucose, ii) decision support systems, and iii) the artificial pancreas. These frameworks are primarily based on the prediction of glucose concentration levels. The first deep learning framework we propose is geared towards improving the artificial pancreas and decision support systems that rely on continuous glucose monitors. We first propose a convolutional recurrent neural network (CRNN) to forecast glucose concentration levels over both short-term and long-term horizons. The predictive accuracy of this model outperforms that of traditional data-driven approaches. The feasibility of this approach for ambulatory use is then demonstrated with the implementation of a decision support system on a smartphone application. We further extend CRNNs to the multitask setting to explore the effectiveness of leveraging population data for developing personalised models with limited individual data. We show that this enables earlier deployment of applications without significantly compromising performance and safety. The next challenge focuses on easing the burden of management by proposing a deep learning framework for automatic meal detection and estimation. The framework presented employs multitask learning and quantile regression to safely detect and estimate the size of unannounced meals with high precision. We also demonstrate that this facilitates automated insulin delivery for the artificial pancreas system, improving glycaemic control without significantly increasing the risk or incidence of hypoglycaemia. Finally, the focus shifts to improving self-monitoring of blood glucose (SMBG) with glucose meters. We propose an uncertainty-aware model based on a joint Gaussian process and deep learning framework to provide end users with more dynamic and continuous information, similar to continuous glucose sensors. Consequently, we show a significant improvement in hyperglycaemia detection compared to standard SMBG. We hope that through these methods we can achieve a more equitable improvement in usability and clinical outcomes for individuals with diabetes.
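    As an illustration of the glucose-forecasting building block, the sketch below implements a small convolutional recurrent network in PyTorch that maps the past two hours of 5-minute CGM samples to a 30-minute-ahead glucose estimate; the layer sizes, window length and placeholder data are assumptions for illustration, not the architecture developed in the thesis.

# Convolutional recurrent network sketch for short-term glucose forecasting.
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.rnn = nn.LSTM(16, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                # x: (batch, 24, 1), past 2 h of 5-min CGM
        z = self.conv(x.transpose(1, 2)).transpose(1, 2)   # (batch, 24, 16)
        h, _ = self.rnn(z)
        return self.head(h[:, -1])       # 30-min-ahead glucose estimate

model = CRNN()
x = torch.randn(32, 24, 1) * 20 + 120    # placeholder CGM windows (mg/dL)
y = torch.randn(32, 1) * 20 + 120        # placeholder 30-min-ahead targets
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.mse_loss(model(x), y)   # one illustrative training step
opt.zero_grad(); loss.backward(); opt.step()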

    Mobile Health Technologies

    Mobile health technologies, also known as mHealth technologies, have emerged amongst healthcare providers as the technologies of choice for the 21st century, delivering not only transformative change in healthcare delivery but also critical health information to different communities of practice in integrated healthcare information systems. mHealth technologies provide seamless platforms and pragmatic tools for managing pertinent health information across the continuum of healthcare providers. They commonly utilize mobile medical devices, monitoring and wireless devices, and/or telemedicine in healthcare delivery and health research. Today, mHealth technologies provide opportunities to record and monitor the conditions of patients with chronic diseases such as asthma, chronic obstructive pulmonary disease (COPD) and diabetes mellitus. The intent of this book is to enlighten readers about the theories and applications of mHealth technologies in the healthcare domain.

    Next-generation, personalised, model-based critical care medicine: a state-of-the-art review of in silico virtual patient models, methods, and cohorts, and how to validate them

    Critical care, like many healthcare areas, is under a dual assault from significantly increasing demographic and economic pressures. Intensive care unit (ICU) patients are highly variable in their response to treatment, and increasingly aging populations mean ICUs face growing demand and increasingly ill cohorts. Equally, patient expectations are growing, while the economic ability to deliver care to all is declining. Better, more productive care is thus the big challenge. One means to that end is personalised care designed to manage the significant inter- and intra-patient variability that makes the ICU patient difficult to treat. Thus, moving from current "one size fits all" protocolised care to adaptive, model-based "one method fits all" personalised care could deliver the required step change in the quality, and simultaneously the productivity and cost, of care. Computer models of human physiology are a unique tool to personalise care, as they can couple clinical data with mathematical methods to create subject-specific models and virtual patients to design new, personalised and more optimal protocols, as well as to guide care in real time. They rely on identifying time-varying patient-specific parameters in the model that capture inter- and intra-patient variability, the differences between patients and the evolution of patient condition. Properly validated, virtual patients represent real patients, and can be used in silico to test different protocols or interventions, or in real time to guide care. Hence, the underlying models and methods create the foundation for next-generation care, as well as a tool for safely and rapidly developing personalised treatment protocols over large virtual cohorts using virtual trials. This review examines the models and methods used to create virtual patients. Specifically, it presents the model types and structures used and the data they require. It then covers how to validate the resulting virtual patients and trials, and how these virtual trials can help design and optimise clinical trials. Links between these models and higher-order, more complex physiome models are also discussed. In each section, it explores the progress reported to date, especially on core ICU therapies in glycemic, circulatory and mechanical ventilation management, where high cost and frequency of occurrence provide a significant opportunity for model-based methods to have measurable clinical and economic impact. The outcomes are readily generalised to other areas of medical care.
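    The parameter-identification step described above can be illustrated with a deliberately simplified stand-in (hypothetical parameters, not the clinically used ICING model): the sketch below identifies an hourly, patient-specific insulin sensitivity SI in a one-compartment glucose model from start-of-hour and end-of-hour BG measurements, given assumed insulin and nutrition inputs; repeated hour by hour, this kind of fit is the core operation in building a virtual patient for in silico protocol testing.

# Simplified virtual-patient identification sketch (hypothetical model and data).
import numpy as np
from scipy.optimize import minimize_scalar

pG, VG, dt = 0.01, 15.0, 1 / 60          # assumed clearance (1/h), volume (L), step (h)

def simulate(G0, SI, insulin, nutrition):
    """Euler-integrate dG/dt = -pG*G - SI*G*I + P/VG and return end-of-hour G."""
    G = G0
    for I, P in zip(insulin, nutrition):
        G += dt * (-pG * G - SI * G * I + P / VG)
    return G

def identify_SI(G_start, G_end, insulin, nutrition):
    """Choose SI so the simulated end-of-hour glucose matches the measurement."""
    loss = lambda SI: (simulate(G_start, SI, insulin, nutrition) - G_end) ** 2
    return minimize_scalar(loss, bounds=(1e-5, 1e-2), method="bounded").x

# Synthetic check: recover a known SI from one hour of (assumed) inputs.
insulin = np.full(60, 20.0)              # plasma insulin (mU/L), assumed known
nutrition = np.full(60, 40.0)            # enteral glucose appearance (mmol/h), assumed
G_end = simulate(8.0, 3e-4, insulin, nutrition)
print(identify_SI(8.0, G_end, insulin, nutrition))   # approximately 3e-4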