
    Continuous glucose monitoring sensors: Past, present and future algorithmic challenges

    Continuous glucose monitoring (CGM) sensors are portable devices that measure and visualize the glucose concentration in near real time, almost continuously, for several days, and provide hypo-/hyperglycemic alerts and glucose trend information. CGM sensors have revolutionized Type 1 diabetes (T1D) management, improving glucose control when used adjunctively to self-monitoring blood glucose systems. Furthermore, CGM devices have stimulated the development of applications that were impossible to create without a continuous-time glucose signal, e.g., real-time predictive alerts of hypo-/hyperglycemic episodes based on the prediction of future glucose concentration, automatic basal insulin attenuation methods for hypoglycemia prevention, and the artificial pancreas. However, CGM sensors' lack of accuracy and reliability has limited their usability in clinical practice, calling upon the academic community to develop suitable signal processing methods to improve CGM performance. The aim of this paper is to review the past and present algorithmic challenges of CGM sensors, to show how they have been tackled by our research group, and to identify possible future ones.
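The real-time predictive alerts described above can be sketched with a first-order (linear) extrapolation of the recent CGM trend, one of the simplest approaches in this literature. This is an illustrative sketch only, not the review authors' actual algorithms; the 5-minute sampling, 6-sample window, and 70 mg/dl threshold are assumptions.

```python
import numpy as np

def predict_glucose(cgm, horizon_min=30, sample_min=5, window=6):
    """Linearly extrapolate the last `window` CGM samples (mg/dl),
    assumed to be taken every `sample_min` minutes, to `horizon_min`
    minutes ahead of the latest reading."""
    recent = np.asarray(cgm[-window:], dtype=float)
    t = np.arange(len(recent)) * sample_min
    slope, intercept = np.polyfit(t, recent, 1)  # least-squares trend line
    return slope * (t[-1] + horizon_min) + intercept

def hypo_alert(cgm, threshold=70.0, horizon_min=30):
    """Predictive alert: True if glucose is projected below `threshold`."""
    return bool(predict_glucose(cgm, horizon_min) < threshold)

readings = [180, 170, 158, 149, 138, 128]  # falling trend, 5-min sampling
print(predict_glucose(readings))           # projected below 70 mg/dl
print(hypo_alert(readings))
```

Real CGM algorithms add filtering of sensor noise and more sophisticated models, but the trend-extrapolation idea is the common starting point.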

    Study of Short-Term Personalized Glucose Predictive Models on Type-1 Diabetic Children

    Research in diabetes, especially when it comes to building data-driven models to forecast future glucose values, is hindered by the sensitive nature of the data. Because researchers do not share the same data between studies, progress is hard to assess. This paper aims at comparing the most promising algorithms in the field, namely Feedforward Neural Networks (FFNN), Long Short-Term Memory (LSTM) Recurrent Neural Networks, Extreme Learning Machines (ELM), Support Vector Regression (SVR) and Gaussian Processes (GP). They are personalized and trained on a population of 10 virtual children from the Type 1 Diabetes Metabolic Simulator software to predict future glucose values at a prediction horizon of 30 minutes. The performance of the models is evaluated using the Root Mean Squared Error (RMSE) and the Continuous Glucose-Error Grid Analysis (CG-EGA). While most of the models end up with a low RMSE, the GP model with a Dot-Product kernel (GP-DP), a novel usage in the context of glucose prediction, has the lowest. Despite having good RMSE values, we show that the models do not necessarily exhibit good clinical acceptability, as measured by the CG-EGA. Only the LSTM, SVR and GP-DP models have overall acceptable results, each of them performing best in one of the glycemia regions.
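The GP-DP setup (a Gaussian Process with a dot-product kernel forecasting glucose 30 minutes ahead from a short window of recent samples) can be sketched with scikit-learn on synthetic data. This is a stand-in, not the paper's personalized models: the sinusoidal toy series, 6-sample window, and added `WhiteKernel` noise term are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel

rng = np.random.default_rng(0)
# Toy CGM-like series: slow oscillation around 140 mg/dl plus sensor noise.
series = 140 + 30 * np.sin(np.arange(300) / 20) + rng.normal(0, 2, 300)

# Inputs: last 6 samples (30 min at 5-min sampling); target: 30 min ahead.
window, horizon = 6, 6
X = np.array([series[i:i + window] for i in range(len(series) - window - horizon)])
y = series[window + horizon:]

gp = GaussianProcessRegressor(kernel=DotProduct() + WhiteKernel(),
                              normalize_y=True)
gp.fit(X[:200], y[:200])
pred, std = gp.predict(X[200:], return_std=True)  # std gives uncertainty too
rmse = np.sqrt(np.mean((pred - y[200:]) ** 2))
print(f"RMSE: {rmse:.1f} mg/dl")
```

The dot-product kernel makes the GP behave like Bayesian linear regression over the lagged samples, which also yields predictive uncertainty alongside the point forecast.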

    Multi-modal Predictive Models of Diabetes Progression

    With the increasing availability of wearable devices, continuous monitoring of individuals' physiological and behavioral patterns has become significantly more accessible. Access to these continuous patterns of individuals' statuses offers an unprecedented opportunity for studying complex diseases and health conditions such as type 2 diabetes (T2D). T2D is a common chronic disease whose roots and progression patterns are not fully understood. Predicting the progression of T2D can inform timely and more effective interventions to prevent or manage the disease. In this study, we have used a dataset of 63 patients with T2D that includes data from two different types of wearable devices worn by the patients: continuous glucose monitoring (CGM) devices and activity trackers (ActiGraphs). Using this dataset, we created a model for predicting the levels of four major biomarkers related to T2D after a one-year period. We developed a wide and deep neural network and used the data from the demographic information, lab tests, and wearable sensors to create the model. The deep part of our method was developed based on the long short-term memory (LSTM) structure to process the time-series dataset collected by the wearables. In predicting the patterns of the four biomarkers, we obtained a root mean square error of 1.67% for HbA1c, 6.22 mg/dl for HDL cholesterol, 10.46 mg/dl for LDL cholesterol, and 18.38 mg/dl for triglycerides. Compared to existing models for studying T2D, our model offers a more comprehensive tool for combining a large variety of factors that contribute to the disease.
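The wide-and-deep structure described above can be illustrated as a minimal, untrained forward pass in NumPy: an LSTM summarizes the CGM time series (deep part), its final hidden state is concatenated with tabular demographics/lab features (wide part), and a linear head maps the joint vector to a biomarker estimate. All sizes and feature names are hypothetical; the paper's actual architecture is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_last_hidden(x, Wx, Wh, b, hidden):
    """Run a single-layer LSTM over x (T, d) and return the final hidden state.
    Gates are stacked in the weight matrices in the order i, f, o, g."""
    h, c = np.zeros(hidden), np.zeros(hidden)
    for t in range(x.shape[0]):
        z = Wx @ x[t] + Wh @ h + b
        i, f, o = (1 / (1 + np.exp(-z[k * hidden:(k + 1) * hidden]))
                   for k in range(3))                 # input/forget/output gates
        g = np.tanh(z[3 * hidden:])                   # candidate cell state
        c = f * c + i * g
        h = o * np.tanh(c)
    return h

hidden, d_cgm, d_wide = 8, 1, 5
cgm = rng.normal(size=(288, d_cgm))    # one day of normalized 5-min CGM samples
wide = rng.normal(size=d_wide)         # e.g. age, BMI, baseline lab values

Wx = rng.normal(0, 0.1, (4 * hidden, d_cgm))
Wh = rng.normal(0, 0.1, (4 * hidden, hidden))
b = np.zeros(4 * hidden)
head = rng.normal(0, 0.1, hidden + d_wide)  # joint linear head -> biomarker

features = np.concatenate([lstm_last_hidden(cgm, Wx, Wh, b, hidden), wide])
biomarker_pred = head @ features
```

In practice the weights would be trained end-to-end against the four biomarker targets; the sketch only shows how the two branches meet.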

    Vital Signs Monitoring and Interpretation for Critically Ill Patients


    Machine Learning for Diabetes and Mortality Risk Prediction From Electronic Health Records

    Data science can provide invaluable tools to better exploit healthcare data to improve patient outcomes and increase cost-effectiveness. Today, electronic health records (EHR) systems provide a fascinating array of data that data science applications can use to revolutionise the healthcare industry. Utilising EHR data to improve the early diagnosis of a variety of medical conditions/events is a rapidly developing area that, if successful, can help to improve healthcare services across the board. Specifically, as Type-2 Diabetes Mellitus (T2DM) represents one of the most serious threats to health across the globe, analysing the huge volumes of data provided by EHR systems to investigate approaches for the early and accurate prediction of the onset of T2DM, and of medical events such as in-hospital mortality, are two of the most important challenges data science currently faces. The present thesis addresses these challenges by examining the research gaps in the existing literature, pinpointing the uninvestigated areas, and proposing novel machine learning modelling given the difficulties inherent in EHR data. To achieve these aims, the present thesis firstly introduces a unique and large EHR dataset collected from Saudi Arabia. We then investigate the use of state-of-the-art machine learning predictive models that exploit this dataset for diabetes diagnosis and the early identification of patients with pre-diabetes by predicting the blood levels of one of the main indicators of diabetes and pre-diabetes: elevated Glycated Haemoglobin (HbA1c) levels. A novel collaborative denoising autoencoder (Col-DAE) framework is adopted to predict diabetic (high) HbA1c levels. We also employ several machine learning approaches (random forest, logistic regression, support vector machine, and multilayer perceptron) for the identification of patients with pre-diabetes (elevated HbA1c levels). 
The models employed demonstrate that a patient's risk of diabetes/pre-diabetes can be reliably predicted from EHR records. We then extend this work with a pioneering adoption of recent explainable methods to investigate the outcomes of the predictive models employed. This work also investigates the effect of using longitudinal data, and more of the features available in EHR systems, on the performance and feature ranking of the employed machine learning models for predicting elevated HbA1c levels in non-diabetic patients. This work demonstrates that longitudinal data and available EHR features can improve the performance of the machine learning models and can affect the relative order of importance of the features. Secondly, we develop a machine learning model for the early and accurate prediction of all in-hospital mortality events for such patients utilising EHR data. This work investigates a novel application of the Stacked Denoising Autoencoder (SDA) to predict in-hospital patient mortality risk. In doing so, we demonstrate how our approach uniquely overcomes the issues associated with imbalanced datasets to which existing solutions are subject. The proposed model, using clinical patient data on a variety of health conditions and without intensive feature engineering, is demonstrated to achieve robust and promising results using EHR patient data recorded during the first 24 hours after admission.
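The denoising-autoencoder idea underlying both the Col-DAE and SDA approaches can be sketched with scikit-learn's generic `MLPRegressor` standing in for the thesis' actual architectures: the network is trained to reconstruct clean records from corrupted inputs, and its bottleneck activations then serve as learned features for a downstream classifier. The data, layer sizes, and noise level below are synthetic assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))        # stand-in for standardized EHR features

# Denoising objective: map corrupted records back to the clean originals.
X_noisy = X + rng.normal(0, 0.3, X.shape)
dae = MLPRegressor(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
dae.fit(X_noisy, X)

# Bottleneck activations (ReLU encoder layer) become features for a downstream
# risk classifier; "stacking" repeats this corruption/encoding layer by layer.
codes = np.maximum(0, X @ dae.coefs_[0] + dae.intercepts_[0])
```

Forcing reconstruction through a narrow, noise-robust code is what lets such models extract stable structure from noisy, high-dimensional EHR data.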

    Challenges in biomedical data science: data-driven solutions to clinical questions

    Data are influencing every aspect of our lives, from our work activities, to our spare time and even to our health. In this regard, medical diagnoses and treatments are often supported by quantitative measures and observations, such as laboratory tests, medical imaging or genetic analysis. In medicine, as well as in several other scientific domains, the amount of data involved in each decision-making process has become overwhelming. The complexity of the phenomena under investigation and the scale of modern data collections have long surpassed the potential of human analysis and insight.

    Ensemble methods for meningitis aetiology diagnosis

    In this work, we explore data-driven techniques for the fast and early diagnosis of the etiological origin of meningitis, more specifically with regard to differentiating between viral and bacterial meningitis. We study how machine learning can be used to predict meningitis aetiology once a patient has been diagnosed with this disease. We have a dataset of 26,228 patients described by 19 attributes, mainly about the patient's observable symptoms and the early results of the cerebrospinal fluid analysis. Using this dataset, we have explored several techniques of dataset sampling, feature selection and classification models based both on ensemble methods and on simple techniques (mainly, decision trees). Experiments with 27 classification models (19 of them involving ensemble methods) have been conducted for this paper. Our main finding is that the combination of ensemble methods with decision trees leads to the best meningitis aetiology classifiers. The best performance indicator values (precision, recall and f-measure of 89% and an AUC value of 95%) have been achieved by the synergy between bagging and NBTrees. Nonetheless, our results also suggest that the combination of ensemble methods with certain decision trees clearly improves the performance of diagnosis in comparison with that obtained with only the corresponding decision tree.

    This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. We would like to thank the Health Department of the Brazilian Government for providing the dataset and for authorizing its use in this study. We would also like to express our gratitude to the reviewers for their thoughtful comments and efforts towards improving our manuscript. Funding for open access charge: Universidad de Málaga / CBUA
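The bagging-plus-decision-tree combination can be sketched with scikit-learn on synthetic data. Note that `BaggingClassifier`'s default base estimator is a plain decision tree, which stands in here for NBTree (a naive-Bayes/decision-tree hybrid with no scikit-learn implementation); the dataset below is synthetic, with 19 features mirroring the abstract's attribute count.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import (precision_score, recall_score,
                             f1_score, roc_auc_score)

# Synthetic binary problem (e.g. viral vs. bacterial) with 19 attributes.
X, y = make_classification(n_samples=2000, n_features=19,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

# Bagging: each of the 50 trees is fit on a bootstrap resample; predictions
# are averaged, which reduces the variance of individual trees.
clf = BaggingClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]
pred = clf.predict(X_te)
print(f"precision={precision_score(y_te, pred):.2f} "
      f"recall={recall_score(y_te, pred):.2f} "
      f"f1={f1_score(y_te, pred):.2f} "
      f"auc={roc_auc_score(y_te, proba):.2f}")
```

The same four metrics reported in the abstract (precision, recall, f-measure, AUC) are computed here, though the numbers on synthetic data have no clinical meaning.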

    Applications of the Internet of Medical Things to Type 1 Diabetes Mellitus

    Type 1 Diabetes Mellitus (DM1) is a condition of the metabolism typified by persistent hyperglycemia as a result of insufficient pancreatic insulin synthesis. This requires patients to be aware of their blood glucose level oscillations every day to deduce a pattern and anticipate future glycemia, and hence, decide the amount of insulin that must be exogenously injected to maintain glycemia within the target range. This approach often suffers from a relatively high imprecision, which can be dangerous. Nevertheless, current developments in Information and Communication Technologies (ICT) and innovative sensors for biological signals that might enable a continuous, complete assessment of the patient’s health provide a fresh viewpoint on treating DM1. With this, we observe that current biomonitoring devices and Continuous Glucose Monitoring (CGM) units can easily obtain data that allow us to know at all times the state of glycemia and other variables that influence its oscillations. A complete review has been made of the variables that influence glycemia in a T1DM patient and that can be measured by the above means. The communications systems necessary to transfer the information collected to a more powerful computational environment, which can adequately handle the amounts of data collected, have also been described. From this point, intelligent data analysis extracts knowledge from the data and allows predictions to be made in order to anticipate risk situations. With all of the above, it is necessary to build a holistic proposal that allows the complete and smart management of T1DM. This approach evaluates a potential shortage of such suggestions and the obstacles that future intelligent IoMT-DM1 management systems must surmount. 
Lastly, we provide an outline of a comprehensive IoMT-based proposal for DM1 management that aims to address the limits of prior studies while also using the disruptive technologies highlighted before.

    Partial funding for open access charge: Universidad de Málaga