3,156 research outputs found

    Utility of big data in predicting short-term blood glucose levels in type 1 diabetes mellitus through machine learning techniques

    Machine learning techniques combined with wearable electronics can deliver accurate short-term blood glucose level prediction models. These models can learn personalized glucose–insulin dynamics based on the sensor data collected by monitoring several aspects of the physiological condition and daily activity of an individual. Until now, the prevalent approach for developing data-driven prediction models was to collect as much data as possible to help physicians and patients optimally adjust therapy. The objective of this work was to investigate the minimum data variety, volume, and velocity required to create accurate person-centric short-term prediction models. We developed a series of these models using different machine learning time series forecasting techniques suitable for execution within a wearable processor. We conducted an extensive passive patient monitoring study in real-world conditions to build an appropriate data set. The study involved a subset of type 1 diabetic subjects wearing a flash glucose monitoring system. We comparatively and quantitatively evaluated the performance of the developed data-driven prediction models and the corresponding machine learning techniques. Our results indicate that very accurate short-term prediction can be achieved by only monitoring interstitial glucose data over a very short time period and using a low sampling frequency. The models developed can predict glucose levels within a 15-min horizon with an average error as low as 15.43 mg/dL using only 24 historic values collected within a period of six hours, and by increasing the sampling frequency to include 72 values, the average error is reduced to 10.15 mg/dL. 
Our prediction models are suitable for execution within a wearable device, imposing minimal hardware requirements while simultaneously achieving very high prediction accuracy. The authors would like to thank the Endocrinology Department of the Morales Meseguer and Virgen de la Arrixaca hospitals of the city of Murcia (Spain). This work was sponsored by the Spanish Ministry of Economy and Competitiveness through the PERSEIDES (ref. TIN2017-86885-R) and CHIST-ERA (ref. PCIN-2016-010) projects; by MINECO grant BES-2015-071956; and by the European Commission through the H2020-ENTROPY-649849 EU Project.
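The sliding-window setup described above (a fixed number of historic interstitial glucose values predicting a reading one horizon ahead) can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the CGM trace is synthetic and the linear model stands in for their time series forecasting techniques.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
# Synthetic CGM trace (mg/dL) on a 15-min grid; a stand-in for real
# flash-glucose-monitor data, which is not reproduced here.
t = np.arange(600)
glucose = 120 + 30 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 5, t.size)

def make_windows(series, n_lags=24, horizon=1):
    """Pair n_lags past samples with the value `horizon` steps ahead."""
    X, y = [], []
    for i in range(len(series) - n_lags - horizon + 1):
        X.append(series[i:i + n_lags])
        y.append(series[i + n_lags + horizon - 1])
    return np.array(X), np.array(y)

# 24 historic values -> one step (15 min) ahead, echoing the setup above.
X, y = make_windows(glucose, n_lags=24, horizon=1)
split = int(0.8 * len(X))
model = LinearRegression().fit(X[:split], y[:split])
mae = mean_absolute_error(y[split:], model.predict(X[split:]))
print(f"15-min-ahead MAE: {mae:.2f} mg/dL")
```

The same windowing generalizes to longer histories (e.g. 72 values) or longer horizons by changing `n_lags` and `horizon`.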

    Diabetes and artificial intelligence (AI) beyond the closed loop: A review of the landscape, promise and challenges for AI-supported management and self-care for all diabetes types.

    The discourse amongst diabetes specialists and academics regarding technology and artificial intelligence (AI) typically centres around the 10% of people with diabetes who have type 1 diabetes, focusing on glucose sensors, insulin pumps and, increasingly, closed-loop systems. This focus is reflected in conference topics, strategy documents, technology appraisals and funding streams. What is often overlooked is the wider application of data and AI, as demonstrated through published literature and emerging marketplace products, that offers promising avenues for enhanced clinical care, health-service efficiency and cost-effectiveness. This review provides an overview of AI techniques and explores the use and potential of AI and data-driven systems in a broad context, covering all diabetes types, encompassing: (1) patient education and self-management; (2) clinical decision support systems and predictive analytics, including diagnostic support, treatment and screening advice, complications prediction; and (3) the use of multimodal data, such as imaging or genetic data. The review provides a perspective on how data- and AI-driven systems could transform diabetes care in the coming years and how they could be integrated into daily clinical practice. We discuss evidence for benefits and potential harms, and consider existing barriers to scalable adoption, including challenges related to data availability and exchange, health inequality, clinician hesitancy and regulation. Stakeholders, including clinicians, academics, commissioners, policymakers and those with lived experience, must proactively collaborate to realise the potential benefits that AI-supported diabetes care could bring, whilst mitigating risk and navigating the challenges along the way.

    Prediction of Concurrent Hypertensive Disorders in Pregnancy and Gestational Diabetes Mellitus Using Machine Learning Techniques

    Gestational diabetes mellitus and hypertensive disorders in pregnancy are serious maternal health conditions with immediate and lifelong mother-child health consequences. These obstetric pathologies have been widely investigated, but mostly in silos, while studies focusing on their simultaneous occurrence are rare. This is especially the case in the machine learning domain. This retrospective study sought to investigate, construct, evaluate, compare, and isolate a supervised machine learning predictive model for the binary classification of co-occurring gestational diabetes mellitus and hypertensive disorders in pregnancy in a cohort of otherwise healthy pregnant women. To accomplish the stated aims, this study analyzed an extract (n=4624, n_features=38) of a labelled maternal perinatal dataset (n=9967, n_fields=79) collected by the PeriData.Net® database from a participating community hospital in Southeast Wisconsin between 2013 and 2018. The datasets were named “WiseSample” and “WiseSubset” respectively in this study. Thirty-three models were constructed with the six supervised machine learning algorithms explored on the extracted dataset: logistic regression, random forest, decision tree, support vector machine, StackingClassifier, and KerasClassifier, which is a deep learning classification algorithm; all were evaluated using the StratifiedKFold cross-validation (k=10) method. The Synthetic Minority Oversampling Technique was applied to the training data to resolve the class imbalance that was noted in the sub-sample at the preprocessing phase. A wide range of evidence-based feature selection techniques were used to identify the best predictors of the comorbidity under investigation. Multiple model performance evaluation metrics were employed to quantitatively evaluate and compare model performance quality, including accuracy, F1, precision, recall, and the area under the receiver operating characteristic curve. 
Support Vector Machine objectively emerged as the most generalizable model for identifying the gravidae in WiseSubset who may develop concurrent gestational diabetes mellitus and hypertensive disorders in pregnancy, scoring 100.00% (mean) in recall. The model consisted of 9 predictors extracted by recursive feature elimination with cross-validation with random forest. Findings from this study show that appropriate machine learning methods can reliably predict comorbid gestational diabetes and hypertensive disorders in pregnancy, using readily available routine prenatal attributes. Six of the nine most predictive factors of the comorbidity were also in the top 6 selections of at least one other feature selection method examined. The six predictors are healthy weight prepregnancy BMI, mother’s educational status, husband’s educational status, husband’s occupation in one year before the current pregnancy, mother’s blood group, and mother’s age range between 34 and 44 years. Insight from this analysis would support the clinical decision making of obstetric experts when they are caring for (1) nulliparous women, since they would have no obstetric history that could prompt their care providers for feto-maternal medical surveillance; and (2) experienced mothers with no obstetric history suggestive of any of the disease(s) under this study. Hence, among other benefits, the artificial-intelligence-backed tool designed in this research would likely improve maternal and child care quality outcomes.
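The selection-then-classification workflow described above (recursive feature elimination with cross-validation using a random forest, followed by an SVM scored with stratified k-fold recall) can be sketched roughly as below. The data are synthetic stand-ins for the clinical extract, the fold counts and forest size are scaled down for brevity, and the SMOTE oversampling step from the study is omitted to keep the sketch dependency-free:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

# Synthetic, imbalanced stand-in for the clinical extract (38 features);
# the real WiseSubset data are not reproduced here.
X, y = make_classification(n_samples=600, n_features=38, n_informative=9,
                           weights=[0.9, 0.1], random_state=42)

# Stage 1: recursive feature elimination with cross-validation, using a
# random forest as the wrapped estimator.
selector = RFECV(RandomForestClassifier(n_estimators=30, random_state=42),
                 step=2, cv=StratifiedKFold(n_splits=3), scoring="recall")
X_sel = selector.fit_transform(X, y)

# Stage 2: score an SVM on the selected features with stratified k-fold CV.
# (The study applied SMOTE to the training folds before this step.)
recall = cross_val_score(SVC(), X_sel, y, cv=StratifiedKFold(n_splits=3),
                         scoring="recall").mean()
print(f"kept {selector.n_features_} features, mean minority recall {recall:.3f}")
```

In practice SMOTE would be applied inside each training fold (e.g. via an imbalanced-learn pipeline) so that synthetic samples never leak into validation folds.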

    Machine Learning for Diabetes and Mortality Risk Prediction From Electronic Health Records

    Data science can provide invaluable tools to better exploit healthcare data to improve patient outcomes and increase cost-effectiveness. Today, electronic health records (EHR) systems provide a fascinating array of data that data science applications can use to revolutionise the healthcare industry. Utilising EHR data to improve the early diagnosis of a variety of medical conditions/events is a rapidly developing area that, if successful, can help to improve healthcare services across the board. Specifically, as Type-2 Diabetes Mellitus (T2DM) represents one of the most serious threats to health across the globe, analysing the huge volumes of data provided by EHR systems to investigate approaches for accurately predicting the onset of T2DM at an early stage, and medical events such as in-hospital mortality, are two of the most important challenges data science currently faces. The present thesis addresses these challenges by examining the research gaps in the existing literature, pinpointing the un-investigated areas, and proposing novel machine learning models suited to the difficulties inherent in EHR data. To achieve these aims, the present thesis firstly introduces a unique and large EHR dataset collected from Saudi Arabia. Then we investigate the use of state-of-the-art machine learning predictive models that exploit this dataset for diabetes diagnosis and the early identification of patients with pre-diabetes by predicting the blood levels of one of the main indicators of diabetes and pre-diabetes: elevated Glycated Haemoglobin (HbA1c) levels. A novel collaborative denoising autoencoder (Col-DAE) framework is adopted to predict the diabetic (high) HbA1c levels. We also employ several machine learning approaches (random forest, logistic regression, support vector machine, and multilayer perceptron) for the identification of patients with pre-diabetes (elevated HbA1c levels). 
The models employed demonstrate that a patient's risk of diabetes/pre-diabetes can be reliably predicted from EHR records. We then extend this work to include pioneering adoption of recent technologies to investigate the outcomes of the predictive models employed by using recent explainable methods. This work also investigates the effect of using longitudinal data and more of the features available in the EHR systems on the performance and feature ranking of the employed machine learning models for predicting elevated HbA1c levels in non-diabetic patients. This work demonstrates that longitudinal data and available EHR features can improve the performance of the machine learning models and can affect the relative order of importance of the features. Secondly, we develop a machine learning model for the early and accurate prediction of in-hospital mortality events utilising EHR data. This work investigates a novel application of the Stacked Denoising Autoencoder (SDA) to predict in-hospital patient mortality risk. In doing so, we demonstrate how our approach uniquely overcomes the issues associated with imbalanced datasets to which existing solutions are subject. The proposed model, using clinical patient data on a variety of health conditions and without intensive feature engineering, is demonstrated to achieve robust and promising results using EHR patient data recorded during the first 24 hours after admission.
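The denoising-autoencoder idea underlying both the Col-DAE and SDA models above can be shown in miniature: corrupt the inputs with noise and train a bottlenecked network to reconstruct the clean originals. This sketch uses synthetic low-rank data and a plain scikit-learn MLP; the thesis's actual collaborative and stacked architectures are considerably more elaborate:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic low-rank stand-in for EHR feature vectors (labs, vitals, ...):
# 20 observed variables driven by 5 latent factors plus noise.
latent = rng.normal(size=(500, 5))
X = StandardScaler().fit_transform(latent @ rng.normal(size=(5, 20))
                                   + rng.normal(scale=0.1, size=(500, 20)))

# Denoising step: corrupt the inputs, then train a bottlenecked network
# to reconstruct the *clean* originals, forcing a noise-robust code.
X_noisy = X + rng.normal(scale=0.3, size=X.shape)
dae = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
dae.fit(X_noisy, X)  # targets are the uncorrupted inputs

recon_error = float(np.mean((dae.predict(X_noisy) - X) ** 2))
print(f"reconstruction MSE: {recon_error:.3f} (feature variance is ~1.0)")
```

In a stacked variant, the learned hidden representation would become the input to a further autoencoder layer, and the final code would feed a supervised classifier.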

    A Comparison of Feature Selection and Forecasting Machine Learning Algorithms for Predicting Glycaemia in Type 1 Diabetes Mellitus

    Type 1 diabetes mellitus (DM1) is a metabolic disease resulting from a fall in pancreatic insulin production that leads to chronic hyperglycemia. DM1 subjects usually have to undertake a number of assessments of blood glucose levels every day, employing capillary glucometers for the monitoring of blood glucose dynamics. In recent years, advances in technology have allowed for the creation of revolutionary biosensors and continuous glucose monitoring (CGM) techniques. This has enabled the monitoring of a subject’s blood glucose level in real time. On the other hand, few attempts have been made to apply machine learning techniques to predicting glycaemia levels, but dealing with a database containing such a high level of variables is problematic. In this sense, to the best of the authors’ knowledge, the issues of proper feature selection (FS)—the stage before applying predictive algorithms—have not been subject to in-depth discussion and comparison in past research when it comes to forecasting glycaemia. Therefore, in order to assess how a proper FS stage could improve the accuracy of the glycaemia forecasted, this work has developed six FS techniques alongside four predictive algorithms, applying them to a full dataset of biomedical features related to glycaemia. These were harvested through a wide-ranging passive monitoring process involving 25 patients with DM1 in practical real-life scenarios. From the obtained results, we affirm that Random Forest (RF) as both predictive algorithm and FS strategy offers the best average performance (Root Median Square Error, RMSE = 18.54 mg/dL) throughout the 12 considered predictive horizons (up to 60 min in steps of 5 min), showing Support Vector Machines (SVM) to have the best accuracy as a forecasting algorithm when considering, in turn, the average of the six FS techniques applied (RMSE = 20.58 mg/dL).
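The two-stage pattern the paper evaluates (random forest as the FS strategy, then a forecaster trained on the selected subset) can be sketched as follows. The data here are synthetic and the single train/test split is a simplification; the study compares six FS techniques and four predictors over twelve horizons:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
# Synthetic stand-in for the monitored biomedical features: 20 candidate
# predictors, of which only the first 5 actually drive the target glycaemia.
X = rng.normal(size=(400, 20))
y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.5, size=400)

# Stage 1: random forest as the FS strategy: rank features by impurity
# importance and keep the top 5.
rf_fs = RandomForestRegressor(n_estimators=100, random_state=1).fit(X, y)
top = np.argsort(rf_fs.feature_importances_)[::-1][:5]

# Stage 2: random forest again as the forecasting algorithm, trained only
# on the selected feature subset and scored on a held-out tail.
split = 300
rf = RandomForestRegressor(n_estimators=100, random_state=1)
rf.fit(X[:split][:, top], y[:split])
rmse = mean_squared_error(y[split:], rf.predict(X[split:][:, top])) ** 0.5
print(f"selected features: {sorted(top.tolist())}, hold-out RMSE: {rmse:.2f}")
```

Swapping the stage-2 estimator for an SVM regressor, or the stage-1 ranking for another FS technique, reproduces the kind of pairwise comparison the paper performs.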

    Applications of the Internet of Medical Things to Type 1 Diabetes Mellitus

    Type 1 Diabetes Mellitus (DM1) is a condition of the metabolism typified by persistent hyperglycemia as a result of insufficient pancreatic insulin synthesis. This requires patients to be aware of their blood glucose level oscillations every day to deduce a pattern and anticipate future glycemia, and hence, decide the amount of insulin that must be exogenously injected to maintain glycemia within the target range. This approach often suffers from a relatively high imprecision, which can be dangerous. Nevertheless, current developments in Information and Communication Technologies (ICT) and innovative sensors for biological signals that might enable a continuous, complete assessment of the patient’s health provide a fresh viewpoint on treating DM1. With this, we observe that current biomonitoring devices and Continuous Glucose Monitoring (CGM) units can easily obtain data that allow us to know at all times the state of glycemia and other variables that influence its oscillations. A complete review has been made of the variables that influence glycemia in a T1DM patient and that can be measured by the above means. The communications systems necessary to transfer the information collected to a more powerful computational environment, which can adequately handle the amounts of data collected, have also been described. From this point, intelligent data analysis extracts knowledge from the data and allows predictions to be made in order to anticipate risk situations. With all of the above, it is necessary to build a holistic proposal that allows the complete and smart management of T1DM. This review assesses the current shortage of such holistic proposals and the obstacles that future intelligent IoMT-DM1 management systems must surmount. 
Lastly, we provide an outline of a comprehensive IoMT-based proposal for DM1 management that aims to address the limits of prior studies while also using the disruptive technologies highlighted before. Partial funding for open access charge: Universidad de Málaga.

    Non-communicable Diseases, Big Data and Artificial Intelligence

    This reprint includes 15 articles in the field of non-communicable diseases, big data, and artificial intelligence, overviewing the most recent advances in the field of AI and their application potential in 3P medicine.

    2019 Conference Abstracts: Annual Undergraduate Research Conference at the Interface of Biology and Mathematics

    Schedule and abstract book for the Eleventh Annual Undergraduate Research Conference at the Interface of Biology and Mathematics.
    Date: November 16-17, 2019
    Location: UT Conference Center, Knoxville
    Keynote Speaker: Sadie Ryan, Medical Geography, Univ. of Florida; Director, Quantitative Disease Ecology & Conservation Lab (QDEC Lab)
    Featured Speaker: Christopher Strickland, Mathematics, Univ. of Tennessee, Knoxville
