
    Vital signs prediction and early warning score calculation based on continuous monitoring of hospitalised patients using wearable technology

    In this prospective, interventional, international study, we investigate continuous monitoring of hospitalised patients’ vital signs using wearable technology as a basis for real-time early warning score (EWS) estimation and vital-sign time-series prediction. The continuously monitored vital signs are heart rate, blood pressure, respiration rate, and oxygen saturation, collected from a heterogeneous patient population hospitalised in cardiology, post-surgical, and dialysis wards. Two aspects are elaborated in this study. The first is the high-rate (every minute) estimation of statistical values (e.g., minimum and mean) of the vital-sign components of the EWS over one-minute segments, in contrast with the conventional routine of two to three measurements per day. The second explores the use of a hybrid kNN-LS-SVM machine learning algorithm to predict future values of the monitored vital signs. We demonstrate that a real-time implementation of EWS in clinical practice is possible. Furthermore, we show promising vital-sign prediction performance compared with a recent state-of-the-art boosted LSTM approach. The reported mean absolute percentage errors for predicting one-hour-averaged heart rate in cardiology patients are 4.1%, 4.5%, and 5% for the upcoming one, two, and three hours, respectively. These results show the potential of wearable technology for continuously monitoring the vital signs of hospitalised patients, as both real-time EWS estimation and reliable prediction of future vital-sign values are demonstrated. Ultimately, both high-rate EWS computation and vital-sign time-series prediction promise efficient cost-utility, ease of mobility and portability, streaming analytics, and early warning of vital-sign deterioration.
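
    To make the first aspect concrete, the following is a minimal sketch of per-minute aggregation of streamed vital signs followed by a simplified early-warning scoring. The thresholds are loosely NEWS2-style and purely illustrative assumptions, not the scoring or sampling rate used in the study, and all function names are hypothetical.

    # Minimal sketch: per-minute aggregation of streamed vital signs and a
    # simplified early-warning score. Thresholds are illustrative (loosely
    # NEWS2-like) and are NOT the scoring used in the cited study.
    from statistics import mean
    from typing import Dict, List

    def minute_summary(samples: List[Dict[str, float]]) -> Dict[str, float]:
        """Aggregate one minute of samples into the statistics fed to the EWS."""
        return {
            "hr_mean": mean(s["heart_rate"] for s in samples),
            "rr_mean": mean(s["resp_rate"] for s in samples),
            "spo2_min": min(s["spo2"] for s in samples),
            "sbp_min": min(s["systolic_bp"] for s in samples),
        }

    def _band(value: float, bands) -> int:
        """Return the score of the first (low, high, score) band containing value."""
        for low, high, score in bands:
            if low <= value <= high:
                return score
        return 3  # outside all listed bands -> highest sub-score

    def simple_ews(summary: Dict[str, float]) -> int:
        """Sum of per-parameter sub-scores (illustrative thresholds)."""
        return (
            _band(summary["hr_mean"], [(51, 90, 0), (91, 110, 1), (41, 50, 1), (111, 130, 2)])
            + _band(summary["rr_mean"], [(12, 20, 0), (9, 11, 1), (21, 24, 2)])
            + _band(summary["spo2_min"], [(96, 100, 0), (94, 95, 1), (92, 93, 2)])
            + _band(summary["sbp_min"], [(111, 219, 0), (101, 110, 1), (91, 100, 2)])
        )

    # Example: one minute of hypothetical wearable samples.
    minute = [
        {"heart_rate": 92, "resp_rate": 18, "spo2": 95, "systolic_bp": 118},
        {"heart_rate": 95, "resp_rate": 19, "spo2": 94, "systolic_bp": 116},
        {"heart_rate": 97, "resp_rate": 20, "spo2": 95, "systolic_bp": 117},
    ]
    print(simple_ews(minute_summary(minute)))  # recomputed every minute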

    Towards Better Long-range Time Series Forecasting using Generative Forecasting

    Long-range time series forecasting is usually based on one of two existing forecasting strategies: Direct Forecasting and Iterative Forecasting, where the former provides low-bias, high-variance forecasts and the latter leads to low-variance, high-bias forecasts. In this paper, we propose a new forecasting strategy called Generative Forecasting (GenF), which generates synthetic data for the next few time steps and then makes long-range forecasts based on the generated and observed data. We theoretically prove that GenF is able to better balance forecasting variance and bias, leading to a much smaller forecasting error. We implement GenF via three components: (i) CWGAN-TS, a novel conditional Wasserstein Generative Adversarial Network (GAN) based generator for synthetic time-series data; (ii) a transformer-based predictor, which makes long-range predictions using both generated and observed data; and (iii) an information-theoretic clustering algorithm that improves the training of both the CWGAN-TS and the transformer-based predictor. Experimental results on five public datasets demonstrate that GenF significantly outperforms a diverse range of state-of-the-art benchmarks and classical approaches. Specifically, we find a 5%-11% improvement in predictive performance (mean absolute error) while having a 15%-50% reduction in parameters compared to the benchmarks. Lastly, we conduct an ablation study to further explore and demonstrate the effectiveness of the components comprising GenF.
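
    The following is a minimal sketch of the GenF strategy under stated assumptions: a stand-in generator (simple linear extrapolation in place of CWGAN-TS) produces a few synthetic near-future steps, which are appended to the observed window before a stand-in long-range predictor (a ridge regressor in place of the transformer) makes the forecast. All names, sizes, and the toy series are illustrative.

    # Minimal sketch of the Generative Forecasting (GenF) strategy: generate a few
    # synthetic near-future steps, append them to the observed window, then make
    # the long-range forecast from the combined sequence. The paper's CWGAN-TS and
    # transformer are replaced here by trivial stand-ins.
    import numpy as np
    from sklearn.linear_model import Ridge

    WINDOW, SYNTH_STEPS, HORIZON = 24, 4, 12  # observed length, generated steps, target lead

    def generate_synthetic(window: np.ndarray, k: int) -> np.ndarray:
        """Stand-in for CWGAN-TS: extrapolate k steps from the recent linear trend."""
        t = np.arange(len(window))
        slope, intercept = np.polyfit(t, window, 1)
        return intercept + slope * np.arange(len(window), len(window) + k)

    def make_training_set(series: np.ndarray):
        """Build (observed + synthetic window) -> value at HORIZON pairs."""
        X, y = [], []
        for start in range(len(series) - WINDOW - HORIZON):
            obs = series[start:start + WINDOW]
            X.append(np.concatenate([obs, generate_synthetic(obs, SYNTH_STEPS)]))
            y.append(series[start + WINDOW + HORIZON - 1])
        return np.array(X), np.array(y)

    rng = np.random.default_rng(0)
    series = np.sin(np.arange(600) / 10.0) + 0.1 * rng.standard_normal(600)  # toy series

    X, y = make_training_set(series)
    predictor = Ridge().fit(X[:400], y[:400])          # stand-in for the transformer
    print("test MAE:", np.abs(predictor.predict(X[400:]) - y[400:]).mean())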

    A review of Generative Adversarial Networks for Electronic Health Records: applications, evaluation measures and data sources

    Electronic Health Records (EHRs) are a valuable asset for facilitating clinical research and point-of-care applications; however, many challenges, such as data privacy concerns, impede their optimal utilization. Deep generative models, particularly Generative Adversarial Networks (GANs), show great promise in generating synthetic EHR data by learning the underlying data distributions, achieving excellent performance while addressing these challenges. This work aims to review the major developments in various applications of GANs for EHRs and provides an overview of the proposed methodologies. For this purpose, we combine perspectives from healthcare applications and machine learning techniques in terms of source datasets and the fidelity and privacy evaluation of the generated synthetic datasets. We also compile a list of the metrics and datasets used by the reviewed works, which can be utilized as benchmarks for future research in the field. We conclude by discussing challenges in the development of GANs for EHRs and proposing recommended practices. We hope that this work motivates novel research directions at the intersection of healthcare and machine learning.
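
    As a bare-bones illustration of the kind of model the review covers, the sketch below trains a generic (non-Wasserstein, non-conditional) GAN to generate synthetic records for a toy two-column table; the architecture, column choices, and hyperparameters are assumptions and do not correspond to any specific surveyed work.

    # Minimal sketch of a GAN that learns to generate synthetic tabular EHR-like
    # records (here: a toy table of standardized age and systolic BP).
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    N_FEATURES, LATENT = 2, 8

    # Toy "real" records standing in for a de-identified EHR extract.
    real = torch.randn(1024, N_FEATURES) * torch.tensor([1.0, 0.8]) + torch.tensor([0.0, 0.5])

    G = nn.Sequential(nn.Linear(LATENT, 32), nn.ReLU(), nn.Linear(32, N_FEATURES))
    D = nn.Sequential(nn.Linear(N_FEATURES, 32), nn.ReLU(), nn.Linear(32, 1))

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(500):
        batch = real[torch.randint(0, len(real), (128,))]
        fake = G(torch.randn(128, LATENT))

        # Discriminator: real records -> 1, generated records -> 0.
        d_loss = bce(D(batch), torch.ones(128, 1)) + bce(D(fake.detach()), torch.zeros(128, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator: fool the discriminator into labelling fakes as real.
        g_loss = bce(D(fake), torch.ones(128, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # Sample a few synthetic records after training.
    with torch.no_grad():
        print(G(torch.randn(5, LATENT)))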

    A Knowledge Distillation Ensemble Framework for Predicting Short and Long-term Hospitalisation Outcomes from Electronic Health Records Data

    The ability to perform accurate prognosis of patients is crucial for proactive clinical decision making, informed resource management, and personalised care. Existing outcome prediction models suffer from low recall of infrequent positive outcomes. We present a highly scalable and robust machine learning framework to automatically predict adversity, represented by mortality and ICU admission, from time-series vital signs and laboratory results obtained within the first 24 hours of hospital admission. The stacked platform comprises two components: (a) an unsupervised LSTM autoencoder that learns an optimal representation of the time series and uses it to differentiate the less frequent patterns which conclude with an adverse event from the majority patterns that do not, and (b) a gradient boosting model, which relies on the constructed representation to refine prediction by incorporating static features of demographics, admission details, and clinical summaries. The model is used to assess a patient's risk of adversity over time and provides visual justifications of its prediction based on the patient's static features and dynamic signals. Results of three case studies for predicting mortality and ICU admission show that the model outperforms all existing outcome prediction models, achieving a PR-AUC of 0.891 (95% CI: 0.878-0.969) for predicting mortality in ICU and general ward settings and 0.908 (95% CI: 0.870-0.935) for predicting ICU admission.
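
    A minimal sketch of the two-stage idea described above, under simplifying assumptions: an LSTM autoencoder compresses each patient's vital-sign time series into a fixed-length code, and a gradient-boosting classifier then combines that code with static features to predict an adverse outcome. Dimensions, training budget, and the synthetic data are illustrative and not the authors' configuration.

    # Stage 1: unsupervised LSTM autoencoder; Stage 2: gradient boosting on
    # [learned representation, static features]. All data here is synthetic.
    import torch
    import torch.nn as nn
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    torch.manual_seed(0); rng = np.random.default_rng(0)
    N, T, F, H = 400, 24, 4, 16   # patients, hourly steps in 24 h, vital signs, code size

    series = torch.randn(N, T, F)                       # toy vital-sign sequences
    static = rng.standard_normal((N, 3))                # toy demographics / admission features
    labels = rng.integers(0, 2, N)                      # toy adverse-outcome labels

    class LSTMAutoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.LSTM(F, H, batch_first=True)
            self.decoder = nn.LSTM(H, F, batch_first=True)

        def encode(self, x):
            _, (h, _) = self.encoder(x)                 # last hidden state as the code
            return h[-1]

        def forward(self, x):
            repeated = self.encode(x).unsqueeze(1).repeat(1, x.size(1), 1)
            recon, _ = self.decoder(repeated)
            return recon

    ae = LSTMAutoencoder()
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    for _ in range(50):                                 # unsupervised reconstruction training
        opt.zero_grad()
        loss = nn.functional.mse_loss(ae(series), series)
        loss.backward(); opt.step()

    with torch.no_grad():
        codes = ae.encode(series).numpy()

    # Stage 2: gradient boosting on the concatenated representation.
    X = np.concatenate([codes, static], axis=1)
    clf = GradientBoostingClassifier().fit(X[:300], labels[:300])
    print("held-out accuracy:", clf.score(X[300:], labels[300:]))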

    A survey of generative adversarial networks for synthesizing structured electronic health records

    Electronic Health Records (EHRs) are a valuable asset for facilitating clinical research and point-of-care applications; however, many challenges, such as data privacy concerns, impede their optimal utilization. Deep generative models, particularly Generative Adversarial Networks (GANs), show great promise in generating synthetic EHR data by learning the underlying data distributions, achieving excellent performance while addressing these challenges. This work aims to survey the major developments in various applications of GANs for EHRs and provides an overview of the proposed methodologies. For this purpose, we combine perspectives from healthcare applications and machine learning techniques in terms of source datasets and the fidelity and privacy evaluation of the generated synthetic datasets. We also compile a list of the metrics and datasets used by the reviewed works, which can be utilized as benchmarks for future research in the field. We conclude by discussing challenges in the development of GANs for EHRs and proposing recommended practices. We hope that this work motivates novel research directions at the intersection of healthcare and machine learning.

    Development of Artificial Intelligence Algorithms for Early Diagnosis of Sepsis

    Sepsis is a prevalent syndrome that manifests as an uncontrolled response of the body to an infection and may lead to organ dysfunction. Its diagnosis is urgent, since early treatment can reduce patients’ chances of suffering long-term consequences. Yet there are many obstacles to achieving this early detection. Some stem from the syndrome’s pathogenesis, which lacks a characteristic biomarker. The available clinical detection tools are either too complex or lack sensitivity, in both cases delaying the diagnosis. Another obstacle relates to modern technology, which, when paired with the many clinical parameters monitored to detect sepsis, results in extremely heterogeneous and complex medical records; these constitute a considerable burden for the responsible clinicians, who are forced to analyse them to diagnose the syndrome. To help achieve this early diagnosis, and to understand which parameters are most relevant to obtaining it, this work proposes an approach based on Artificial Intelligence algorithms, with the model implemented in the alert system of a sepsis monitoring platform. The platform uses a Random Forest classifier, based on supervised machine learning, capable of detecting the syndrome in two different scenarios. The earliest detection is possible when only five vital-sign parameters are available for measurement, namely heart rate, systolic and diastolic blood pressure, blood oxygen saturation, and body temperature; in this case the model achieves 83% precision and 62% sensitivity. If, in addition to these variables, laboratory measurements of bilirubin, creatinine, hemoglobin, leukocytes, platelet count, and C-reactive protein are available, the platform’s sensitivity increases to 77%. In both cases, blood oxygen saturation was found to be one of the most important variables for the task. Once the platform is tested in real clinical situations, together with an increase in the available clinical data, its performance is expected to improve further.
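
    A minimal sketch of the two operating modes described above, using scikit-learn’s RandomForestClassifier: one model trained on the five vital signs alone and one on vitals plus the laboratory values, with feature importances inspected afterwards. The synthetic data, feature construction, and any resulting scores are illustrative assumptions only.

    # Random Forest sepsis detection in two modes: vitals only vs. vitals + labs.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import precision_score, recall_score

    rng = np.random.default_rng(0)

    vitals = ["heart_rate", "systolic_bp", "diastolic_bp", "spo2", "temperature"]
    labs = ["bilirubin", "creatinine", "hemoglobin", "leukocytes", "platelets", "crp"]

    n = 2000
    X = rng.standard_normal((n, len(vitals) + len(labs)))
    # Toy label loosely driven by SpO2 (column 3) and CRP (last column).
    y = ((0.8 * -X[:, 3] + 0.6 * X[:, -1] + rng.standard_normal(n)) > 1.0).astype(int)

    for name, cols in [("vitals only", slice(0, len(vitals))), ("vitals + labs", slice(None))]:
        X_tr, X_te, y_tr, y_te = train_test_split(X[:, cols], y, test_size=0.3, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        pred = clf.predict(X_te)
        print(name, "precision:", round(precision_score(y_te, pred), 2),
              "recall:", round(recall_score(y_te, pred), 2))

    # Which inputs the model relies on (e.g. SpO2) can be read from the importances.
    full = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    for feat, imp in sorted(zip(vitals + labs, full.feature_importances_), key=lambda p: -p[1])[:3]:
        print(feat, round(imp, 3))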