
    E-Quarantine: A Smart Health System for Monitoring Coronavirus Patients for Remotely Quarantine

    Full text link
    Coronavirus has officially become a global pandemic because of the speed at which it spread across countries. The growing number of infected patients makes it impossible to provide full care in hospitals and has afflicted many doctors and nurses working in them. This paper proposes a smart health system that monitors Coronavirus patients remotely in order to protect the lives of health service members (such as physicians and nurses) from infection. The system observes patients through multiple sensors that record their vital parameters every second, including temperature, respiratory rate, pulse rate, blood pressure, and the time of measurement. The proposed system saves lives and improves decision-making in dangerous cases. It uses artificial intelligence and the Internet of Things to implement remote quarantine and to support decisions in various situations. It allows patients to be monitored remotely and guarantees that they receive their medicines and complete health care without anyone else contracting the disease. It targets two groups of patients: the most serious medical cases, whose observation in hospitals has caused infection among thousands of healthcare workers and who therefore most need the system, and less serious cases, for whom the system enables physicians to monitor and deliver healthcare at patients' homes, saving hospital places for critical cases. Comment: 27 pages, 9 figures, and 7 tables
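
    A minimal sketch of how such a remote-monitoring system might represent a sensor reading and flag it for physician review; the field names and alert thresholds below are illustrative assumptions, not values from the paper.

```python
from dataclasses import dataclass

@dataclass
class VitalReading:
    patient_id: str
    temperature_c: float   # body temperature in degrees Celsius
    respiratory_rate: int  # breaths per minute
    pulse_rate: int        # beats per minute
    systolic_bp: int       # mmHg
    timestamp: float       # Unix time of the sensor reading

def needs_escalation(r: VitalReading) -> bool:
    """Flag a reading for physician review using illustrative thresholds."""
    return (
        r.temperature_c >= 39.0
        or r.respiratory_rate >= 30
        or r.pulse_rate >= 120
        or r.systolic_bp <= 90
    )

if __name__ == "__main__":
    reading = VitalReading("patient-001", 39.4, 22, 110, 118, 1_600_000_000.0)
    print("escalate" if needs_escalation(reading) else "continue home monitoring")
```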

    Multitask learning and benchmarking with clinical time series data

    Full text link
    Health care is one of the most exciting frontiers in data mining and machine learning. The successful adoption of electronic health records (EHRs) has created an explosion in digital clinical data available for analysis, but progress in machine learning for healthcare research has been difficult to measure because of the absence of publicly available benchmark data sets. To address this problem, we propose four clinical prediction benchmarks using data derived from the publicly available Medical Information Mart for Intensive Care (MIMIC-III) database. These tasks cover a range of clinical problems including modeling risk of mortality, forecasting length of stay, detecting physiologic decline, and phenotype classification. We propose strong linear and neural baselines for all four tasks and evaluate the effect of deep supervision, multitask training and data-specific architectural modifications on the performance of neural models. Comment: This version of the paper adds details about the generation of the benchmark tasks and describes improved neural baselines
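
    As an illustration of the multitask setup described above, the sketch below shows a shared recurrent encoder with one output head per benchmark task; the feature count, label counts, and layer sizes are assumptions for demonstration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MultitaskLSTM(nn.Module):
    """Shared LSTM encoder with one head per clinical prediction task."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.mortality_head = nn.Linear(hidden, 1)    # in-hospital mortality (binary)
        self.decomp_head = nn.Linear(hidden, 1)       # physiologic decline (binary)
        self.los_head = nn.Linear(hidden, 1)          # length of stay (regression)
        self.phenotype_head = nn.Linear(hidden, 25)   # assumed 25 phenotype labels

    def forward(self, x):                 # x: (batch, time, n_features)
        out, _ = self.encoder(x)
        last = out[:, -1, :]              # summary of the stay: last hidden state
        return {
            "mortality": self.mortality_head(last),
            "decompensation": self.decomp_head(last),
            "length_of_stay": self.los_head(last),
            "phenotypes": self.phenotype_head(last),
        }

model = MultitaskLSTM(n_features=76)
outputs = model(torch.randn(8, 48, 76))   # 8 stays, 48 hourly steps, 76 features
```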

    A predictive analytics approach to reducing avoidable hospital readmission

    Full text link
    Hospital readmission has become a critical metric of the quality and cost of healthcare. Medicare anticipates that nearly $17 billion is paid out on the 20% of patients who are readmitted within 30 days of discharge. Although several interventions such as transition care management and discharge reengineering have been practiced in recent years, their effectiveness and sustainability depend on how well they can identify and target patients at high risk of rehospitalization. Based on the literature, most current risk prediction models fail to reach an acceptable accuracy level; none of them considers a patient's history of readmission and the impact of changes in patient attributes over time; and they often do not discriminate between planned and unnecessary readmissions. Tackling these drawbacks, we develop a new readmission metric based on administrative data that can distinguish potentially avoidable readmissions from all other types of readmission. We further propose a tree-based classification method to estimate the predicted probability of readmission that can directly incorporate a patient's history of readmission and changes in risk factors over time. The proposed methods are validated with 2011-12 Veterans Health Administration data from inpatients hospitalized for heart failure, acute myocardial infarction, pneumonia, or chronic obstructive pulmonary disease in the State of Michigan. Results show improved discrimination power compared to the literature (c-statistics > 80%) and good calibration. Comment: 30 pages, 4 figures, 7 tables
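
    A hedged sketch of the general idea of a tree-based readmission classifier that includes readmission-history features; the synthetic data and feature names are illustrative, and scikit-learn's GradientBoostingClassifier stands in for the paper's specific tree-based method.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Illustrative feature matrix per index hospitalization, e.g.
# [age, n_prior_admissions, n_prior_readmissions, days_since_last_discharge,
#  comorbidity_count]; the history columns carry the "readmission history" signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = (X[:, 2] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=2000) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier(n_estimators=200, max_depth=3, random_state=0)
clf.fit(X_tr, y_tr)
print("c-statistic:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```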

    Identifying Diabetic Patients with High Risk of Readmission

    Full text link
    Hospital readmissions are expensive and reflect inadequacies in the healthcare system. In the United States alone, treatment of readmitted diabetic patients exceeds 250 million dollars per year. Early identification of patients facing a high risk of readmission can enable healthcare providers to conduct additional investigations and possibly prevent future readmissions. This not only improves the quality of care but also reduces the medical expense of readmission. Machine learning methods have been leveraged on public health data to build a system for identifying diabetic patients facing a high risk of future readmission. The number of inpatient visits, discharge disposition and admission type were identified as strong predictors of readmission. Further, it was found that the number of laboratory tests and discharge disposition together predict whether the patient will be readmitted shortly after being discharged from the hospital (i.e. <30 days) or after a longer period of time (i.e. >30 days). These insights can help healthcare providers improve inpatient diabetic care. Finally, the cost analysis suggests that $252.76 million can be saved across 98,053 diabetic patient encounters by incorporating the proposed cost-sensitive analysis model. Comment: 10 pages, 5 figures, 7 tables
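
    One way to turn such risk estimates into a cost-sensitive decision is to intervene only when the expected savings exceed the intervention cost; the dollar figures and effectiveness rate in this sketch are invented for illustration and are not taken from the paper.

```python
def intervene(p_readmit: float,
              cost_readmission: float = 15_000.0,
              cost_intervention: float = 1_200.0,
              intervention_effectiveness: float = 0.25) -> bool:
    """Intervene when expected savings exceed the intervention cost.

    Expected savings = p(readmit) * effectiveness * cost of a readmission.
    All figures are illustrative assumptions, not values from the paper.
    """
    expected_savings = p_readmit * intervention_effectiveness * cost_readmission
    return expected_savings > cost_intervention

print(intervene(0.40))  # True: expected savings 1500 > 1200
print(intervene(0.20))  # False: expected savings 750 < 1200
```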

    Machine learning and AI research for Patient Benefit: 20 Critical Questions on Transparency, Replicability, Ethics and Effectiveness

    Full text link
    Machine learning (ML), artificial intelligence (AI) and other modern statistical methods are providing new opportunities to operationalize previously untapped and rapidly growing sources of data for patient benefit. Whilst there is a lot of promising research currently being undertaken, the literature as a whole lacks: transparency; clear reporting to facilitate replicability; exploration of potential ethical concerns; and clear demonstrations of effectiveness. There are many reasons why these issues exist, but one of the most important, for which we provide a preliminary solution here, is the current lack of ML/AI-specific best practice guidance. Although there is no consensus on what best practice looks like in this field, we believe that interdisciplinary groups pursuing research and impact projects in the ML/AI for health domain would benefit from answering a series of questions based on the important issues that arise when undertaking work of this nature. Here we present 20 questions that span the entire project life cycle, from inception, data analysis, and model evaluation, to implementation, as a means to facilitate project planning and post-hoc (structured) independent evaluation. By beginning to answer these questions in different settings, we can start to understand what constitutes a good answer, and we expect that the resulting discussion will be central to developing an international consensus framework for transparent, replicable, ethical and effective research in artificial intelligence (AI-TREE) for health. Comment: 25 pages, 2 boxes, 1 figure

    Patient2Vec: A Personalized Interpretable Deep Representation of the Longitudinal Electronic Health Record

    Full text link
    The wide implementation of electronic health record (EHR) systems facilitates the collection of large-scale health data from real clinical settings. Despite the significant increase in adoption of EHR systems, these data remain largely unexplored, yet present a rich source for knowledge discovery from patient health histories in tasks such as understanding disease correlations and predicting health outcomes. However, the heterogeneity, sparsity, noise, and bias in these data present many complex challenges. This complexity makes it difficult to translate potentially relevant information into machine learning algorithms. In this paper, we propose a computational framework, Patient2Vec, to learn an interpretable deep representation of longitudinal EHR data which is personalized for each patient. To evaluate this approach, we apply it to the prediction of future hospitalizations using real EHR data and compare its predictive performance with baseline methods. Patient2Vec produces a vector space with meaningful structure and achieves an AUC of around 0.799, outperforming baseline methods. Finally, the learned feature importances can be visualized and interpreted at both the individual and population levels to provide clinical insights. Comment: Accepted by IEEE Access
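
    The sketch below illustrates the general pattern of an attention-pooled recurrent encoder over visit vectors, whose attention weights can be inspected per patient; it is a simplified stand-in for this kind of interpretable representation, not the exact Patient2Vec architecture, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

class VisitAttentionEncoder(nn.Module):
    """GRU over visit embeddings with per-patient inspectable attention weights."""
    def __init__(self, n_codes: int, emb: int = 64, hidden: int = 64):
        super().__init__()
        self.embed = nn.Linear(n_codes, emb)          # multi-hot visit -> dense vector
        self.gru = nn.GRU(emb, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.out = nn.Linear(hidden, 1)               # e.g. future-hospitalization logit

    def forward(self, visits):                        # visits: (batch, n_visits, n_codes)
        h, _ = self.gru(torch.relu(self.embed(visits)))
        weights = torch.softmax(self.attn(h), dim=1)  # (batch, n_visits, 1)
        patient_vec = (weights * h).sum(dim=1)        # attention-pooled patient vector
        return self.out(patient_vec), weights         # weights show which visits mattered

model = VisitAttentionEncoder(n_codes=500)
logit, attn = model(torch.rand(4, 10, 500))
```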

    Simultaneous Modeling of Multiple Complications for Risk Profiling in Diabetes Care

    Full text link
    Type 2 diabetes mellitus (T2DM) is a chronic disease that often results in multiple complications. Risk prediction and profiling of T2DM complications is critical for healthcare professionals to design personalized treatment plans for patients in diabetes care and improve outcomes. In this paper, we study the risk of developing complications after the initial T2DM diagnosis from longitudinal patient records. We propose a novel multi-task learning approach to simultaneously model multiple complications, where each task corresponds to the risk modeling of one complication. Specifically, the proposed method strategically captures the relationships (1) between the risks of multiple T2DM complications, (2) between the different risk factors, and (3) between the risk factor selection patterns. The method uses coefficient shrinkage to identify an informative subset of risk factors from high-dimensional data, and uses a hierarchical Bayesian framework to allow domain knowledge to be incorporated as priors. The proposed method is favorable for healthcare applications because, in addition to improved prediction performance, it also identifies relationships among the different risks and risk factors. Extensive experimental results on a large electronic medical claims database show that the proposed method outperforms state-of-the-art models by a significant margin. Furthermore, we show that the risk associations learned and the risk factors identified lead to meaningful clinical insights.
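
    As a rough, non-Bayesian stand-in for the shared risk-factor selection idea, the sketch below uses scikit-learn's MultiTaskLasso, which shrinks coefficients jointly across tasks so the same subset of factors is selected for every complication; the data are synthetic and the method is named here as a substitute, not as the paper's hierarchical Bayesian formulation.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

# Synthetic stand-in for joint risk modeling of several T2DM complications:
# one column of Y per complication, with a shared sparsity pattern across tasks.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))                 # 40 candidate risk factors
true_coef = np.zeros((3, 40))
true_coef[:, :5] = rng.normal(size=(3, 5))     # only 5 factors are truly informative
Y = X @ true_coef.T + 0.1 * rng.normal(size=(500, 3))

model = MultiTaskLasso(alpha=0.05).fit(X, Y)   # coefficient shrinkage shared across tasks
selected = np.flatnonzero(np.abs(model.coef_).sum(axis=0) > 1e-8)
print("risk factors selected jointly across complications:", selected)
```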

    Diabetes Mellitus Forecasting Using Population Health Data in Ontario, Canada

    Full text link
    Leveraging health administrative data (HAD) datasets for predicting the risk of chronic diseases, including diabetes, has recently gained a lot of attention in the machine learning community. In this paper, we use the largest health records datasets of patients in Ontario, Canada. Provided by the Institute for Clinical Evaluative Sciences (ICES), this database is diverse in age, gender and ethnicity. The datasets include demographics, lab measurements, drug benefits, healthcare system interactions, and ambulatory and hospitalization records. We perform one of the first large-scale machine learning studies with this data to predict diabetes 1-10 years ahead, which requires no additional screening of individuals. In the best setup, we reach a test AUC of 80.3 with a single model trained on an observation window of 5 years with a one-year buffer using all datasets. A subset of the top 15 features alone (out of a total of 963) provides a test AUC of 79.1. In this paper, we provide extensive machine learning model performance and feature contribution analysis, which enables us to narrow down to the most important features for diabetes forecasting. Examples include chronic conditions such as asthma and hypertension, lab results, diagnostic codes in insurance claims, age and geographical information. Comment: 18 pages, 3 figures, 8 tables. Submitted to the 2019 ML for Healthcare conference
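
    A small sketch of the observation-window / buffer / prediction-horizon label construction described above; the record format, helper name, and example dates are hypothetical, not the ICES schema.

```python
from datetime import date, timedelta

def make_example(records, diagnosis_date, index_date,
                 obs_years=5, buffer_years=1, horizon_years=5):
    """Build one training example in the observation-window/buffer/horizon style.

    `records` is a list of (event_date, feature_dict) pairs and `diagnosis_date`
    is the date of incident diabetes (or None); both are illustrative stand-ins.
    """
    obs_start = index_date - timedelta(days=365 * obs_years)
    horizon_start = index_date + timedelta(days=365 * buffer_years)  # one-year buffer
    horizon_end = horizon_start + timedelta(days=365 * horizon_years)

    features = [f for d, f in records if obs_start <= d <= index_date]
    label = int(diagnosis_date is not None
                and horizon_start <= diagnosis_date <= horizon_end)
    return features, label

x, y = make_example([(date(2014, 3, 1), {"hba1c": 6.1})],
                    diagnosis_date=date(2018, 6, 1),
                    index_date=date(2016, 1, 1))
print(x, y)  # features from the 5-year window, label 1 (diagnosis in the horizon)
```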

    Boosting Deep Learning Risk Prediction with Generative Adversarial Networks for Electronic Health Records

    Full text link
    The rapid growth of Electronic Health Records (EHRs), along with the accompanying opportunities in Data-Driven Healthcare (DDH), has been attracting widespread interest and attention. Recent progress in the design and application of deep learning methods has shown promising results and is forcing massive changes in healthcare academia and industry, but most of these methods rely on massive labeled data. In this work, we propose a general deep learning framework which is able to boost risk prediction performance with limited EHR data. Our model uses a modified generative adversarial network, ehrGAN, which can provide plausible labeled EHR data by mimicking real patient records, to augment the training dataset in a semi-supervised learning manner. We use this generative model together with a convolutional neural network (CNN) based prediction model to improve onset prediction performance. Experiments on two real healthcare datasets demonstrate that our proposed framework produces realistic data samples and achieves significant improvements on classification tasks with the generated data over several state-of-the-art baselines. Comment: To appear in ICDM 2017. This is the full version of the paper with 8 pages
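
    The sketch below shows a generic GAN training step and how generated records could be mixed into a classifier's training batch; the network sizes and record encoding are assumptions and do not reproduce ehrGAN itself.

```python
import torch
import torch.nn as nn

# Minimal GAN-style augmentation for EHR matrices, assuming each record is a
# (codes x time) matrix with values in [0, 1]; sizes are illustrative only.

class Generator(nn.Module):
    def __init__(self, z_dim=64, out_dim=40 * 24):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, out_dim), nn.Sigmoid())
    def forward(self, z):
        return self.net(z).view(-1, 1, 40, 24)

class Discriminator(nn.Module):
    def __init__(self, in_dim=40 * 24):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 1))
    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.rand(16, 1, 40, 24)   # a batch of real (normalized) records
z = torch.randn(16, 64)

# one discriminator step: real vs. generated
opt_d.zero_grad()
d_loss = bce(D(real), torch.ones(16, 1)) + bce(D(G(z).detach()), torch.zeros(16, 1))
d_loss.backward()
opt_d.step()

# one generator step: fool the discriminator
opt_g.zero_grad()
g_loss = bce(D(G(z)), torch.ones(16, 1))
g_loss.backward()
opt_g.step()

# synthetic records can then be mixed into the CNN classifier's training set
augmented_batch = torch.cat([real, G(torch.randn(16, 64)).detach()], dim=0)
```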

    Recurrent Neural Networks based Obesity Status Prediction Using Activity Data

    Full text link
    Obesity is a serious public health concern worldwide, increasing the risk of many diseases, including hypertension, stroke, and type 2 diabetes. To tackle this problem, researchers across the health ecosystem are collecting diverse types of data, including biomedical, behavioral and activity data, and utilizing machine learning techniques to mine hidden patterns for obesity status improvement prediction. While existing machine learning methods such as Recurrent Neural Networks (RNNs) can provide exceptional results, it is challenging to discover hidden patterns in such sequential data because of the irregular observation times. Meanwhile, the lack of understanding of why these learning models are effective also limits further improvement of their architectures. Thus, in this work, we develop an RNN-based time-aware architecture to tackle the challenging problems of handling irregular observation times and extracting relevant features from longitudinal patient records for obesity status improvement prediction. To improve prediction performance, we train our model using two data sources: (i) electronic medical records containing information regarding lab tests, diagnoses, and demographics; (ii) continuous activity data collected from popular wearables. Evaluations on real-world data demonstrate that our proposed method can capture the underlying structure in users' time sequences with irregularities, and achieve an accuracy of 77-86% in predicting obesity status improvement. Comment: 8 pages, 6 figures, ICMLA 2018 conference
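
    A minimal sketch of one common way to make a recurrent model time-aware: decay the hidden state according to the elapsed time between observations before each update. This is a generic time-decay gating idea, not the paper's exact cell, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

class TimeDecayGRU(nn.Module):
    """GRU whose hidden state is decayed by the gap between consecutive observations."""
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.cell = nn.GRUCell(n_features, hidden)
        self.decay = nn.Linear(1, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x, deltas):
        # x: (B, T, F) observations; deltas: (B, T, 1) hours since the previous observation
        h = x.new_zeros(x.size(0), self.cell.hidden_size)
        for t in range(x.size(1)):
            gamma = torch.exp(-torch.relu(self.decay(deltas[:, t])))  # decay in (0, 1]
            h = self.cell(x[:, t], gamma * h)   # shrink stale state, then update
        return torch.sigmoid(self.out(h))       # probability of obesity-status improvement

model = TimeDecayGRU(n_features=12)
prob = model(torch.randn(4, 20, 12), torch.rand(4, 20, 1) * 48)
```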