
    A Powerful Paradigm for Cardiovascular Risk Stratification Using Multiclass, Multi-Label, and Ensemble-Based Machine Learning Paradigms: A Narrative Review

    Background and Motivation: Cardiovascular disease (CVD) causes the highest mortality globally. With escalating healthcare costs, early non-invasive CVD risk assessment is vital. Conventional methods have shown poor performance compared to more recent and fast-evolving Artificial Intelligence (AI) methods. The proposed study reviews the three most recent paradigms for CVD risk assessment, namely multiclass, multi-label, and ensemble-based methods, in (i) office-based and (ii) stress-test laboratory settings. Methods: A total of 265 CVD-based studies were selected using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) model. Given their popularity and recent development, the study analyzed the three paradigms within machine learning (ML) frameworks. We comprehensively review these three methods using attributes such as architecture, applications, pros and cons, scientific validation, clinical evaluation, and AI risk of bias (RoB) in the CVD framework. These ML techniques were then extended to mobile and cloud-based infrastructure. Findings: The most popular biomarkers were office-based and laboratory-based measurements, image-based phenotypes, and medication usage. Surrogate carotid scanning for coronary artery risk prediction has shown promising results. Ground truth (GT) selection for AI-based training, along with scientific and clinical validation, is critical for CVD stratification to avoid RoB. The most popular classification paradigm is multiclass, followed by ensemble and multi-label. The use of deep learning techniques in CVD risk stratification is at a very early stage of development. Mobile and cloud-based AI technologies are likely to be the future. Conclusions: AI-based methods for CVD risk assessment are promising and successful. The choice of GT is vital in AI-based models to prevent RoB. The amalgamation of image-based strategies with conventional risk factors provides the highest stability when using the three CVD paradigms in non-cloud and cloud-based frameworks.
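The three paradigms the review compares can be contrasted concretely. The sketch below is illustrative only (not taken from the review itself): it uses synthetic scikit-learn data, and all model and dataset choices are assumptions.

```python
from sklearn.datasets import make_classification, make_multilabel_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier
from sklearn.tree import DecisionTreeClassifier

# (1) Multiclass: each patient falls into exactly one risk class
# (e.g. low / medium / high).
Xc, yc = make_classification(n_samples=300, n_features=10, n_informative=5,
                             n_classes=3, random_state=0)
multiclass = LogisticRegression(max_iter=1000).fit(Xc, yc)

# (2) Multi-label: a patient may carry several condition labels at once,
# so the target is a binary indicator vector per patient.
Xm, ym = make_multilabel_classification(n_samples=300, n_features=10,
                                        n_classes=4, random_state=0)
multilabel = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(Xm, ym)

# (3) Ensemble: heterogeneous base learners combined by soft voting.
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("dt", DecisionTreeClassifier(random_state=0)),
                ("rf", RandomForestClassifier(n_estimators=50, random_state=0))],
    voting="soft").fit(Xc, yc)
```

The key structural difference is in the prediction shape: the multiclass and ensemble models return one class per patient, while the multi-label model returns one binary vector per patient.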

    Learning Better Clinical Risk Models.

    Risk models are used throughout clinical practice to estimate a patient's risk of suffering particular outcomes. These models are important for matching patients to the appropriate level of treatment, for effective allocation of resources, and for fairly evaluating the performance of healthcare providers. The application and development of methods from the field of machine learning have the potential to improve patient outcomes and reduce healthcare spending through more accurate estimates of patient risk. This dissertation addresses several limitations of currently used clinical risk models, through the identification of novel risk factors and through the training of more effective models. As wearable monitors become more effective and less costly, the previously untapped predictive information in a patient's physiology over time has the potential to greatly improve clinical practice. However, translating these technological advances into real-world clinical impact will require computational methods to identify high-risk structure in the data. This dissertation presents several approaches to learning risk factors from physiological recordings, through the discovery of latent states using topic models and through the identification of predictive features using convolutional neural networks. We evaluate these approaches on patients from a large clinical trial and find that these methods not only outperform prior approaches to leveraging heart rate for cardiac risk stratification, but also improve overall prediction of cardiac death when considered alongside standard clinical risk factors. We also demonstrate the utility of this work for learning a richer description of sleep recordings. Additionally, we consider the development of risk models in the presence of missing data, which is ubiquitous in real-world medical settings. We present a novel method for jointly learning risk and imputation models in the presence of missing data, and find significant improvements relative to standard approaches when evaluated on a large national registry of trauma patients.
    PhD dissertation, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113326/1/alexve_1.pd
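The dissertation's joint risk-and-imputation method is not reproduced here, but the "standard approach" it improves upon — impute first, then fit a risk model — can be sketched on synthetic data with values missing at random. All data and parameter choices below are assumptions for illustration.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.RandomState(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic outcome
X[rng.uniform(size=X.shape) < 0.2] = np.nan      # ~20% of values missing

# Standard two-stage baseline: mean imputation feeding a probit-style
# risk model (logistic regression here).
model = Pipeline([("impute", SimpleImputer(strategy="mean")),
                  ("risk", LogisticRegression(max_iter=1000))]).fit(X, y)
probs = model.predict_proba(X)[:, 1]             # estimated patient risk
```

The limitation this baseline illustrates is that the imputer is fit without regard to the risk objective; the dissertation's contribution is learning both jointly.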

    Current and Future Use of Artificial Intelligence in Electrocardiography.

    Artificial intelligence (AI) is increasingly used in electrocardiography (ECG) to assist in diagnosis, stratification, and management. AI algorithms can help clinicians in the following areas: (1) interpretation and detection of arrhythmias, ST-segment changes, QT prolongation, and other ECG abnormalities; (2) risk prediction, integrated with or without clinical variables, to predict arrhythmias, sudden cardiac death, stroke, and other cardiovascular events; (3) monitoring ECG signals from cardiac implantable electronic devices and wearable devices in real time and alerting clinicians or patients when significant changes occur according to timing, duration, and situation; (4) signal processing, improving ECG quality and accuracy by removing noise/artifacts/interference and extracting features not visible to the human eye (heart rate variability, beat-to-beat intervals, wavelet transforms, sample-level resolution, etc.); (5) therapy guidance, assisting in patient selection, optimizing treatments, improving symptom-to-treatment times, and improving cost effectiveness (earlier activation of code infarction in patients with ST-segment elevation, predicting the response to antiarrhythmic drugs or cardiac implantable device therapies, reducing the risk of cardiac toxicity, etc.); (6) facilitating the integration of ECG data with other modalities (imaging, genomics, proteomics, biomarkers, etc.). In the future, AI is expected to play an increasingly important role in ECG diagnosis and management, as more data become available and more sophisticated algorithms are developed.
    Manuel Marina-Breysse has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement number 965286 (Machine Learning and Artificial Intelligence for Early Detection of Stroke and Atrial Fibrillation, MAESTRIA Consortium) and from EIT Health, a body of the European Union.
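Item (4) above — extracting features such as heart rate variability (HRV) that are not visible to the human eye — can be illustrated with standard time-domain HRV definitions. The RR series below is synthetic and the feature set is a conventional subset, not any specific algorithm from the text.

```python
import numpy as np

def hrv_features(rr_ms):
    """Time-domain HRV features from successive RR intervals (milliseconds)."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    return {
        "mean_rr": rr.mean(),
        "sdnn": rr.std(ddof=1),                 # overall variability
        "rmssd": np.sqrt(np.mean(diff ** 2)),   # beat-to-beat variability
        "pnn50": np.mean(np.abs(diff) > 50.0),  # fraction of large successive changes
    }

# Synthetic RR series: ~800 ms beats with a slow respiratory-like oscillation.
rr = 800 + 40 * np.sin(np.linspace(0, 6 * np.pi, 120))
feats = hrv_features(rr)
```

Features like these are what downstream risk-prediction models (item (2)) typically consume alongside clinical variables.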

    Early hospital mortality prediction using vital signals

    Early hospital mortality prediction is critical as intensivists strive to make efficient medical decisions about severely ill patients staying in intensive care units. As a result, various methods have been developed to address this problem based on clinical records. However, some laboratory test results are time-consuming to obtain and process. In this paper, we propose a novel method to predict mortality using features extracted from the heart signals of patients within the first hour of ICU admission. In order to predict the risk, quantitative features have been computed based on the heart rate signals of ICU patients. Each signal is described in terms of 12 statistical and signal-based features. The extracted features are fed into eight classifiers: decision tree, linear discriminant, logistic regression, support vector machine (SVM), random forest, boosted trees, Gaussian SVM, and K-nearest neighbors (K-NN). To derive insight into the performance of the proposed method, several experiments have been conducted using the well-known clinical dataset named Medical Information Mart for Intensive Care III (MIMIC-III). The experimental results demonstrate the capability of the proposed method in terms of precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC). The decision tree classifier satisfies both accuracy and interpretability better than the other classifiers, producing an F1-score and AUC equal to 0.91 and 0.93, respectively. This indicates that heart rate signals can be used for predicting mortality in ICU patients, achieving performance comparable with existing predictors that rely on high-dimensional features from clinical records, which need to be processed and may contain missing information.
    Comment: 11 pages, 5 figures; preprint of accepted paper at IEEE/ACM CHASE 2018, published in the Smart Health journal.
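The pipeline described above — statistical features from first-hour heart rate signals feeding a decision tree — can be sketched as follows. The cohort here is synthetic (MIMIC-III requires credentialed access), and only a subset of the paper's 12 features is shown; all distributional choices are assumptions for illustration.

```python
import numpy as np
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)

def signal_features(hr):
    """A few statistical features of a heart rate signal (illustrative subset)."""
    return [hr.mean(), hr.std(), hr.min(), hr.max(),
            np.percentile(hr, 25), np.percentile(hr, 75)]

# Synthetic cohort: non-survivors drawn with higher, more variable heart rates.
X, y = [], []
for i in range(400):
    died = (i % 4 == 0)
    hr = rng.normal(95 if died else 80, 15 if died else 8, size=360)
    X.append(signal_features(hr))
    y.append(int(died))
X, y = np.array(X), np.array(y)

Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(Xtr, ytr)
score = f1_score(yte, clf.predict(Xte))
```

A shallow tree is a reasonable stand-in for the paper's interpretability argument: its split thresholds on mean and variability of heart rate can be read directly by a clinician.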

    Time series kernel similarities for predicting Paroxysmal Atrial Fibrillation from ECGs

    We tackle the problem of classifying electrocardiography (ECG) signals with the aim of predicting the onset of Paroxysmal Atrial Fibrillation (PAF). Atrial fibrillation is the most common type of arrhythmia, but in many cases PAF episodes are asymptomatic. Therefore, in order to help diagnose PAF, it is important to design procedures for detecting and, more importantly, predicting PAF episodes. We propose a method for predicting PAF events whose first step consists of a feature extraction procedure that represents each ECG as a multivariate time series. Next, we design a classification framework based on kernel similarities for multivariate time series, capable of handling missing data. We consider different approaches to performing classification in the original space of the multivariate time series and in an embedding space defined by the kernel similarity measure. We achieve a classification accuracy comparable with state-of-the-art methods, with the additional advantage of detecting the PAF onset up to 15 minutes in advance.
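Classification in the space induced by a time-series kernel can be sketched with scikit-learn's precomputed-kernel SVM. A plain RBF kernel over flattened series stands in for the paper's multivariate time-series kernel, and the data is synthetic; neither the actual kernel nor the PAF recordings are reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
n, T = 60, 50
X = rng.normal(size=(n, T))
X[: n // 2] += np.sin(np.linspace(0, 4 * np.pi, T))   # class-0 series carry a rhythm
y = np.array([0] * (n // 2) + [1] * (n // 2))

def kernel_matrix(A, B, gamma=0.05):
    """Gram matrix K[i, j] = exp(-gamma * ||A_i - B_j||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# The classifier never sees raw series, only pairwise similarities —
# which is what lets a kernel designed for missing data plug in directly.
clf = SVC(kernel="precomputed").fit(kernel_matrix(X, X), y)
preds = clf.predict(kernel_matrix(X, X))
```

Swapping `kernel_matrix` for a similarity that tolerates missing samples is exactly the degree of freedom the paper exploits.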

    Unsupervised Similarity-Based Risk Stratification for Cardiovascular Events Using Long-Term Time-Series Data

    In medicine, one often bases decisions upon a comparative analysis of patient data. In this paper, we build upon this observation and describe similarity-based algorithms to risk stratify patients for major adverse cardiac events. We evolve the traditional approach of comparing patient data in two ways. First, we propose similarity-based algorithms that compare patients in terms of their long-term physiological monitoring data. Symbolic mismatch identifies functional units in long-term signals and measures changes in the morphology and frequency of these units across patients. Second, we describe similarity-based algorithms that are unsupervised and do not require comparisons to patients with known outcomes for risk stratification. This is achieved by using an anomaly detection framework to identify patients who are unlike other patients in a population and may potentially be at an elevated risk. We demonstrate the potential utility of our approach by showing how symbolic mismatch-based algorithms can be used to classify patients as being at high or low risk of major adverse cardiac events by comparing their long-term electrocardiograms to those of a large population. We describe how symbolic mismatch can be used in three different existing methods: one-class support vector machines, nearest neighbor analysis, and hierarchical clustering. When evaluated on a population of 686 patients with available long-term electrocardiographic data, symbolic mismatch-based comparative approaches were able to identify patients at roughly a two-fold increased risk of major adverse cardiac events in the 90 days following acute coronary syndrome. These results were consistent even after adjusting for other clinical risk variables.
    National Science Foundation (U.S.) (CAREER award 1054419).
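The unsupervised idea — flagging patients whose signals are unlike the rest of the population, without using outcome labels — can be sketched with a one-class SVM. Symbolic mismatch itself is not implemented here; plain feature vectors on synthetic data stand in for it, and all distributional choices are assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.RandomState(0)
population = rng.normal(0, 1, size=(500, 8))   # typical patients' signal descriptors
outliers = rng.normal(4, 1, size=(10, 8))      # physiologically unusual patients
X = np.vstack([population, outliers])

# Fit on the whole population with no labels; nu bounds the fraction flagged.
ocsvm = OneClassSVM(nu=0.05, gamma="scale").fit(X)
flags = ocsvm.predict(X)                       # -1 = anomalous, +1 = typical
n_flagged_outliers = int((flags[-10:] == -1).sum())
```

Patients the model marks `-1` are the "unlike the rest" group that would be investigated for potentially elevated risk.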

    Heart Disease Detection using Vision-Based Transformer Models from ECG Images

    Heart disease, also known as cardiovascular disease, is a prevalent and critical medical condition characterized by the impairment of the heart and blood vessels, leading to complications such as coronary artery disease, heart failure, and myocardial infarction. The timely and accurate detection of heart disease is of paramount importance in clinical practice. Early identification of individuals at risk enables proactive interventions, preventive measures, and personalized treatment strategies to mitigate the progression of the disease and reduce adverse outcomes. In recent years, the field of heart disease detection has witnessed notable advancements due to the integration of sophisticated technologies and computational approaches, including machine learning algorithms, data mining techniques, and predictive modeling frameworks that leverage vast amounts of clinical and physiological data to improve diagnostic accuracy and risk stratification. In this work, we propose to detect heart disease from ECG images using cutting-edge vision transformer models, namely Google-ViT, Microsoft-BEiT, and Swin-Tiny. To the best of our knowledge, this is the first effort focusing on the detection of heart disease from image-based ECG data using transformer models. To demonstrate the contribution of the proposed framework, the performance of the vision transformer models is compared with state-of-the-art studies. Experimental results show that the proposed framework exhibits remarkable classification results.
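The mechanism these models share — splitting an image into patches and relating them with self-attention — can be illustrated with a minimal, untrained NumPy sketch. This is not the pretrained Google-ViT, Microsoft-BEiT, or Swin-Tiny used in the paper; every weight and dimension below is an arbitrary assumption, shown only to make the patch-attention idea concrete.

```python
import numpy as np

rng = np.random.RandomState(0)
img = rng.random((32, 32))        # grayscale stand-in for an ECG image
P, D = 8, 16                      # patch size and embedding dimension

# Patchify: (32/8)^2 = 16 patches of 8x8 = 64 pixels each.
patches = img.reshape(4, P, 4, P).transpose(0, 2, 1, 3).reshape(16, P * P)

W_embed = rng.normal(scale=0.1, size=(P * P, D))
Wq, Wk, Wv = (rng.normal(scale=0.1, size=(D, D)) for _ in range(3))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

x = patches @ W_embed                      # patch token embeddings (16, D)
q, k, v = x @ Wq, x @ Wk, x @ Wv
attn_out = softmax(q @ k.T / np.sqrt(D)) @ v   # one self-attention head
pooled = attn_out.mean(axis=0)             # mean-pool tokens
logit = float(pooled @ rng.normal(size=D)) # untrained classification head
```

In the real models, many such layers are stacked, pretrained on large image corpora, and fine-tuned so that the final head scores disease classes.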

    A semiparametric bivariate probit model for joint modeling of outcomes in STEMI patients

    In this work we analyse the relationship between in-hospital mortality and a treatment-effectiveness outcome in patients affected by ST-elevation myocardial infarction. The main idea is to carry out a joint modeling of the two outcomes by applying a semiparametric bivariate probit model to data arising from a clinical registry called the STEMI Archive. A realistic quantification of the relationship between outcomes can be problematic for several reasons. First, latent factors associated with hospital organization can affect treatment efficacy and/or interact with the patient's condition at admission time; they can also directly influence the mortality outcome. Such factors are hard to measure, so the use of classical estimation methods will clearly result in inconsistent or biased parameter estimates. Second, covariate-outcome relationships can exhibit nonlinear patterns. Provided that proper statistical methods for model fitting in such a framework are available, it is possible to employ a simultaneous estimation approach to account for unobservable confounders. Such a framework can also provide flexible covariate structures and model the whole conditional distribution of the response.
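The bivariate probit's joint outcome probability can be written down directly: two latent Gaussian scores with residual correlation rho, each thresholded at zero, give P(y1 = 1, y2 = 1) = Phi_2(x'beta_1, x'beta_2; rho). The coefficients and rho below are made-up placeholders; the paper estimates them semiparametrically from the STEMI Archive, which is not reproduced here.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

beta_death = np.array([0.4, -0.8])       # hypothetical linear predictor (outcome 1)
beta_effective = np.array([-0.2, 0.6])   # hypothetical linear predictor (outcome 2)
rho = 0.3                                # residual correlation between outcomes
x = np.array([1.0, 0.5])                 # one patient's covariate vector

eta1, eta2 = x @ beta_death, x @ beta_effective
cov = np.array([[1.0, rho], [rho, 1.0]])

# Joint probability of both outcomes: bivariate normal CDF at the two indices.
p_joint = multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([eta1, eta2])
# Marginals follow the ordinary univariate probit.
p1, p2 = norm.cdf(eta1), norm.cdf(eta2)
```

A nonzero rho is exactly what captures the unobservable confounders the abstract describes: with rho = 0 the joint probability would factor into p1 * p2 and the two outcomes could be modeled separately.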