
    Prediction of Sudden Cardiac Death Using Ensemble Classifiers

    Sudden Cardiac Death (SCD) is a medical problem responsible for over 300,000 deaths per year in the United States and millions worldwide. SCD is defined as death occurring within one hour of the onset of acute symptoms, an unwitnessed death in the absence of pre-existing progressive circulatory failure or other causes of death, or death during attempted resuscitation. Sudden death due to cardiac causes is a leading cause of death among Congestive Heart Failure (CHF) patients. The use of Electronic Medical Record (EMR) systems has made a wealth of medical data available for research and analysis. Supervised machine learning methods have been successfully used for medical diagnosis, and ensemble classifiers are known to achieve better prediction accuracy than their constituent base classifiers. In an effort to understand the factors contributing to SCD, data on 2,521 patients were collected for the Sudden Cardiac Death in Heart Failure Trial (SCD-HeFT). The data included 96 features gathered over a period of 5 years. The goal of this dissertation was to develop a model that could accurately predict SCD based on the available features. The prediction model used the Cox proportional hazards model as a score and then used the ExtraTreesClassifier algorithm as a boosting mechanism to create the ensemble. We tested the system at prediction points of 180 days and 365 days. Our best results were at 180 days, with an accuracy of 0.9624, a specificity of 0.9915, and an F1 score of 0.9607.
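The two-stage idea this abstract describes — a survival-model risk score fed into an extremely randomized trees ensemble — can be sketched on synthetic data with scikit-learn. The linear `coef` below is a hypothetical stand-in for a fitted Cox model's linear predictor, not the SCD-HeFT model itself:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 8))
# hypothetical linear predictor standing in for a fitted Cox model's risk score
coef = np.array([0.8, -0.5, 0.3, 0.0, 0.0, 0.6, 0.0, -0.4])
risk = X @ coef
y = (risk + rng.normal(scale=0.5, size=n) > 0).astype(int)

# feed the risk score to the tree ensemble as an additional feature
X_aug = np.column_stack([X, risk])
X_tr, X_te, y_tr, y_te = train_test_split(X_aug, y, random_state=0)
clf = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
acc = accuracy_score(y_te, pred)
f1 = f1_score(y_te, pred)
```

In a real pipeline the risk score would come from a survival package (e.g. lifelines or scikit-survival) fitted on time-to-event data rather than from fixed coefficients.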

    Scar conducting channel wall thickness characterization to predict arrhythmogenicity during ventricular tachycardia ablation

    Bachelor's Final Project in Biomedical Engineering. Faculty of Medicine and Health Sciences. Universitat de Barcelona. Academic year: 2020-2021. Tutor: Paz Garre Anguera de Sojo. Obtaining cardiac images before surgical ablation of ventricular tachycardia is widely used to gather more and better information about the patient than can be obtained during the procedure itself. This is commonly done with cardiac magnetic resonance, since it allows the tissue to be studied and characterised, which is crucial to detect and quantify scarred tissue and the particular region that triggers the tachycardia. In this project, the arrhythmogenicity of different conducting channels from patients undergoing ventricular tachycardia ablation was studied along with their wall thickness, in order to assess a correlation using late gadolinium enhancement cardiac magnetic resonance imaging. In addition, the correlation between the left-ventricular wall thickness of the conducting channels and the outcome of catheter ablation performed from the endocardial region of the heart was also studied. This project emerges from a previous study at the Hospital Clínic de Barcelona that characterized several features of the main conducting channel triggering the ventricular tachycardia. The images and the information regarding the arrhythmogenic conducting channel of every patient were obtained from that previous research, using 26 patients for the main objective of this project and 10 of them for the study of the ablation outcome. The wall thickness measurements and the visualization of the conducting channels were performed using the ADAS 3D software.
Results showed no significant difference between the wall thickness of arrhythmogenic and non-arrhythmogenic conducting channels in the patients studied, though it is important to note that the p-value obtained was large, which may have been caused by the small number of patients included in the study. However, an interesting concentration of arrhythmogenic conducting channels was noticed in the infero-septal region of the heart, which would be worth studying further with more patients and, hence, more conducting channels. To conclude, it is important to highlight the role of technology and biomedical engineering in achieving better image acquisition and thereby improving therapeutic techniques for the patient; this project has contributed to the awareness and comprehension of the role of a biomedical engineer in a clinical environment.
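The wall-thickness comparison described above amounts to a two-sample significance test on a small cohort. A minimal SciPy sketch on simulated thickness values (all numbers hypothetical; real measurements would come from ADAS 3D) shows the kind of test involved and why a small sample limits power:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# hypothetical wall-thickness samples (mm), not real patient data
arrh = rng.normal(loc=5.0, scale=1.2, size=26)       # arrhythmogenic channels
non_arrh = rng.normal(loc=5.2, scale=1.2, size=40)   # non-arrhythmogenic

# non-parametric two-sample test; with n this small, power is low,
# so a large p-value cannot distinguish "no effect" from "too few patients"
u, p = stats.mannwhitneyu(arrh, non_arrh, alternative="two-sided")
```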

    A Pharmaceutical Paradigm for Cardiovascular Composite Risk Assessment Using Novel Radiogenomics Risk Predictors in Precision Explainable Artificial Intelligence Framework: Clinical Trial Tool

    Cardiovascular disease (CVD) is challenging to diagnose and treat since symptoms appear late during the progression of atherosclerosis. Conventional risk factors alone are not always sufficient to properly categorize at-risk patients, and clinical risk scores are inadequate in predicting cardiac events. Integrating genomic-based biomarkers (GBBM) found in plasma/serum samples with novel non-invasive radiomics-based biomarkers (RBBM) such as plaque area, plaque burden, and maximum plaque height can improve composite CVD risk prediction in the pharmaceutical paradigm. These biomarkers capture several pathways involved in the pathophysiology of atherosclerosis leading to CVD. This review proposes two hypotheses: (i) the composite biomarkers are strongly correlated and can be used to detect the severity of CVD/Stroke precisely, and (ii) an explainable artificial intelligence (XAI)-based composite CVD/Stroke risk model with survival analysis using deep learning (DL) can make predictions in a preventive, precision, and personalized (aiP3) framework benefiting the pharmaceutical paradigm. The PRISMA search technique resulted in 214 studies assessing composite biomarkers using radiogenomics for CVD/Stroke. The study presents an XAI model using AtheroEdge™ 4.0 to determine the risk of CVD/Stroke in the pharmaceutical framework using the radiogenomics biomarkers. Our observations suggest that composite CVD risk biomarkers using radiogenomics provide a new dimension to CVD/Stroke risk assessment. The proposed review suggests a unique, unbiased, and explainable model based on AtheroEdge™ 4.0 that can predict the composite risk of CVD/Stroke using radiogenomics in the pharmaceutical paradigm.
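Explainability of the kind the XAI framework above provides can be illustrated with a generic, model-agnostic technique such as permutation importance. This is a sketch on synthetic "biomarkers", not the AtheroEdge™ pipeline; the feature roles are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
# columns 0-1 play the role of informative biomarkers; columns 2-4 are noise
X = rng.normal(size=(n, 5))
y = (1.2 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
# shuffle each feature in turn and measure the drop in score
imp = permutation_importance(clf, X, y, n_repeats=5, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]  # most important first
```

Ranking features by their importance gives a per-marker explanation of the risk model's behavior, which is the core of what an XAI layer reports to clinicians.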

    Algorithms for automated diagnosis of cardiovascular diseases based on ECG data: A comprehensive systematic review

    The prevalence of cardiovascular diseases is increasing around the world. At the same time, technology is evolving, and these diseases can now be monitored with low-cost sensors anywhere, at any time. This subject is under active research, and different methods can automatically identify these diseases, helping patients and healthcare professionals with treatment. This paper presents a systematic review of disease identification, classification, and recognition with ECG sensors. The review focused on studies published between 2017 and 2022 in different scientific databases, including PubMed Central, Springer, Elsevier, Multidisciplinary Digital Publishing Institute (MDPI), IEEE Xplore, and Frontiers, resulting in the quantitative and qualitative analysis of 103 scientific papers. The study demonstrated that different datasets are available online with data related to various diseases. Several ML/DL-based models were identified in the research, with Convolutional Neural Networks and Support Vector Machines being the most applied algorithms. This review makes it possible to identify the techniques that can be used in a system that promotes patient autonomy.
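As a minimal illustration of ECG-window classification with one of the review's two most common algorithms, a Support Vector Machine can separate two synthetic beat morphologies. The waveforms, frequencies, and noise level below are arbitrary stand-ins for real labeled ECG segments:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)

def beat(freq, noise):
    # crude stand-in for a one-second beat window sampled at 100 Hz
    return np.sin(2 * np.pi * freq * t) + rng.normal(scale=noise, size=t.size)

# two synthetic beat classes, e.g. "normal" vs "arrhythmic" morphology
X = np.array([beat(3, 0.3) for _ in range(50)] + [beat(5, 0.3) for _ in range(50)])
y = np.array([0] * 50 + [1] * 50)

scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
```

Real systems surveyed in such reviews typically add beat detection, filtering, and feature extraction (or a CNN that learns features directly) before this classification step.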

    A design science framework for research in health analytics

    Data analytics provide the ability to systematically identify patterns and insights from a variety of data as organizations pursue improvements in their processes, products, and services. Analytics can be classified based on their ability to explore, explain, predict, and prescribe. When applied to the field of healthcare, analytics present a new frontier for business intelligence. In 2013 alone, the Centers for Medicare and Medicaid Services (CMS) reported that the national health expenditure was $2.9 trillion, representing 17.4% of the total United States GDP. The Patient Protection and Affordable Care Act of 2010 (ACA) requires all hospitals to implement electronic medical record (EMR) technologies by 2014 (Patient Protection and Affordable Care Act, 2010). Moreover, the ACA makes healthcare processes and outcomes more transparent by making related data readily available for research. Enterprising organizations are employing analytics and analytical techniques to find patterns in healthcare data (I. R. Bardhan & Thouin, 2013; Hansen, Miron-Shatz, Lau, & Paton, 2014). The goal is to assess the cost and quality of care and identify opportunities for improvement for organizations as well as the healthcare system as a whole. Yet, there remains a need for research to systematically understand, explain, and predict the sources and impacts of the widely observed variance in the cost and quality of care. This is a driving motivation for research in healthcare. This dissertation conducts a design-theoretic examination of the application of advanced data analytics in healthcare. Heart failure is the number one cause of death and the biggest contributor to healthcare costs in the United States. An exploratory examination of the application of predictive analytics is conducted in order to understand the cost and quality of care provided to heart failure patients.
The specific research question addressed is: How can we improve and expand our understanding of the variance in the cost of care and the quality of care for heart failure? Using state-level data from the State Health Plan of North Carolina, a standard readmission model was assessed as a baseline measure for prediction, and advanced analytics were compared to this baseline. This dissertation demonstrates that advanced analytics can improve readmission predictions as well as expand understanding of the profile of a patient readmitted for heart failure. Implications are assessed for academics and practitioners.
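The baseline-versus-advanced comparison can be sketched generically: a logistic-regression readmission baseline against a gradient-boosted model on synthetic data containing an interaction effect that the linear baseline cannot capture. The data-generating process is hypothetical, not the North Carolina dataset:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 6))
# hypothetical readmission risk with an interaction a linear model misses
logit = X[:, 0] + X[:, 1] * X[:, 2]
y = (logit + rng.normal(scale=0.8, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
base = LogisticRegression().fit(X_tr, y_tr)          # standard baseline model
adv = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc_base = roc_auc_score(y_te, base.predict_proba(X_te)[:, 1])
auc_adv = roc_auc_score(y_te, adv.predict_proba(X_te)[:, 1])
```

The tree ensemble learns the interaction and improves discrimination, mirroring the dissertation's finding that advanced analytics can outperform a standard readmission model.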

    Novel Approaches to Pervasive and Remote Sensing in Cardiovascular Disease Assessment

    Cardiovascular diseases (CVDs) are the leading cause of death worldwide, responsible for 45% of all deaths. Nevertheless, their mortality has been decreasing over the last decade due to better prevention, diagnosis, and treatment resources. An important medical instrument for these processes is the Electrocardiogram (ECG). The ECG is a versatile technique used worldwide for its ease of use, low cost, and accessibility, having evolved from devices that filled a room to small patches or wrist-worn devices. This evolution has allowed for more pervasive and near-continuous recordings. The analysis of an ECG allows the functioning of other physiological systems of the body to be studied. One such system is the Autonomic Nervous System (ANS), responsible for controlling key bodily functions. The ANS can be studied by analyzing the characteristic inter-beat variations, known as Heart Rate Variability (HRV). Leveraging this relation, a pilot study was developed in which HRV was used to quantify the contribution of the ANS in modulating the cardioprotection offered by an experimental medical procedure called Remote Ischemic Conditioning (RIC), offering a more objective perspective. To record an ECG, electrodes convert the ion-propagated action potential into the electron current needed for acquisition. Electrodes are produced from different materials, including metals, carbon-based materials, or polymers. They can be divided into wet (if an electrolyte gel is used) or dry (if no added electrolyte is used), and can be positioned inside the body (in-the-person), attached to the skin (on-the-body), or embedded in daily-life objects (off-the-person), with the latter allowing for more pervasive recordings. To this effect, a novel mobile acquisition device for recording ECG rhythm strips was developed, in which polymer-based embedded electrodes record ECG signals similar to those of a medical-grade device.
One drawback of off-the-person solutions is the increased noise, mainly caused by intermittent contact with the recording surfaces. A new signal quality metric was developed based on delayed phase mapping, a technique that maps time series to a two-dimensional space, which is then used to classify a segment as good or noisy. Two different approaches were developed, one using a popular image descriptor, the Hu image moments, and the other using a Convolutional Neural Network, both with promising results for their use as signal quality index classifiers.
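Delayed phase mapping as described above is a time-delay embedding. A minimal numpy sketch (the lag and signals are chosen arbitrarily) shows how noise spreads the 2-D point cloud, which is the property a downstream quality classifier can score:

```python
import numpy as np

def delay_embed(x, lag):
    """Map a 1-D series to 2-D points (x[t], x[t + lag])."""
    return np.column_stack([x[:-lag], x[lag:]])

t = np.linspace(0, 4 * np.pi, 400)
clean = np.sin(t)                                    # idealized rhythmic signal
noisy = clean + np.random.default_rng(0).normal(scale=0.8, size=t.size)

# a periodic signal traces a tight loop in the delay plane; contact noise
# scatters the points, so the spread separates good from noisy segments
spread_clean = delay_embed(clean, 25).std(axis=0).sum()
spread_noisy = delay_embed(noisy, 25).std(axis=0).sum()
```

In the thesis, the 2-D map is rendered as an image and fed to Hu-moment or CNN classifiers; the spread statistic here is just the simplest possible proxy for that step.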

    Computational Methods for Segmentation of Multi-Modal Multi-Dimensional Cardiac Images

    Segmentation of the heart structures helps compute the cardiac contractile function, quantified via the systolic and diastolic volumes, ejection fraction, and myocardial mass, representing a reliable diagnostic value. Similarly, quantification of the myocardial mechanics throughout the cardiac cycle and analysis of the activation patterns in the heart via electrocardiography (ECG) signals serve as good cardiac diagnosis indicators. Furthermore, high-quality anatomical models of the heart can be used in the planning and guidance of minimally invasive interventions under image guidance. The most crucial step for the above-mentioned applications is to segment the ventricles and myocardium from the acquired cardiac image data. Although manual delineation of the heart structures is deemed the gold-standard approach, it requires significant time and effort and is highly susceptible to inter- and intra-observer variability. These limitations suggest a need for fast, robust, and accurate semi- or fully-automatic segmentation algorithms. However, the complex motion and anatomy of the heart, indistinct borders due to blood flow, the presence of trabeculations, intensity inhomogeneity, and various other imaging artifacts make the segmentation task challenging. In this work, we present and evaluate segmentation algorithms for multi-modal, multi-dimensional cardiac image datasets. Firstly, we segment the left ventricle (LV) blood-pool from a tri-plane 2D+time trans-esophageal (TEE) ultrasound acquisition using local phase-based filtering and a graph-cut technique, propagate the segmentation throughout the cardiac cycle using non-rigid registration-based motion extraction, and reconstruct the 3D LV geometry. Secondly, we segment the LV blood-pool and myocardium from an open-source 4D cardiac cine Magnetic Resonance Imaging (MRI) dataset by incorporating an average-atlas-based shape constraint into the graph-cut framework with iterative segmentation refinement.
The developed fast and robust framework is further extended to perform right ventricle (RV) blood-pool segmentation on a different open-source 4D cardiac cine MRI dataset. Next, we employ a convolutional neural network-based multi-task learning framework to simultaneously segment the myocardium and regress its area, and show that segmentation-based computation of the myocardial area is significantly better than the area regressed directly from the network, while also being more interpretable. Finally, we impose a weak shape constraint via a multi-task learning framework in a fully convolutional network and show improved segmentation performance for the LV, RV, and myocardium across healthy and pathological cases, as well as in the challenging apical and basal slices, in two open-source 4D cardiac cine MRI datasets. We demonstrate the accuracy and robustness of the proposed segmentation methods by comparing the obtained results against the provided gold-standard manual segmentations, as well as against other competing segmentation methods.
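One concrete piece of the pipeline above — computing myocardial area from a binary segmentation mask, the quantity the multi-task comparison is about — can be sketched with numpy. The ring geometry and pixel spacing are made up for illustration:

```python
import numpy as np

def myocardial_area(mask, pixel_area_mm2=1.0):
    """Area of a binary myocardium mask, given the per-pixel area in mm^2."""
    return float(mask.sum()) * pixel_area_mm2

# toy short-axis slice: ring-shaped myocardium between radii 10 and 16 px,
# mimicking the annular myocardium around the LV blood-pool
h = w = 64
yy, xx = np.mgrid[:h, :w]
r = np.hypot(yy - 32, xx - 32)
mask = ((r >= 10) & (r < 16)).astype(np.uint8)

area = myocardial_area(mask, pixel_area_mm2=1.5 ** 2)  # 1.5 mm pixel spacing
```

Because the area is a simple deterministic function of the predicted mask, it inherits the mask's interpretability, which is the point the abstract makes against direct regression.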

    Contribuciones de las técnicas machine learning a la cardiología. Predicción de reestenosis tras implante de stent coronario

    Background: Few current topics rival the possibility that today's technology could develop the same capabilities as a human being, even in medicine. This capacity of machines or computer systems to simulate human intelligence processes is what we know today as artificial intelligence. One of the fields of artificial intelligence with the greatest application in medicine today is prediction, recommendation, and diagnosis, where machine learning techniques are applied. There is likewise growing interest in precision medicine, where machine learning techniques can offer individualized medical care to each patient. Percutaneous coronary intervention (PCI) with stenting has become standard practice for revascularizing coronary vessels with significant obstructive atherosclerotic disease. PCI is also the gold-standard treatment for patients with acute myocardial infarction, reducing rates of death and recurrent ischemia compared with medical treatment. The long-term success of the procedure is limited by in-stent restenosis, a pathological process that causes recurrent arterial narrowing at the PCI site. Identifying which patients will develop restenosis is an important clinical challenge, since restenosis can manifest as a new acute myocardial infarction or force a new revascularization of the affected vessel, and recurrent restenosis represents a therapeutic challenge. Objectives: After reviewing artificial intelligence techniques applied to medicine and, in greater depth, machine learning techniques applied to cardiology, the main objective of this doctoral thesis was to develop a machine learning model to predict the occurrence of restenosis in patients with acute myocardial infarction undergoing PCI with stent implantation.
Secondary objectives were to compare the machine learning model with the classical restenosis risk scores used to date, and to develop software that brings this contribution into daily clinical practice in a simple way. To develop an easily applicable model, we made our predictions without any variables beyond those obtained in routine practice. Material: The dataset, obtained from the GRACIA-3 trial, consisted of 263 patients with demographic, clinical, and angiographic characteristics; 23 of them presented restenosis 12 months after stent implantation. All development was carried out in Python, using cloud computing on AWS (Amazon Web Services). Methods: A methodology suited to small, imbalanced datasets was used; key elements were the nested cross-validation scheme and the use of precision-recall (PR) curves, in addition to ROC curves, for interpreting the models. The algorithms most common in the literature were trained in order to choose the best-performing one. Results: The best model was an extremely randomized trees classifier, which, with an area under the ROC curve of 0.77, significantly outperformed the three classical clinical scores: PRESTO-1 (0.58), PRESTO-2 (0.58), and TLR (0.62). The precision-recall curves offered a more accurate picture of the performance of the extremely randomized trees model, showing an efficient algorithm (0.96) for non-restenosis, with high precision and high recall. At a threshold considered optimal, out of 1,000 patients undergoing stent implantation, our machine learning model would correctly predict 181 (18%) more cases than the best classical risk score (TLR).
The most important variables, ranked by their contribution to the predictions, were diabetes, coronary disease in two or more vessels, post-PCI TIMI flow, abnormal platelets, post-PCI thrombus, and abnormal cholesterol. Finally, a calculator was developed to bring the model into clinical practice. The calculator estimates each patient's individual risk and places it in a risk zone, helping the physician decide on the appropriate follow-up. Conclusions: Applied immediately after stent implantation, a machine learning model distinguishes patients who will develop restenosis from those who will not better than the current classical discriminators.
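The evaluation scheme this thesis describes — nested cross-validation of an extremely randomized trees classifier scored with a precision-recall-based metric — can be sketched with scikit-learn. The data are synthetic and imbalanced, and the hyperparameter grid is purely illustrative:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 10))
# imbalanced outcome (~20% positives), echoing the 23/263 restenosis cases
y = (X[:, 0] + rng.normal(scale=1.0, size=n) > 1.2).astype(int)

inner = StratifiedKFold(5, shuffle=True, random_state=0)  # model selection
outer = StratifiedKFold(5, shuffle=True, random_state=1)  # performance estimate
search = GridSearchCV(
    ExtraTreesClassifier(random_state=0),
    {"n_estimators": [50, 100]},      # illustrative grid
    scoring="average_precision",      # PR-curve summary, suited to imbalance
    cv=inner,
)
nested = cross_val_score(search, X, y, cv=outer, scoring="average_precision")
```

Keeping hyperparameter selection inside the inner folds is what makes the outer-fold scores an unbiased estimate on a small dataset, which is the point of nesting.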

    Investigating the Process of Developing a KDD Model for the Classification of Cases with Cardiovascular Disease Based on a Canadian Database

    Medicine and health are information-intensive domains, as the volume of data they generate increases constantly. In order to make full use of these data, the technique of Knowledge Discovery in Databases (KDD) has been developed as a comprehensive pathway to discover valid and unsuspected patterns and trends that are both understandable and useful to data analysts. The present study aimed to investigate the entire KDD process of developing a classification model for cardiovascular disease (CVD) from a Canadian dataset for the first time. The data source was the Canadian Heart Health Database, which contains 265 easily collected variables and 23,129 instances from ten Canadian provinces. Many practical issues involved in different steps of the integrated process were addressed, and possible solutions were suggested based on the experimental results. Five specific learning schemes representing five distinct KDD approaches were employed, as they had never been compared with one another. In addition, two improvement approaches, cost-sensitive learning and ensemble learning, were also examined. The performance of the developed models was measured in many respects. The dataset was prepared through data cleaning and missing-value imputation. Three pairs of experiments demonstrated that dataset balancing and outlier removal exerted a positive influence on the classifier, whereas variable normalization was not helpful. Three combinations of subset-generation method and evaluation function were tested in the variable subset selection phase; the combination of Best-First search and Correlation-based Feature Selection showed comparable goodness and was retained for its other benefits. Among the five learning schemes investigated, the C4.5 decision tree achieved the best performance on the classification of CVD, followed by the Multilayer Feed-forward Network, K-Nearest Neighbor, Logistic Regression, and Naïve Bayes.
Cost-sensitive learning, exemplified by the MetaCost algorithm, failed to outperform the single C4.5 decision tree when varying the cost matrix from 5:1 to 1:7. In contrast, the models developed from ensemble modeling, especially the AdaBoost M1 algorithm, outperformed the other models. Although the model with the best performance might be suitable for CVD screening in the general Canadian population, it is not ready for use in practice. I propose some criteria to improve the further evaluation of the model. Finally, I describe some of the limitations of the study and propose potential solutions to address them throughout the KDD process. Such possibilities should be explored in further research.
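The comparison above — a single decision tree versus a cost-sensitive variant versus AdaBoost — can be sketched with scikit-learn on synthetic data. Note the substitutions: scikit-learn's CART stands in for C4.5, `class_weight` stands in for MetaCost's cost matrix, and the 1:5 cost ratio is arbitrary:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.7, size=n) > 0).astype(int)

# single decision tree (CART, standing in for C4.5)
tree = DecisionTreeClassifier(random_state=0)
# cost-sensitive variant: class weights stand in for a cost matrix
cost_tree = DecisionTreeClassifier(class_weight={0: 1, 1: 5}, random_state=0)
# boosted stumps, in the spirit of AdaBoost.M1 over weak learners
boosted = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                             n_estimators=100, random_state=0)

acc_tree = cross_val_score(tree, X, y, cv=5).mean()
acc_cost = cross_val_score(cost_tree, X, y, cv=5).mean()
acc_boost = cross_val_score(boosted, X, y, cv=5).mean()
```

As in the study, accuracy alone is a crude summary; a cost-sensitive comparison would also weigh false negatives and false positives according to their clinical costs.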