
    Computer aided diagnosis for cardiovascular diseases based on ECG signals : a survey

    The interpretation of Electrocardiography (ECG) signals is difficult, because even subtle changes in the waveform can indicate a serious heart disease. Furthermore, these waveform changes might not be present all the time. As a consequence, it takes years of training for a medical practitioner to become an expert in ECG-based cardiovascular disease diagnosis. That training is a major investment in a specific skill. Even with expert ability, signal interpretation takes time. In addition, human interpretation of ECG signals causes interoperator and intraoperator variability. ECG-based Computer-Aided Diagnosis (CAD) holds the promise of improving diagnostic accuracy and reducing cost: the same ECG signal will result in the same diagnosis support regardless of time and place. This paper introduces both the techniques used to realize the CAD functionality and the methods used to assess the established functionality. This survey aims to instill trust in CAD of cardiovascular diseases using ECG signals by introducing both a conceptual overview of the system and the necessary assessment methods.

    Early Detection and Continuous Monitoring of Atrial Fibrillation from ECG Signals with a Novel Beat-Wise Severity Ranking Approach

    Irregularities in heartbeats and cardiac functioning outside of clinical settings are often not available to clinicians, and are thus ignored. Monitoring these in a high-risk population could assist in the early detection and continuous monitoring of Atrial Fibrillation (AF). Wearable devices such as smart watches and wristbands, which can collect Electrocardiograph (ECG) signals, can monitor and warn users of unusual signs in a timely manner. Thus, there is a need to develop a real-time monitoring system for AF from ECG. We propose an algorithm for a simple beat-by-beat ECG signal multilevel classifier for AF detection and a quantitative severity scale (between 0 and 1) for user feedback. For this study, we used ECG recordings from the MIT-BIH Atrial Fibrillation and MIT-BIH Long-Term Atrial Fibrillation databases. All ECG signals are preprocessed with filters to reduce noise. The preprocessed signal is analyzed to extract 39 features, including 20 of amplitude type and 19 of interval type. The feature space for all ECG recordings is used for classification. Training and testing data include all classes of data, i.e., beats that identify the various episodes for severity. The feature space from the test data is fed to the classifier, which determines the class label based on the trained model. A class label is determined based on the number of occurrences of AF and other arrhythmia episodes such as AB (Atrial Bigeminy), SBR (Sinus Bradycardia), and SVTA (Supraventricular Tachyarrhythmia). An accuracy of 96.7764% is attained with the Random Forest algorithm. Furthermore, precision and recall are determined based on correct and incorrect classifications for each class; the average precision and recall of the Random Forest classifier are 0.968 and 0.968, respectively. This work provides a novel approach that enhances existing AF detection methods by identifying the heartbeat class and calculating a quantitative severity metric that might help in the early detection and continuous monitoring of AF.
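    The beat-wise severity scale described above can be sketched as a small function. The 0-to-1 mapping below (the fraction of beats classified as AF or another arrhythmia episode) is an assumption for illustration, since the abstract does not define the scale precisely, and the Random Forest classifier over the 39 amplitude/interval features is omitted:

```python
def severity(beat_labels):
    """Map per-beat class labels to a 0-1 severity score.

    Assumed label convention (not specified in the abstract):
    0 = normal beat, 1 = AF, 2 = other arrhythmia (AB/SBR/SVTA).
    The score is the fraction of beats flagged as any arrhythmia.
    """
    labels = list(beat_labels)
    if not labels:
        return 0.0  # no beats observed, report zero severity
    return sum(1 for label in labels if label != 0) / len(labels)

# Four beats, two of which are arrhythmic, give severity 0.5.
print(severity([0, 0, 1, 2]))  # → 0.5
```

    In a deployed system, the labels would come from the trained beat-by-beat classifier rather than being supplied by hand.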

    A Powerful Paradigm for Cardiovascular Risk Stratification Using Multiclass, Multi-Label, and Ensemble-Based Machine Learning Paradigms: A Narrative Review

    Background and Motivation: Cardiovascular disease (CVD) causes the highest mortality globally. With escalating healthcare costs, early non-invasive CVD risk assessment is vital. Conventional methods have shown poor performance compared to more recent and fast-evolving Artificial Intelligence (AI) methods. The proposed study reviews the three most recent paradigms for CVD risk assessment, namely multiclass, multi-label, and ensemble-based methods, in (i) office-based and (ii) stress-test laboratories. Methods: A total of 265 CVD-based studies were selected using the preferred reporting items for systematic reviews and meta-analyses (PRISMA) model. Due to their popularity and recent development, the study analyzed the above three paradigms using machine learning (ML) frameworks. We comprehensively review these three methods using attributes such as architecture, applications, pros and cons, scientific validation, clinical evaluation, and AI risk-of-bias (RoB) in the CVD framework. These ML techniques were then extended under mobile and cloud-based infrastructure. Findings: The most popular biomarkers used were office-based and laboratory-based measures, image-based phenotypes, and medication usage. Surrogate carotid scanning for coronary artery risk prediction has shown promising results. Ground truth (GT) selection for AI-based training, along with scientific and clinical validation, is very important for CVD stratification to avoid RoB. It was observed that the most popular classification paradigm is multiclass, followed by ensemble and multi-label methods. The use of deep learning techniques in CVD risk stratification is at a very early stage of development. Mobile and cloud-based AI technologies are more likely to be the future. Conclusions: AI-based methods for CVD risk assessment are most promising and successful. The choice of GT is most vital in AI-based models to prevent RoB. The amalgamation of image-based strategies with conventional risk factors provides the highest stability when using the three CVD paradigms in non-cloud and cloud-based frameworks.

    Documenting and predicting topic changes in Computers in Biology and Medicine: A bibliometric keyword analysis from 1990 to 2017

    The Computers in Biology and Medicine (CBM) journal promotes the use of computing machinery in the fields of bioscience and medicine. Since the first volume in 1970, the importance of computers in these fields has grown dramatically; this is evident in the diversification of topics and an increase in the publication rate. In this study, we quantify both the change and the diversification of topics covered in CBM. This is done by analysing the author-supplied keywords, since they were electronically captured in 1990. The analysis starts by selecting 40 keywords, related to Medical (M) (7), Data (D) (10), Feature (F) (17), and Artificial Intelligence (AI) (6) methods. Automated keyword clustering shows the statistical connection between the selected keywords. We found that the three most popular topics in CBM are Support Vector Machine (SVM), Electroencephalography (EEG), and image processing. In a separate analysis step, we bagged the selected keywords into sequential one-year time slices and calculated the normalized appearance. The results were visualised with graphs that indicate the CBM topic changes. These graphs show that there was a transition from Artificial Neural Network (ANN) to SVM. In 2006, SVM replaced ANN as the most important AI algorithm. Our investigation helps the editorial board to manage and embrace topic change. Furthermore, our analysis is interesting for the general reader, as the results can help them to adjust their research directions.
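    The time-slice step described above can be sketched in a few lines. The record layout and the exact normalization (keyword mentions per slice divided by papers in that slice) are assumptions, since the abstract does not spell them out:

```python
from collections import Counter

# Hypothetical records: (publication year, author-supplied keywords).
records = [
    (1995, ["artificial neural network", "eeg"]),
    (1995, ["image processing"]),
    (2006, ["support vector machine", "eeg"]),
    (2006, ["support vector machine"]),
    (2006, ["artificial neural network"]),
]

def normalized_appearance(records, keyword):
    """For each one-year slice, count the papers mentioning the keyword
    and divide by the number of papers in that slice."""
    totals, hits = Counter(), Counter()
    for year, keywords in records:
        totals[year] += 1
        if keyword in keywords:
            hits[year] += 1
    return {year: hits[year] / totals[year] for year in sorted(totals)}

print(normalized_appearance(records, "support vector machine"))
# → {1995: 0.0, 2006: 0.6666666666666666}
```

    Plotting these per-keyword series over all slices yields the trend graphs the abstract refers to, e.g. the crossover from ANN to SVM.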

    Human-based approaches to pharmacology and cardiology: an interdisciplinary and intersectorial workshop.

    Both biomedical research and clinical practice rely on complex datasets for the physiological and genetic characterization of human hearts in health and disease. Given the complexity and variety of approaches and recordings, there is now growing recognition of the need to embed computational methods in cardiovascular medicine and science for analysis, integration and prediction. This paper describes a Workshop on Computational Cardiovascular Science that created an international, interdisciplinary and inter-sectorial forum to define the next steps for a human-based approach to disease supported by computational methodologies. The main ideas highlighted were (i) a shift towards human-based methodologies, spurred by advances in new in silico, in vivo, in vitro, and ex vivo techniques and the increasing acknowledgement of the limitations of animal models. (ii) Computational approaches complement, expand, bridge, and integrate in vitro, in vivo, and ex vivo experimental and clinical data and methods, and as such they are an integral part of human-based methodologies in pharmacology and medicine. (iii) The effective implementation of multi- and interdisciplinary approaches, teams, and training combining and integrating computational methods with experimental and clinical approaches across academia, industry, and healthcare settings is a priority. (iv) The human-based cross-disciplinary approach requires experts in specific methodologies and domains, who also have the capacity to communicate and collaborate across disciplines and cross-sector environments. (v) This new translational domain for human-based cardiology and pharmacology requires new partnerships supported financially and institutionally across sectors. Institutional, organizational, and social barriers must be identified, understood and overcome in each specific setting

    Improving the safety of atrial fibrillation monitoring systems through human verification

    In this paper we propose a hybrid decision-making process for medical diagnosis. The hypothesis tested is that a deep learning system can provide real-time monitoring of Atrial Fibrillation (AF), a prevalent heart arrhythmia, and a human cardiologist will then verify the results and reach a diagnosis. The verification step adds the necessary checks and balances to increase the safety of the computer-based diagnostic process. In order to test hybrid decision-making, we created a prototype AF monitoring service. The service is based on Heart Rate (HR) sensors for signal acquisition as well as Internet of Things (IoT) technology for data communication and storage. These technologies enable the transfer of HR data from the patient to a central cloud server. A deep learning system is used to analyze the data, which is then presented to a cardiologist when a dangerous condition is detected. This human specialist then works to verify the deep learning results based on the HR data and additional knowledge obtained through patient records or by personal interaction with the patient. A prerequisite for safety in any computer expert system is the clarity of purpose for the decision-making process. Health-care providers are considered customers who register patients with the AF monitoring service. The service delivers real-time diagnostic support by providing timely alarm messages and HR analysis. The safety-critical decision then lies with the human practitioner.
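    The routing step of the hybrid process above can be sketched as follows. The threshold, field names, and queue structure are illustrative assumptions, not details of the described service; the point is only that the model flags cases while the final, safety-critical diagnosis stays with the cardiologist:

```python
from dataclasses import dataclass, field
from typing import List, Dict

AF_ALERT_THRESHOLD = 0.8  # hypothetical operating point for alarms

@dataclass
class ReviewQueue:
    """Holds model-flagged cases pending human verification."""
    pending: List[Dict] = field(default_factory=list)

    def route(self, patient_id: str, af_probability: float) -> None:
        # Queue high-probability AF detections for cardiologist
        # review; low-probability segments raise no alarm.
        if af_probability >= AF_ALERT_THRESHOLD:
            self.pending.append(
                {"patient": patient_id, "p_af": af_probability}
            )

queue = ReviewQueue()
queue.route("patient-001", 0.93)  # flagged: queued for verification
queue.route("patient-002", 0.10)  # below threshold: not queued
print(len(queue.pending))  # → 1
```

    In the prototype service, the probability would come from the deep learning analysis of the streamed HR data, and the queue would feed the cardiologist's alarm interface.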