
    Deep Learning in Cardiology

    The medical field is generating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or at creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction and intervention. Deep learning is a representation learning method that consists of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology, which also apply to medicine in general, and propose certain directions as the most viable for clinical use. (27 pages, 2 figures, 10 tables)
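    As a purely illustrative aside (not taken from the review), the "layers that transform the data non-linearly" can be sketched as a stack of affine maps each followed by a non-linearity; the input size and the two diagnostic classes below are hypothetical.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 12))                 # hypothetical vector of clinical measurements

# Three stacked layers: each applies an affine map followed by a non-linearity,
# so deeper layers encode increasingly abstract combinations of the inputs.
W1, b1 = rng.normal(size=(12, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 16)), np.zeros(16)
W3, b3 = rng.normal(size=(16, 2)), np.zeros(2)

h1 = relu(x @ W1 + b1)                       # first learned representation
h2 = relu(h1 @ W2 + b2)                      # higher-level representation
logits = h2 @ W3 + b3                        # scores for two hypothetical diagnostic classes
```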

    Dynamics of Snoring Sounds and Its Connection with Obstructive Sleep Apnea

    Snoring is extremely common in the general population and, when irregular, may indicate the presence of obstructive sleep apnea. We analyze the overnight sequence of wave packets --- the snore sound --- recorded during full polysomnography in patients referred to the sleep laboratory due to suspected obstructive sleep apnea. We hypothesize that irregular snoring, with durations in the range between 10 and 100 seconds, correlates with obstructive respiratory events. We find that the number of irregular snores --- easily accessible, and quantified by what we call the snore time interval index (STII) --- is in good agreement with the well-known apnea-hypopnea index, which expresses the severity of obstructive sleep apnea and is extracted only from polysomnography. In addition, Hurst analysis of the snore sound itself, which quantifies the fluctuations in the signal as a function of time interval, is used to build a classifier that is able to distinguish between patients with no or mild apnea and patients with moderate or severe apnea.
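    The Hurst analysis mentioned above measures how a signal's fluctuations grow with the time interval considered. Below is a minimal rescaled-range (R/S) estimator of the Hurst exponent as a generic sketch; it is not the authors' implementation, and the window sizes are assumptions.

```python
import numpy as np

def hurst_rs(signal, min_window=16, n_scales=20):
    """Estimate the Hurst exponent of a 1-D signal with rescaled-range (R/S) analysis."""
    x = np.asarray(signal, dtype=float)
    n = len(x)
    windows = np.unique(np.logspace(np.log10(min_window), np.log10(n // 4), n_scales).astype(int))
    rs = []
    for w in windows:
        ratios = []
        for start in range(0, n - w + 1, w):
            seg = x[start:start + w]
            z = np.cumsum(seg - seg.mean())          # cumulative deviation from the segment mean
            r = z.max() - z.min()                    # range of the cumulative deviation
            s = seg.std()                            # segment standard deviation
            if s > 0:
                ratios.append(r / s)
        rs.append(np.mean(ratios))
    # The Hurst exponent is the slope of log(R/S) against log(window size)
    slope, _ = np.polyfit(np.log(windows), np.log(rs), 1)
    return slope
```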

    A Comprehensive Survey on Heart Sound Analysis in the Deep Learning Era

    Heart sound auscultation has been demonstrated to be beneficial in clinical practice for the early screening of cardiovascular diseases. Because auscultation requires well-trained professionals, automatic auscultation, benefiting from signal processing and machine learning, can support auxiliary diagnosis and reduce the burden of training professional clinicians. Nevertheless, classic machine learning is limited in how much its performance can improve in the era of big data. Deep learning has achieved better performance than classic machine learning in many research fields, as it employs more complex model architectures with a stronger capability of extracting effective representations. Deep learning has been successfully applied to heart sound analysis in recent years. As most review works on heart sound analysis appeared before 2017, the present survey is the first comprehensive overview summarising papers on heart sound analysis with deep learning over the past six years (2017--2022). We introduce both classic machine learning and deep learning for comparison, and further offer insights into the advances and future research directions in deep learning for heart sound analysis.

    Artificial intelligence and automation in valvular heart diseases

    Artificial intelligence (AI) is gradually changing every aspect of social life, and healthcare is no exception. Clinical procedures that previously could only be handled by human experts can now be carried out by machines in a more accurate and efficient way. The coming era of big data and the advent of supercomputers provide great opportunities for the development of AI technology to enhance diagnosis and clinical decision-making. This review provides an introduction to AI and highlights its applications in the clinical workflow of diagnosing and treating valvular heart diseases (VHDs). More specifically, the review first introduces some key concepts and subareas of AI. Secondly, it discusses the application of AI in heart sound auscultation and medical image analysis to assist in diagnosing VHDs. Thirdly, it describes the use of AI algorithms to identify risk factors and predict mortality of cardiac surgery. The review also describes state-of-the-art autonomous surgical robots and their roles in cardiac surgery and intervention.

    Diagnostic accuracy of machine learning models to identify congenital heart disease: A meta-analysis

    Background: Given the dearth of trained care providers to diagnose congenital heart disease (CHD) and a surge in machine learning (ML) models, this review aims to estimate the diagnostic accuracy of such models for detecting CHD. Methods: A comprehensive literature search of the PubMed, CINAHL, Wiley Cochrane Library, and Web of Science databases was performed. Studies that reported the diagnostic ability of ML for the detection of CHD compared to a reference standard were included. Risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. The sensitivity and specificity results from the studies were used to generate a hierarchical summary ROC (HSROC) curve. Results: We included 16 studies (1217 participants) that used ML algorithms to diagnose CHD. Neural networks were used in seven studies, with an overall sensitivity of 90.9% (95% CI 85.2-94.5%) and specificity of 92.7% (95% CI 86.4-96.2%). Other ML models included ensemble methods, deep learning and clustering techniques, but there were not enough studies of these for a meta-analysis. The majority of studies (n=11, 69%) had a high risk of patient selection bias, with unclear risk of bias for the index test (n=9, 56%) and for flow and timing (n=12, 75%), while a low risk of bias was reported for the reference standard (n=10, 62%). Conclusion: ML models such as neural networks have the potential to diagnose CHD accurately without the need for trained personnel. The heterogeneity of the diagnostic modalities used to train these models and the heterogeneity of the CHD diagnoses included across studies are major limitations.
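    For context, the sensitivity and specificity pooled by an HSROC model are computed from each study's 2x2 table. The sketch below shows only that per-study step with made-up counts; fitting the hierarchical model itself is normally done with specialised meta-analysis packages and is not reproduced here.

```python
import numpy as np

# Hypothetical per-study counts: true positives, false negatives, false positives, true negatives
studies = np.array([
    [45,  5,  8,  92],
    [30,  4,  3,  63],
    [88, 10, 12, 140],
])

tp, fn, fp, tn = studies.T
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

# Hierarchical SROC models work with these quantities on the logit scale
logit = lambda p: np.log(p / (1 - p))
for i, (se, sp) in enumerate(zip(sensitivity, specificity), start=1):
    print(f"study {i}: sensitivity={se:.3f}, specificity={sp:.3f}, "
          f"logit(se)={logit(se):.2f}, logit(sp)={logit(sp):.2f}")
```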

    Automatic analysis and classification of cardiac acoustic signals for long term monitoring

    Objective: Cardiovascular diseases are the leading cause of death worldwide, resulting in over 17.9 million deaths each year. Most of these diseases are preventable and treatable, but their progression and outcomes are significantly more positive with early-stage diagnosis and proper disease management. Among the approaches available to assist with early-stage diagnosis and management of cardiac conditions, automatic analysis of auscultatory recordings is one of the most promising, since it is particularly suitable for ambulatory/wearable monitoring. Proper investigation of abnormalities present in cardiac acoustic signals can therefore provide vital clinical information to assist long-term monitoring. Cardiac acoustic signals, however, are very susceptible to noise and artifacts, and their characteristics vary greatly with the recording conditions, which makes the analysis challenging. Additionally, there are challenges in the steps used for automatic analysis and classification of cardiac acoustic signals; broadly, these steps are the segmentation, feature extraction and subsequent classification of recorded signals using selected features. This thesis presents approaches using novel features with the aim of assisting the automatic early-stage detection of cardiovascular diseases with improved performance, using cardiac acoustic signals collected in real-world conditions.
    Methods: Cardiac auscultatory recordings were studied to identify potential features to help in the classification of recordings from subjects with and without cardiac diseases. The diseases considered in this study for the identification of symptoms and characteristics are valvular heart diseases due to stenosis and regurgitation, atrial fibrillation, and splitting of the fundamental heart sounds leading to additional lub/dub sounds in the systole or diastole interval of a cardiac cycle. The localisation of cardiac sounds of interest was performed using adaptive wavelet-based filtering in combination with the Shannon energy envelope and prior information about the fundamental heart sounds; this is a prerequisite step for the feature extraction and subsequent classification of recordings, leading to a more precise diagnosis. Localised segments of S1 and S2 sounds, and artifacts, were used to extract a set of perceptual and statistical features using the wavelet transform, homomorphic filtering, the Hilbert transform and mel-scale filtering, which were then used to train an ensemble classifier to identify S1 and S2 sounds. Once sound peaks of interest were identified, features extracted from these peaks, together with the features used for the identification of S1 and S2 sounds, were used to develop an algorithm to classify recorded signals. Overall, 99 features were extracted and statistically analysed using neighborhood component analysis (NCA) to identify the features with the greatest ability to classify recordings. Selected features were then used to train an ensemble classifier to classify abnormal recordings, and hyperparameters were optimised to evaluate the performance of the trained classifier. Thus, a machine learning-based approach for the automatic identification and classification of S1 and S2 sounds, and of normal and abnormal recordings, in real-world noisy recordings using a novel feature set is presented.
    The validity of the proposed algorithms was tested using acoustic signals recorded in real-world, non-controlled environments at four auscultation sites (aortic valve, tricuspid valve, mitral valve, and pulmonary valve), from subjects with and without cardiac diseases, together with recordings from three large public databases. Performance was evaluated in terms of classification accuracy (CA), sensitivity (SE), precision (P+), and F1 score.
    Results: This thesis proposes four different algorithms to automatically classify, from cardiac acoustic signals: the fundamental heart sounds S1 and S2; normal fundamental sounds versus recordings with abnormal additional lub/dub sounds; normal versus abnormal recordings; and recordings with heart valve disorders, namely mitral stenosis (MS), mitral regurgitation (MR), mitral valve prolapse (MVP), aortic stenosis (AS) and murmurs. The results obtained from these algorithms were as follows:
    • The algorithm to classify S1 and S2 sounds achieved an average SE of 91.59% and 89.78%, and F1 scores of 90.65% and 89.42%, for S1 and S2 respectively. 87 features were extracted and statistically studied to identify the top 14 features with the best capability to classify S1, S2 and artifacts. The analysis showed that the most relevant features were those extracted using the Maximal Overlap Discrete Wavelet Transform (MODWT) and the Hilbert transform.
    • The algorithm to classify normal fundamental heart sounds versus abnormal additional lub/dub sounds in the systole or diastole intervals of a cardiac cycle achieved an average SE of 89.15%, P+ of 89.71%, F1 of 89.41%, and CA of 95.11% on the test dataset from the PASCAL database. The top 10 features that achieved the highest weights in classifying these recordings were also identified.
    • Normal versus abnormal classification of recordings using the proposed algorithm achieved a mean CA of 94.172% and SE of 92.38% across recordings from the different databases. Among the top 10 acoustic features identified, the deterministic energy of the sound peaks of interest and the instantaneous frequency extracted using the Hilbert-Huang transform achieved the highest weights.
    • The machine learning-based approach proposed to classify recordings of heart valve disorders (AS, MS, MR, and MVP) achieved an average CA of 98.26% and SE of 95.83%. 99 acoustic features were extracted and their abilities to differentiate these abnormalities were examined using weights obtained from neighborhood component analysis (NCA). The top 10 features with the greatest ability to classify these abnormalities using recordings from the different databases were also identified.
    The achieved results demonstrate the ability of the algorithms to automatically identify and classify cardiac sounds. This work provides the basis for measuring many clinically useful attributes of cardiac acoustic signals and can potentially help in monitoring overall cardiac health over longer durations. The work presented in this thesis is the first of its kind to validate the results using both normal and pathological cardiac acoustic signals, recorded continuously for 5 minutes at four different auscultation sites in non-controlled real-world conditions.
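    As an illustration of the Shannon energy step used above to localise the fundamental heart sounds, the sketch below computes a frame-wise average Shannon energy envelope of a PCG signal; the frame and hop lengths are assumptions, and the adaptive wavelet pre-filtering described in the thesis is omitted.

```python
import numpy as np

def shannon_energy_envelope(pcg, fs, frame_ms=20, hop_ms=10):
    """Frame-wise average Shannon energy of a normalised PCG signal (illustrative settings)."""
    x = np.asarray(pcg, dtype=float)
    x = x / (np.max(np.abs(x)) + 1e-12)                        # normalise to [-1, 1]
    frame = int(fs * frame_ms / 1000)
    hop = int(fs * hop_ms / 1000)
    env = []
    for start in range(0, len(x) - frame + 1, hop):
        seg = x[start:start + frame]
        env.append(-np.mean(seg**2 * np.log(seg**2 + 1e-12)))  # average Shannon energy of the frame
    env = np.asarray(env)
    return (env - env.mean()) / (env.std() + 1e-12)            # standardised envelope

# Candidate S1/S2 locations are then taken as peaks of this envelope;
# the timing rules and prior information used in the thesis are not reproduced here.
```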

    A Novel Soft Computing Based Model For Symptom Analysis & Disease Classification

    In countries like India, many deaths occur every year because diseases are not diagnosed in time. Many people remain deprived of medication, as the doctor-to-population ratio is nearly 1:1700. Every human body and its physiological processes show some symptoms of a diseased condition. The model proposed in this paper analyses those symptoms to identify the disease and its type. In this proposed model, a few selected attributes are considered, which are the symptoms shown by a person suspected of having a particular disease. These attributes are taken as input to the proposed symptom analysis and classification model, a soft computing model that first classifies a sample as diseased or disease free and then, if diseased, predicts its type (if any). A number of diseased and disease-free samples are collected; each sample is a collection of attributes shown/expressed by a human body. With respect to a specific disease, the collected samples form two primary clusters, one diseased and the other disease free. The disease-free cluster may be discarded from further analysis. Depending on the symptoms shown by the diseased samples, every disease has some types, and the diseased cluster of samples can be further clustered according to these types. Those clusters then become the classes of a multiclass classifier used to analyse a new incoming sample.
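    A minimal sketch of the two-stage idea described above, on synthetic attribute vectors; k-means and a random forest stand in for the unspecified soft-computing techniques, so treat this only as an outline of the workflow.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

# Hypothetical data: symptom/attribute vectors and diseased (1) / disease-free (0) labels
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

diseased = X[y == 1]                              # the disease-free cluster is discarded

# Re-cluster the diseased samples into assumed disease types
subtype = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(diseased)

# The subtype clusters become the classes of a multiclass classifier
clf = RandomForestClassifier(random_state=0).fit(diseased, subtype)
new_sample = rng.normal(size=(1, 8))              # attributes of a new incoming sample
print(clf.predict(new_sample))                    # predicted disease type
```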

    Respiratory Sound Analysis for the Evidence of Lung Health

    Audio-based technologies have changed significantly over the years in several different fields, including the healthcare industry. Analysis of lung sounds is a potential source of noninvasive, quantitative and objective information on the status of the pulmonary system. To obtain it, medical professionals listen with a stethoscope to sounds heard over the chest wall at different positions, a procedure known as auscultation that is important in diagnosing respiratory diseases. At times respiratory sounds are interpreted inaccurately because clinicians lack sufficient expertise, or because trainees such as interns and residents misidentify them. We have built a tool to distinguish healthy respiratory sounds from unhealthy ones recorded from patients with respiratory infections. The audio clips were characterised using Linear Predictive Cepstral Coefficient (LPCC)-based features, and the highest accuracy of 99.22% was obtained with a Multi-Layer Perceptron (MLP)-based classifier on the publicly available ICBHI17 respiratory sound dataset [1] of more than 6800 clips. The system also outperformed established works in the literature and other machine learning techniques. In future work we will use larger datasets with other acoustic techniques, along with deep learning-based approaches, and try to identify the nature and severity of infection using respiratory sounds.
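    A hedged sketch of an LPCC-plus-MLP pipeline of the kind described above; the LPC order, number of cepstral coefficients, whole-clip features and training data are assumptions rather than the paper's configuration.

```python
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def lpcc(y, order=12, n_ceps=13):
    """LPCCs from librosa's LPC coefficients via the standard LPC-to-cepstrum recursion."""
    a = librosa.lpc(y, order=order)                      # prediction-error filter [1, a_1, ..., a_p]
    c = np.zeros(n_ceps + 1)
    for m in range(1, n_ceps + 1):
        a_m = a[m] if m <= order else 0.0
        acc = sum((m - k) * a[k] * c[m - k] for k in range(1, m) if k <= order)
        c[m] = -a_m - acc / m
    return c[1:]

# Hypothetical training setup: one LPCC vector per clip, binary healthy/unhealthy labels
rng = np.random.default_rng(0)
clips = [rng.standard_normal(22050) for _ in range(8)]   # stand-ins for 1 s audio clips at 22.05 kHz
labels = [0, 1, 0, 1, 0, 1, 0, 1]

X = np.vstack([lpcc(clip) for clip in clips])
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0).fit(X, labels)
print(clf.predict(X[:2]))                                # sanity check on the training clips
```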

    Medical Diagnostic Systems Using Artificial Intelligence (AI) Algorithms : Principles and Perspectives

    Funding Information: This work was supported in part by the National Research Foundation of Korea grant funded by the Korean Government, Ministry of Science and ICT, under Grant NRF-2020R1A2B5B02002478, and in part by Sejong University through its Faculty Research Program. Peer reviewed.