45 research outputs found

    Evaluation of Pre-Trained CNN Models for Cardiovascular Disease Classification: A Benchmark Study

    In this paper, we present an up-to-date benchmark of the most commonly used pre-trained CNN models, using a merged set of three publicly available datasets to obtain a sufficiently large sample range. Since the 19th century, cardiovascular diseases, which are among the most significant health risks globally, have been diagnosed by auscultation of heart sounds with a stethoscope. This method is subjective, and mastering it requires a highly experienced physician. Artificial intelligence, and machine learning in particular, is being applied to equip modern medicine with powerful tools that improve medical diagnoses. Image and audio pre-trained convolutional neural network (CNN) models have been used to classify normal and abnormal heartbeats from phonocardiogram signals. We objectively benchmark more than two dozen image-pre-trained CNN models, in addition to two of the most popular audio pre-trained CNN models, VGGish and YAMNet, which were developed specifically for audio classification. The experimental results show that the audio-based models are among the best-performing models. In particular, the VGGish model achieved the highest average validation accuracy and average true positive rate, at 87% and 85%, respectively.
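Audio models such as VGGish consume fixed-length log-mel spectrogram patches rather than raw waveforms, so a PCG recording must first be sliced into overlapping examples. The following is an illustrative sketch of that framing step, not the authors' exact pipeline; the 0.96 s window and 50% hop are assumptions borrowed from common VGGish front-ends:

```python
import numpy as np

def frame_signal(x, sr, win_s=0.96, hop_s=0.48):
    """Slice a 1-D signal into fixed-length, half-overlapping patches,
    as commonly done before computing per-patch log-mel spectrograms
    for VGGish-style audio models."""
    win = int(win_s * sr)
    hop = int(hop_s * sr)
    n = 1 + max(0, (len(x) - win) // hop)
    return np.stack([x[i * hop : i * hop + win] for i in range(n)])

# 10 s of synthetic PCG-like audio at 2 kHz
sr = 2000
x = np.sin(2 * np.pi * 30 * np.arange(10 * sr) / sr)
patches = frame_signal(x, sr)
print(patches.shape)  # (19, 1920): nineteen 0.96 s patches
```

Each patch would then be converted to a log-mel spectrogram and passed to the pre-trained network independently, with the per-patch predictions pooled into a recording-level decision.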

    Automatic analysis and classification of cardiac acoustic signals for long term monitoring

    Objective: Cardiovascular diseases are the leading cause of death worldwide, resulting in over 17.9 million deaths each year. Most of these diseases are preventable and treatable, but their progression and outcomes are significantly more positive with early-stage diagnosis and proper disease management. Among the approaches available to assist with early-stage diagnosis and management of cardiac conditions, automatic analysis of auscultatory recordings is one of the most promising, since it could be particularly suitable for ambulatory/wearable monitoring. Thus, proper investigation of abnormalities present in cardiac acoustic signals can provide vital clinical information to assist long-term monitoring. Cardiac acoustic signals, however, are very susceptible to noise and artifacts, and their characteristics vary greatly with the recording conditions, which makes the analysis challenging. Additionally, there are challenges in the steps used for the automatic analysis and classification of cardiac acoustic signals; broadly, these steps are the segmentation, feature extraction and subsequent classification of recorded signals using selected features. This thesis presents approaches using novel features with the aim of assisting the automatic early-stage detection of cardiovascular diseases, with improved performance, using cardiac acoustic signals collected in real-world conditions. Methods: Cardiac auscultatory recordings were studied to identify potential features to help in the classification of recordings from subjects with and without cardiac diseases. The diseases considered in this study for the identification of symptoms and characteristics are valvular heart diseases due to stenosis and regurgitation, atrial fibrillation, and splitting of fundamental heart sounds leading to additional lub/dub sounds in the systole or diastole interval of a cardiac cycle.
The localisation of cardiac sounds of interest was performed using an adaptive wavelet-based filtering in combination with the Shannon energy envelope and prior information of fundamental heart sounds. This is a prerequisite step for the feature extraction and subsequent classification of recordings, leading to a more precise diagnosis. Localised segments of S1 and S2 sounds, and artifacts, were used to extract a set of perceptual and statistical features using wavelet transform, homomorphic filtering, Hilbert transform and mel-scale filtering, which were then fed to train an ensemble classifier to interpret S1 and S2 sounds. Once sound peaks of interest were identified, features extracted from these peaks, together with the features used for the identification of S1 and S2 sounds, were used to develop an algorithm to classify recorded signals. Overall, 99 features were extracted and statistically analysed using neighborhood component analysis (NCA) to identify the features which showed the greatest ability in classifying recordings. Selected features were then fed to train an ensemble classifier to classify abnormal recordings, and hyperparameters were optimized to evaluate the performance of the trained classifier. Thus, a machine learning-based approach for the automatic identification and classification of S1 and S2, and normal and abnormal recordings, in real-world noisy recordings using a novel feature set is presented. The validity of the proposed algorithm was tested using acoustic signals recorded in real-world, non-controlled environments at four auscultation sites (aortic valve, tricuspid valve, mitral valve, and pulmonary valve), from the subjects with and without cardiac diseases; together with recordings from the three large public databases. The performance metrics of the methodology in relation to classification accuracy (CA), sensitivity (SE), precision (P+), and F1 score, were evaluated. 
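The Shannon energy envelope mentioned above is a standard tool for localising S1/S2 peaks in PCG signals, since it emphasises medium-intensity components over both low-level noise and sharp spikes. A minimal sketch follows; the frame length and the final zero-mean normalisation are illustrative choices, not values taken from the thesis:

```python
import numpy as np

def shannon_energy_envelope(x, frame=40, eps=1e-12):
    """Normalised average Shannon energy of a PCG signal:
    E = -mean(x^2 * log(x^2)) over short frames of the
    amplitude-normalised signal, then standardised."""
    x = x / (np.max(np.abs(x)) + eps)      # normalise amplitude to [-1, 1]
    x2 = x ** 2
    se = -x2 * np.log(x2 + eps)            # per-sample Shannon energy
    n = len(se) // frame
    env = se[: n * frame].reshape(n, frame).mean(axis=1)   # frame averages
    return (env - env.mean()) / (env.std() + eps)          # zero-mean, unit-variance

rng = np.random.default_rng(0)
x = rng.standard_normal(4000) * np.exp(-np.linspace(0, 5, 4000))
env = shannon_energy_envelope(x)
print(env.shape)  # (100,)
```

Peaks in this envelope, gated by the expected timing of fundamental heart sounds, give candidate S1/S2 locations for the subsequent feature-extraction stage.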
Results: This thesis proposes four different algorithms to automatically classify, using cardiac acoustic signals: fundamental heart sounds (S1 and S2); normal fundamental sounds versus abnormal additional lub/dub sound recordings; normal versus abnormal recordings; and recordings with heart valve disorders, namely mitral stenosis (MS), mitral regurgitation (MR), mitral valve prolapse (MVP), aortic stenosis (AS) and murmurs. The results obtained from these algorithms were as follows:
• The algorithm to classify S1 and S2 sounds achieved an average SE of 91.59% and 89.78%, and an F1 score of 90.65% and 89.42%, in classifying S1 and S2, respectively. 87 features were extracted and statistically studied to identify the top 14 features with the best capabilities in classifying S1, S2, and artifacts. The analysis showed that the most relevant features were those extracted using the Maximal Overlap Discrete Wavelet Transform (MODWT) and the Hilbert transform.
• The algorithm to classify normal fundamental heart sounds versus abnormal additional lub/dub sounds in the systole or diastole intervals of a cardiac cycle achieved an average SE of 89.15%, P+ of 89.71%, F1 of 89.41%, and CA of 95.11% using the test dataset from the PASCAL database. The top 10 features that achieved the highest weights in classifying these recordings were also identified.
• Normal versus abnormal classification of recordings using the proposed algorithm achieved a mean CA of 94.172% and SE of 92.38% in classifying recordings from the different databases. Among the top 10 acoustic features identified, the deterministic energy of the sound peaks of interest and the instantaneous frequency extracted using the Hilbert-Huang transform achieved the highest weights.
• The machine learning-based approach proposed to classify recordings of heart valve disorders (AS, MS, MR, and MVP) achieved an average CA of 98.26% and SE of 95.83%.
99 acoustic features were extracted and their ability to differentiate these abnormalities was examined using weights obtained from neighborhood component analysis (NCA). The top 10 features showing the greatest ability to classify these abnormalities across recordings from the different databases were also identified. The achieved results demonstrate the ability of the algorithms to automatically identify and classify cardiac sounds. This work provides the basis for measuring many clinically useful attributes of cardiac acoustic signals and can potentially help in monitoring overall cardiac health over longer durations. The work presented in this thesis is the first of its kind to validate results using both normal and pathological cardiac acoustic signals, recorded for a long continuous duration of 5 minutes at four different auscultation sites in non-controlled, real-world conditions.
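The performance metrics reported throughout the thesis (CA, SE, P+, F1) follow directly from confusion-matrix counts. A minimal sketch, with hypothetical counts for illustration only:

```python
def metrics(tp, fp, fn, tn):
    """Classification metrics as defined in the thesis: sensitivity (SE),
    precision (P+), F1 score and classification accuracy (CA)."""
    se = tp / (tp + fn)                   # true-positive rate
    p = tp / (tp + fp)                    # positive predictive value
    f1 = 2 * p * se / (p + se)            # harmonic mean of P+ and SE
    ca = (tp + tn) / (tp + fp + fn + tn)  # overall accuracy
    return se, p, f1, ca

# hypothetical confusion counts for a normal/abnormal classifier
se, p, f1, ca = metrics(tp=92, fp=10, fn=8, tn=90)
print(round(se, 3), round(p, 3), round(f1, 3), round(ca, 3))  # 0.92 0.902 0.911 0.91
```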

    An approach for automatic identification of fundamental and additional sounds from cardiac sound recordings

    This paper presents an approach for the automatic segmentation of cardiac events from non-invasive sound recordings, without the need for an auxiliary reference signal. In addition, methods are proposed to subsequently differentiate cardiac events corresponding to normal cardiac cycles from those due to abnormal activity of the heart. The detection of abnormal sounds is based on a model built with parameters obtained by feature extraction from segments previously identified as normal fundamental heart sounds. The proposed algorithm achieved a sensitivity of 91.79% and 89.23% for the identification of the normal fundamental S1 and S2 sounds, respectively, and a true positive (TP) rate of 81.48% for abnormal additional sounds. These results were obtained using the PASCAL Classifying Heart Sounds Challenge (CHSC) database.

    A generative model with deep-fake augmentation for phonocardiogram and electrocardiogram signals using the LSGAN and CycleGAN architectures

    To diagnose a range of cardiac conditions, it is important to conduct an accurate evaluation of both phonocardiogram (PCG) and electrocardiogram (ECG) data. Artificial intelligence and machine learning-based computer-assisted diagnostics are becoming increasingly commonplace in modern medicine, assisting clinicians in making life-or-death decisions. The requirement for an enormous amount of training data to establish a deep learning-based technique is an empirical challenge in medicine, and it increases the risk of personal information being misused. As a direct result of this issue, there has been an explosion in the study of methods for creating synthetic patient data, and researchers have attempted to generate synthetic ECG or PCG readings. To balance the dataset, ECG data were first generated from the MIT-BIH arrhythmia database using LSGAN and CycleGAN. Next, using VGGNet, studies were conducted to classify arrhythmias in the synthesized ECG signals. The synthesized signals closely resembled the original signals and performed well, achieving a precision of 91.20%, a recall of 89.52% and an F1 score of 90.35%.
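LSGAN differs from the original GAN formulation only in its objective: least-squares losses replace the cross-entropy terms, which stabilises training and penalises samples far from the decision boundary. A minimal numpy sketch of the two losses, with illustrative discriminator scores (the values are not from the paper):

```python
import numpy as np

def lsgan_losses(d_real, d_fake):
    """Least-squares GAN objectives: the discriminator pushes real
    scores toward 1 and fake scores toward 0, while the generator
    pushes the discriminator's fake scores toward 1."""
    d_loss = 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)
    g_loss = 0.5 * np.mean((d_fake - 1.0) ** 2)
    return d_loss, g_loss

# hypothetical discriminator outputs for batches of real and synthetic ECG windows
d_real = np.array([0.9, 0.8, 1.0])
d_fake = np.array([0.1, 0.2, 0.0])
d_loss, g_loss = lsgan_losses(d_real, d_fake)
print(round(d_loss, 4), round(g_loss, 4))  # 0.0167 0.4083
```

In a full training loop these losses would be backpropagated through the discriminator and generator networks respectively, alternating between the two updates.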

    An audio processing pipeline for acquiring diagnostic quality heart sounds via mobile phone

    Recently, heart sound signals captured using mobile phones have been employed to develop data-driven heart disease detection systems. Such signals are generally captured in person by trained clinicians, who can determine whether the recorded heart sounds are of diagnosable quality. However, mobile phones have the potential to support heart health diagnostics even where access to trained medical professionals is limited. To adopt mobile phones as self-diagnostic tools for the masses, we need a mechanism to automatically establish that heart sounds recorded by non-expert users in uncontrolled conditions have the required quality for diagnostic purposes. This paper proposes a quality assessment and enhancement pipeline for heart sounds captured using mobile phones. The pipeline analyzes a heart sound and determines whether it has the required quality for diagnostic tasks. In cases where the quality of the captured signal is below the required threshold, the pipeline can improve it by applying quality enhancement algorithms. Using this pipeline, we can also provide feedback to users regarding the cause of low-quality signal capture and guide them towards a successful one. We conducted a survey of a group of thirteen clinicians with auscultation skills and experience; the results were used to inform and validate the proposed quality assessment and enhancement pipeline. We observed a high level of agreement between the survey results and fundamental design decisions within the proposed pipeline. The results also indicate that the proposed pipeline can reduce our dependency on trained clinicians for the capture of diagnosable heart sounds.
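A quality-assessment stage of this kind could, for instance, reject recordings whose signal-to-noise ratio falls below a threshold before any diagnostic model sees them. The following is a simplified heuristic sketch, not the pipeline proposed in the paper; the frame length and the 10 dB threshold are assumptions:

```python
import numpy as np

def estimate_snr_db(x, frame=200):
    """Crude quality proxy: ratio of the strongest frame energy
    (heart sound bursts) to the quietest frame energy (background),
    expressed in dB. Illustrative heuristic only."""
    n = len(x) // frame
    energies = (x[: n * frame].reshape(n, frame) ** 2).mean(axis=1)
    return 10.0 * np.log10(energies.max() / (energies.min() + 1e-12))

def is_diagnosable(x, threshold_db=10.0):
    """Accept a recording only if its estimated SNR clears the threshold."""
    return estimate_snr_db(x) >= threshold_db

burst = np.zeros(2000)
burst[500:700] = 1.0              # a loud event over a quiet background
print(is_diagnosable(burst))      # True
```

A real pipeline would combine several such indicators (clipping, stethoscope contact noise, heart-rate plausibility) and map each failure mode to user-facing guidance.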

    Classification of phonocardiograms with convolutional neural networks

    The diagnosis of heart diseases from heart sounds has been studied for many years, reflecting the global prevalence of heart disease. Studies on heart sounds are usually framed as classification tasks to assist doctors; in other words, they form a substructure of clinical decision support systems. In this study, three classes of heart sounds in the PASCAL B training data set (normal, murmur, and extrasystole) are classified. Phonocardiograms obtained from the heart sounds in the data set were used for classification. Both an Artificial Neural Network (ANN) and a Convolutional Neural Network (CNN) were used for classification so that the results could be compared. The obtained results show that the CNN gives the better result, with 97.9% classification accuracy, compared to the ANN. Thus, the CNN emerges as a suitable classification tool for heart sounds with variable characteristics.

    A Comprehensive Survey on Heart Sound Analysis in the Deep Learning Era

    Heart sound auscultation has been demonstrated to be beneficial in clinical use for early screening of cardiovascular diseases. Because auscultation demands well-trained professionals, automatic auscultation benefiting from signal processing and machine learning can aid diagnosis and reduce the burden of training professional clinicians. Nevertheless, classic machine learning is limited in its performance improvements in the era of big data. Deep learning has achieved better performance than classic machine learning in many research fields, as it employs more complex model architectures with a stronger capability of extracting effective representations. Deep learning has been successfully applied to heart sound analysis in the past years. As most review works on heart sound analysis were published before 2017, the present survey is the first comprehensive overview summarising papers on heart sound analysis with deep learning over the six years 2017-2022. We introduce both classic machine learning and deep learning for comparison, and further offer insights into the advances and future research directions in deep learning for heart sound analysis.

    Synthesis of normal and abnormal heart sounds using Generative Adversarial Networks

    This doctoral thesis presents several methods for the analysis and synthesis of normal and abnormal heart sounds, making the following contributions to the state of the art: i) an algorithm based on the empirical wavelet transform (EWT) and the normalised average Shannon energy (NASE) was implemented to improve the automatic segmentation stage for heart sounds; ii) several feature extraction techniques for cardiac signals were implemented using Mel-frequency cepstral coefficients (MFCC), linear prediction coefficients (LPC) and power values, and several machine learning models were tested for the automatic classification of normal and abnormal heart sounds; iii) a model based on Generative Adversarial Networks (GAN) was designed to generate synthetic normal heart sounds, and a denoising algorithm using EWT was also implemented, reducing the number of epochs and the computational cost required by the GAN model; iv) finally, a model based on the GAN architecture is proposed that refines synthetic cardiac signals obtained from a mathematical model using features of real cardiac signals. This model, named FeaturesGAN, does not require a large database to generate different types of heart sounds. Each of these contributions was validated with objective methods and compared with published state-of-the-art work, obtaining favourable results.

    NRC-Net: Automated noise robust cardio net for detecting valvular cardiac diseases using optimum transformation method with heart sound signals

    Cardiovascular diseases (CVDs) can be effectively treated when detected early, significantly reducing mortality rates. Traditionally, phonocardiogram (PCG) signals have been utilized for detecting cardiovascular disease due to their cost-effectiveness and simplicity. Nevertheless, various environmental and physiological noises frequently affect PCG signals, compromising their essential distinctive characteristics. The prevalence of this issue in overcrowded and resource-constrained hospitals can compromise the accuracy of medical diagnoses. Therefore, this study aims to discover the optimal transformation method for detecting CVDs from noisy heart sound signals and to propose a noise-robust network that improves CVD classification performance. To identify the optimal transformation method for noisy heart sound data, mel-frequency cepstral coefficients (MFCCs), the short-time Fourier transform (STFT), the constant-Q nonstationary Gabor transform (CQT) and the continuous wavelet transform (CWT) were used with VGG16. Furthermore, we propose a novel convolutional recurrent neural network (CRNN) architecture called noise robust cardio net (NRC-Net), a lightweight model to classify mitral regurgitation, aortic stenosis, mitral stenosis, mitral valve prolapse, and normal heart sounds using PCG signals contaminated with respiratory and random noises. An attention block is included to extract important temporal and spatial features from the noisy, corrupted heart sounds. The results of this study indicate that CWT is the optimal transformation method for noisy heart sound signals. When evaluated on the GitHub heart sound dataset, CWT demonstrates an accuracy of 95.69% with VGG16, which is 1.95% better than the second-best CQT transformation technique. Moreover, our proposed NRC-Net with CWT obtained an accuracy of 97.4%, which is 1.71% higher than VGG16.
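The CWT scalograms identified as the optimal transformation above are 2-D time-frequency images that can be fed to VGG16-style networks. A self-contained numpy sketch of a Morlet-wavelet CWT follows; the wavelet parameter w and the normalisation are illustrative choices, not the paper's exact settings:

```python
import numpy as np

def morlet_cwt(x, sr, freqs, w=6.0):
    """Continuous wavelet transform with a Morlet mother wavelet,
    computed by FFT-based correlation. Returns a (len(freqs), len(x))
    scalogram magnitude, the kind of time-frequency image used as
    input to CNN classifiers."""
    n = len(x)
    X = np.fft.fft(x)
    t = (np.arange(n) - n // 2) / sr            # wavelet time axis, centred
    out = np.empty((len(freqs), n))
    for i, f in enumerate(freqs):
        # Morlet wavelet: complex sinusoid under a Gaussian with ~w cycles
        envelope = np.exp(-0.5 * (2 * np.pi * f * t / w) ** 2)
        wavelet = np.exp(2j * np.pi * f * t) * envelope
        wavelet /= np.sqrt(np.abs(wavelet).sum())   # crude scale normalisation
        W = np.fft.fft(np.fft.ifftshift(wavelet))   # zero-phase alignment
        out[i] = np.abs(np.fft.ifft(X * np.conj(W)))
    return out

sr = 1000
t_sig = np.arange(sr) / sr
scalogram = morlet_cwt(np.sin(2 * np.pi * 50 * t_sig), sr,
                       np.array([25.0, 50.0, 100.0]))
print(scalogram.shape)  # (3, 1000)
```

For a pure 50 Hz tone, the 50 Hz row of the scalogram carries the most energy, which is exactly the frequency selectivity that lets a downstream CNN separate murmur bands from fundamental heart sounds.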