
    Classification of phonocardiograms with convolutional neural networks

    Diagnosing heart disease from heart sounds has been studied for many years, motivated by the large number of people worldwide who suffer from heart disease. Studies on heart sounds are usually framed as classification tasks intended to assist physicians; in other words, they form a substructure of clinical decision support systems. In this study, three classes of heart sounds in the PASCAL B training data set (normal, murmur, and extrasystole) are classified. Phonocardiograms obtained from the heart sounds in the data set were used for classification. Both an Artificial Neural Network (ANN) and a Convolutional Neural Network (CNN) were used, and their results compared. The results show that the CNN gives the better performance, with 97.9% classification accuracy, compared to the ANN. Thus, the CNN emerges as the ideal classification tool for heart sounds with variable characteristics.
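The abstract gives no implementation details, so purely as an illustrative sketch, the following NumPy toy shows the core CNN building blocks (a single 1-D convolution filter, ReLU, max pooling) applied to a synthetic phonocardiogram-like segment. The signal, kernel, and all parameters are invented for illustration, not taken from the paper.

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1-D cross-correlation, as computed by a CNN layer."""
    n = len(signal) - len(kernel) + 1
    return np.array([np.dot(signal[i:i + len(kernel)], kernel) for i in range(n)])

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling; any leftover tail is truncated."""
    n = (len(x) // size) * size
    return x[:n].reshape(-1, size).max(axis=1)

# Toy phonocardiogram-like segment: a decaying 50 Hz burst in noise.
rng = np.random.default_rng(0)
t = np.arange(0, 1.0, 1 / 1000)                  # 1 s at 1 kHz
pcg = np.exp(-5 * t) * np.sin(2 * np.pi * 50 * t) + 0.05 * rng.standard_normal(t.size)

kernel = rng.standard_normal(16) * 0.1           # one (untrained) filter
features = max_pool(relu(conv1d(pcg, kernel)), size=4)
print(features.shape)                            # (246,)
```

In a real CNN the kernel weights are learned and many such filters are stacked; the feature vector would then feed further layers rather than a fixed classifier.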

    The Usage of Data Augmentation Strategies on the Detection of Murmur Waves in a PCG Signal

    Cardiac auscultation is a key screening tool for cardiovascular evaluation. When used properly, it speeds up treatment and thus improves the patient's quality of life. However, the analysis and interpretation of heart sound signals is subjective and depends on the physician's experience and domain knowledge. A computer-aided decision (CAD) system that automatically analyses heart sound signals can not only support physicians in their clinical decisions but also release human resources for other tasks. In this paper, and to the best of our knowledge for the first time, a SMOTE strategy is used to boost the performance of a Convolutional Neural Network on the detection of murmur waves. Using the SMOTE strategy, the CNN achieved an overall accuracy of 88.43%.
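As a rough illustration of the SMOTE idea referred to above (the paper does not specify its implementation), the sketch below generates synthetic minority-class samples by interpolating between a sample and one of its nearest neighbours. The feature matrix, neighbour count, and sample counts are invented for illustration.

```python
import numpy as np

def smote(X, n_synthetic, k=3, rng=None):
    """Basic SMOTE: create synthetic minority samples by moving each chosen
    sample a random fraction of the way toward one of its k nearest neighbours."""
    rng = rng or np.random.default_rng(0)
    synthetic = []
    for _ in range(n_synthetic):
        i = rng.integers(len(X))
        d = np.linalg.norm(X - X[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]      # skip the point itself
        j = rng.choice(neighbours)
        lam = rng.random()                        # interpolation factor in [0, 1)
        synthetic.append(X[i] + lam * (X[j] - X[i]))
    return np.vstack(synthetic)

# Toy minority-class ("murmur") feature matrix: 10 samples, 5 features.
murmur_feats = np.random.default_rng(1).random((10, 5))
extra = smote(murmur_feats, n_synthetic=20)
print(extra.shape)                                # (20, 5)
```

Because each synthetic point lies on a segment between two real minority samples, the oversampled class stays within the original feature region rather than adding arbitrary noise.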

    Automatic analysis and classification of cardiac acoustic signals for long term monitoring

    Objective: Cardiovascular diseases are the leading cause of death worldwide resulting in over 17.9 million deaths each year. Most of these diseases are preventable and treatable, but their progression and outcomes are significantly more positive with early-stage diagnosis and proper disease management. Among the approaches available to assist with the task of early-stage diagnosis and management of cardiac conditions, automatic analysis of auscultatory recordings is one of the most promising ones, since it could be particularly suitable for ambulatory/wearable monitoring. Thus, proper investigation of abnormalities present in cardiac acoustic signals can provide vital clinical information to assist long term monitoring. Cardiac acoustic signals, however, are very susceptible to noise and artifacts, and their characteristics vary largely with the recording conditions which makes the analysis challenging. Additionally, there are challenges in the steps used for automatic analysis and classification of cardiac acoustic signals. Broadly, these steps are the segmentation, feature extraction and subsequent classification of recorded signals using selected features. This thesis presents approaches using novel features with the aim to assist the automatic early-stage detection of cardiovascular diseases with improved performance, using cardiac acoustic signals collected in real-world conditions. Methods: Cardiac auscultatory recordings were studied to identify potential features to help in the classification of recordings from subjects with and without cardiac diseases. The diseases considered in this study for the identification of the symptoms and characteristics are the valvular heart diseases due to stenosis and regurgitation, atrial fibrillation, and splitting of fundamental heart sounds leading to additional lub/dub sounds in the systole or diastole interval of a cardiac cycle. 
The localisation of cardiac sounds of interest was performed using an adaptive wavelet-based filtering in combination with the Shannon energy envelope and prior information of fundamental heart sounds. This is a prerequisite step for feature extraction and the subsequent classification of recordings, leading to a more precise diagnosis. Localised segments of S1 and S2 sounds, and artifacts, were used to extract a set of perceptual and statistical features using wavelet transform, homomorphic filtering, Hilbert transform and mel-scale filtering, which were then fed to train an ensemble classifier to interpret S1 and S2 sounds. Once sound peaks of interest were identified, features extracted from these peaks, together with the features used for the identification of S1 and S2 sounds, were used to develop an algorithm to classify recorded signals. Overall, 99 features were extracted and statistically analysed using neighborhood component analysis (NCA) to identify the features which showed the greatest ability in classifying recordings. Selected features were then fed to train an ensemble classifier to classify abnormal recordings, and hyperparameters were optimized to evaluate the performance of the trained classifier. Thus, a machine learning-based approach for the automatic identification and classification of S1 and S2, and normal and abnormal recordings, in real-world noisy recordings using a novel feature set is presented. The validity of the proposed algorithm was tested using acoustic signals recorded in real-world, non-controlled environments at four auscultation sites (aortic valve, tricuspid valve, mitral valve, and pulmonary valve), from subjects with and without cardiac diseases, together with recordings from three large public databases. The performance of the methodology was evaluated in terms of classification accuracy (CA), sensitivity (SE), precision (P+), and F1 score.
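The Shannon energy envelope used above for localising S1/S2 can be sketched from its textbook definition; this is a generic formulation, not the thesis's exact adaptive filtering pipeline, and the synthetic two-burst signal and smoothing window are invented for illustration.

```python
import numpy as np

def shannon_energy_envelope(x, frame=32):
    """Shannon energy E = -x^2 * log(x^2) on the normalised signal, followed by
    a moving average; medium-amplitude S1/S2 lobes are emphasised over noise."""
    x = x / (np.max(np.abs(x)) + 1e-12)          # normalise to [-1, 1]
    e = -x**2 * np.log(x**2 + 1e-12)             # Shannon energy per sample
    kernel = np.ones(frame) / frame
    return np.convolve(e, kernel, mode="same")    # smoothed envelope

# Toy signal: two short 40 Hz bursts at ~0.2 s and ~0.55 s, mimicking S1 and S2.
t = np.arange(0, 1.0, 1 / 1000)
pcg = np.sin(2 * np.pi * 40 * t) * (np.exp(-((t - 0.2) ** 2) / 0.001) +
                                    np.exp(-((t - 0.55) ** 2) / 0.001))
env = shannon_energy_envelope(pcg)
print(env.argmax())   # lands inside one of the two bursts
```

Peak picking on this envelope, combined with expected systole/diastole durations, is the usual way candidate S1/S2 locations are obtained before feature extraction.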
Results: This thesis proposes four different algorithms to automatically classify, using cardiac acoustic signals: fundamental heart sounds (S1 and S2); recordings with normal fundamental sounds versus abnormal additional lub/dub sounds; normal versus abnormal recordings; and recordings with heart valve disorders, namely mitral stenosis (MS), mitral regurgitation (MR), mitral valve prolapse (MVP), aortic stenosis (AS) and murmurs. The results obtained from these algorithms were as follows:
• The algorithm to classify S1 and S2 sounds achieved an average SE of 91.59% and 89.78%, and F1 scores of 90.65% and 89.42%, in classifying S1 and S2, respectively. 87 features were extracted and statistically studied to identify the top 14 features with the best capability to classify S1, S2 and artifacts. The analysis showed that the most relevant features were those extracted using the Maximal Overlap Discrete Wavelet Transform (MODWT) and the Hilbert transform.
• The algorithm to classify normal fundamental heart sounds versus abnormal additional lub/dub sounds in the systole or diastole intervals of a cardiac cycle achieved an average SE of 89.15%, P+ of 89.71%, F1 of 89.41%, and CA of 95.11% on the test dataset from the PASCAL database. The top 10 features that achieved the highest weights in classifying these recordings were also identified.
• Normal versus abnormal classification of recordings using the proposed algorithm achieved a mean CA of 94.172% and SE of 92.38% across recordings from the different databases. Among the top 10 acoustic features identified, the deterministic energy of the sound peaks of interest and the instantaneous frequency extracted using the Hilbert-Huang transform achieved the highest weights.
• The machine learning-based approach proposed to classify recordings of heart valve disorders (AS, MS, MR, and MVP) achieved an average CA of 98.26% and SE of 95.83%.
99 acoustic features were extracted and their ability to differentiate these abnormalities was examined using weights obtained from neighborhood component analysis (NCA). The top 10 features showing the greatest ability to classify these abnormalities using recordings from the different databases were also identified. The achieved results demonstrate the ability of the algorithms to automatically identify and classify cardiac sounds. This work provides the basis for measuring many useful clinical attributes of cardiac acoustic signals and can potentially help in monitoring overall cardiac health over longer durations. The work presented in this thesis is the first of its kind to validate the results using both normal and pathological cardiac acoustic signals, recorded for a long continuous duration of 5 minutes at four different auscultation sites in non-controlled real-world conditions.

    Evaluation of Pre-Trained CNN Models for Cardiovascular Disease Classification: A Benchmark Study

    In this paper, we present an up-to-date benchmark of the most commonly used pre-trained CNN models, using a merged set of three publicly available datasets to obtain a sufficiently large sample range. From the 18th century to the present day, cardiovascular diseases, which are among the most significant health risks globally, have been diagnosed by auscultation of heart sounds with a stethoscope. This method is hard to master and requires a highly experienced physician. Artificial intelligence, and machine learning in particular, is being applied to equip modern medicine with powerful tools to improve medical diagnoses. Image and audio pre-trained convolutional neural network (CNN) models have been used to classify normal and abnormal heartbeats from phonocardiogram signals. We objectively benchmark more than two dozen image-pre-trained CNN models, in addition to two of the most popular audio-based pre-trained CNN models, VGGish and YAMNet, which were developed specifically for audio classification. The experimental results show that the audio-based models are among the best-performing models. In particular, the VGGish model had the highest average validation accuracy and average true positive rate, at 87% and 85%, respectively.
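The benchmarking protocol described above can be sketched generically. The models below are random linear stand-ins, not the actual pre-trained networks (which would be loaded from published checkpoints), and the data, model names, and scoring function are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_model(seed, dim=64):
    """Stand-in for a pre-trained model: a fixed random linear scorer."""
    w = np.random.default_rng(seed).standard_normal(dim)
    return lambda X: X @ w

def validation_accuracy(model, X, y):
    """Fraction of validation samples whose thresholded score matches the label."""
    pred = (model(X) > 0).astype(int)
    return float((pred == y).mean())

# Shared validation split: one fixed set of embeddings and binary labels,
# so every candidate model is scored on identical data.
X_val = rng.standard_normal((200, 64))
y_val = rng.integers(0, 2, 200)

candidates = {"model_a": make_model(1), "model_b": make_model(2), "model_c": make_model(3)}
scores = {name: validation_accuracy(m, X_val, y_val) for name, m in candidates.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

The key design point a benchmark like this must respect is holding the evaluation data and metric fixed across all candidates, so that differences in scores reflect only the models.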

    Classification of urban sounds using motifs and MFCC

    The automatic classification of urban sounds is important for environmental monitoring. This work presents a new method to classify urban sounds, based on discovering frequent patterns (motifs) in the audio signals and using them as classification attributes. To extract the motifs, a multi-resolution discovery method based on SAX is used. For the classification itself, decision trees and SVMs are used. This new method is compared with a widely used approach based on MFCCs. For the experiments, the publicly available UrbanSound dataset was used. The experiments showed that motif attributes are better than MFCCs at discriminating sounds with similar timbre, and that the best results are achieved when both attribute types are combined. In this work, a mobile application for Android was also developed, which allows the developed classification methods to be used in a real-life context and the dataset to be expanded.
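The SAX discretisation underlying the motif discovery can be sketched as follows. The segment count, alphabet size, and example signal are illustrative choices, not the paper's settings.

```python
import numpy as np

def sax_word(segment, n_segments=8, alphabet="abcd"):
    """Symbolic Aggregate approXimation: z-normalise the segment, reduce it with
    Piecewise Aggregate Approximation (PAA), then map each segment mean to a
    letter via Gaussian breakpoints, yielding a short symbolic word."""
    x = (segment - segment.mean()) / (segment.std() + 1e-12)
    paa = x[: len(x) // n_segments * n_segments].reshape(n_segments, -1).mean(axis=1)
    # Breakpoints splitting N(0, 1) into len(alphabet) equiprobable regions.
    breakpoints = {3: [-0.43, 0.43], 4: [-0.6745, 0.0, 0.6745],
                   5: [-0.84, -0.25, 0.25, 0.84]}[len(alphabet)]
    return "".join(alphabet[np.searchsorted(breakpoints, v)] for v in paa)

t = np.linspace(0, 1, 128, endpoint=False)
word = sax_word(np.sin(2 * np.pi * t))   # one sine cycle
print(word)                               # 'cddcbaab': rises, falls, recovers
```

Motif discovery then amounts to finding SAX words that recur across sliding windows, optionally at several window lengths (the multi-resolution aspect); the recurring words become the classification attributes.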

    Classification of Extrasystole Heart Sounds with MFCC Features by using Artificial Neural Network

    25th Signal Processing and Communications Applications Conference, SIU 2017, 15-18 May 2017. In this study, classification of normal and extrasystole heart sounds (HS) was carried out using the PASCAL Heart Sounds database. An extrasystole is a heart sound produced by an extra beat within the heart cycle, unlike the normal heartbeat cycle, and can be felt as palpitations. The occurrence of these sounds in certain age groups may be an indication of tachycardia. In this study, the HS were first normalized, and an elliptic filter was then used for noise reduction. HS features were extracted using Mel-Frequency Cepstrum Coefficients (MFCC) and classified using an Artificial Neural Network. 45 extrasystole heart sounds were used: 30 as training data for classification and the remaining 15 for testing. Certainty, sensitivity and accuracy values were calculated from the confusion matrix. The classification success was calculated as 90%. © 2017 IEEE
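The MFCC extraction step described above can be sketched from its textbook definition (windowing, power spectrum, mel filterbank, log, DCT-II). The frame length, sample rate, and filter counts below are illustrative, not the paper's settings.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr, fmax=None):
    """Triangular filters with centres equally spaced on the mel scale."""
    fmax = fmax or sr / 2
    mels = np.linspace(0, hz_to_mel(fmax), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fb[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising edge
        fb[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling edge
    return fb

def mfcc(frame, sr, n_filters=26, n_coeffs=13):
    """MFCCs of one frame: |FFT|^2 -> mel filterbank -> log -> DCT-II."""
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(len(frame)))) ** 2
    mel_energies = np.log(mel_filterbank(n_filters, len(frame), sr) @ spectrum + 1e-12)
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), 2 * n + 1) / (2 * n_filters))
    return dct @ mel_energies

sr = 4000
t = np.arange(0, 0.064, 1 / sr)               # one 64 ms frame at 4 kHz
coeffs = mfcc(np.sin(2 * np.pi * 100 * t), sr)
print(coeffs.shape)                           # (13,)
```

In the pipeline the abstract describes, one such coefficient vector per frame (over the normalized, elliptic-filtered signal) would form the feature set fed to the ANN.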
