
    Frequency shifting approach towards textual transcription of heartbeat sounds

    Auscultation is an approach for diagnosing many cardiovascular problems. Automatic analysis of heartbeat sounds and extraction of their audio features can assist physicians in diagnosing diseases. Textual transcription allows a continuous heart sound stream to be recorded in a text format, which requires very little memory compared with other audio formats. In addition, text-based data allows indexing and searching techniques to be applied to access critical events. Hence, transcribed heartbeat sounds provide useful information for monitoring the behaviour of a patient over a long duration of time. This paper proposes a frequency shifting method to improve the performance of the transcription. The main objective of this study is to transfer the heartbeat sounds to the music domain. The proposed technique is tested with 100 samples recorded from different heart disease categories. The observed results show that the proposed shifting method significantly improves the performance of the transcription.
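
    A minimal sketch of one way such a frequency shift could be applied, using the analytic signal from scipy; the input file name, the mono assumption, and the 200 Hz shift amount are illustrative assumptions and not the paper's exact method.

```python
# Hedged sketch: single-sideband frequency shifting of a heart sound recording.
# File name, mono input, and the 200 Hz shift are assumptions; the paper's
# exact shifting scheme may differ.
import numpy as np
from scipy.io import wavfile
from scipy.signal import hilbert

fs, x = wavfile.read("heartbeat.wav")          # hypothetical mono input file
x = x.astype(np.float64)
x /= np.max(np.abs(x)) + 1e-12                 # normalise to [-1, 1]

f_shift = 200.0                                # shift amount in Hz (assumed)
t = np.arange(len(x)) / fs
analytic = hilbert(x)                          # analytic signal (no negative frequencies)
shifted = np.real(analytic * np.exp(2j * np.pi * f_shift * t))

wavfile.write("heartbeat_shifted.wav", fs, (shifted * 32767).astype(np.int16))
```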

    Extraction and Assessment of Diagnosis-Relevant Features for Heart Murmur Classification [post-print]

    This paper presents a heart murmur detection and multi-class classification approach via machine learning. We extracted heart sound and murmur features that are of diagnostic importance and developed 16 additional features that are not perceivable by human ears but are valuable for improving murmur classification accuracy. We examined and compared the classification performance of supervised machine learning with the k-nearest neighbor (KNN) and support vector machine (SVM) algorithms. We put together a test repertoire of more than 450 heart sound and murmur episodes to evaluate the performance of murmur classification using cross-validation with 80–20 and 90–10 splits. As clearly demonstrated in our evaluation, the specific set of features chosen in our study resulted in classification accuracy consistently exceeding 90% for both classifiers.
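
    A minimal sketch of the kind of KNN/SVM comparison described above, using scikit-learn on a feature matrix and labels that are placeholders for the precomputed murmur features; the split ratios follow the 80–20 and 90–10 evaluation mentioned in the abstract.

```python
# Hedged sketch: comparing KNN and SVM murmur classifiers on precomputed features.
# X (n_samples x n_features) and y are placeholders for an earlier
# feature-extraction step, not the authors' actual data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(450, 20))                 # placeholder feature matrix
y = rng.integers(0, 5, size=450)               # placeholder multi-class murmur labels

for test_size in (0.2, 0.1):                   # 80-20 and 90-10 splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_size, stratify=y, random_state=0)
    for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                      ("SVM", SVC(kernel="rbf", C=1.0))]:
        model = make_pipeline(StandardScaler(), clf)
        model.fit(X_tr, y_tr)
        acc = accuracy_score(y_te, model.predict(X_te))
        print(f"{name} split={1 - test_size:.0%}/{test_size:.0%} accuracy={acc:.3f}")
```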

    Automatic analysis and classification of cardiac acoustic signals for long term monitoring

    Objective: Cardiovascular diseases are the leading cause of death worldwide, resulting in over 17.9 million deaths each year. Most of these diseases are preventable and treatable, and their progression and outcomes are significantly more positive with early-stage diagnosis and proper disease management. Among the approaches available to assist with early-stage diagnosis and management of cardiac conditions, automatic analysis of auscultatory recordings is one of the most promising, since it is particularly suitable for ambulatory/wearable monitoring. Proper investigation of abnormalities present in cardiac acoustic signals can therefore provide vital clinical information to assist long term monitoring. Cardiac acoustic signals, however, are very susceptible to noise and artifacts, and their characteristics vary largely with the recording conditions, which makes the analysis challenging. Additionally, there are challenges in the steps used for automatic analysis and classification of cardiac acoustic signals: broadly, the segmentation, feature extraction and subsequent classification of recorded signals using selected features. This thesis presents approaches using novel features with the aim of assisting the automatic early-stage detection of cardiovascular diseases, with improved performance, using cardiac acoustic signals collected in real-world conditions.

    Methods: Cardiac auscultatory recordings were studied to identify potential features to help in the classification of recordings from subjects with and without cardiac diseases. The diseases considered in this study are valvular heart diseases due to stenosis and regurgitation, atrial fibrillation, and splitting of the fundamental heart sounds leading to additional lub/dub sounds in the systolic or diastolic interval of a cardiac cycle. The localisation of cardiac sounds of interest was performed using adaptive wavelet-based filtering in combination with the Shannon energy envelope and prior information about the fundamental heart sounds (a brief code sketch of this envelope step follows the abstract). This is a prerequisite step for the feature extraction and subsequent classification of recordings, leading to a more precise diagnosis. Localised segments of S1 and S2 sounds, and artifacts, were used to extract a set of perceptual and statistical features using the wavelet transform, homomorphic filtering, the Hilbert transform and mel-scale filtering, which were then used to train an ensemble classifier to interpret S1 and S2 sounds. Once sound peaks of interest were identified, features extracted from these peaks, together with the features used for the identification of S1 and S2 sounds, were used to develop an algorithm to classify recorded signals. Overall, 99 features were extracted and statistically analysed using neighborhood component analysis (NCA) to identify the features which showed the greatest ability in classifying recordings. Selected features were then used to train an ensemble classifier to classify abnormal recordings, and hyperparameters were optimized to evaluate the performance of the trained classifier. Thus, a machine learning-based approach for the automatic identification and classification of S1 and S2 sounds, and of normal and abnormal recordings, in real-world noisy recordings using a novel feature set is presented.
    The validity of the proposed algorithms was tested using acoustic signals recorded in real-world, non-controlled environments at four auscultation sites (aortic valve, tricuspid valve, mitral valve, and pulmonary valve), from subjects with and without cardiac diseases, together with recordings from three large public databases. Performance was evaluated in terms of classification accuracy (CA), sensitivity (SE), precision (P+) and F1 score.

    Results: This thesis proposes four different algorithms to automatically classify, using cardiac acoustic signals: fundamental heart sounds (S1 and S2); recordings with normal fundamental sounds and with abnormal additional lub/dub sounds; normal and abnormal recordings; and recordings with heart valve disorders, namely mitral stenosis (MS), mitral regurgitation (MR), mitral valve prolapse (MVP), aortic stenosis (AS) and murmurs. The results obtained from these algorithms were as follows:
    • The algorithm to classify S1 and S2 sounds achieved an average SE of 91.59% and 89.78%, and F1 scores of 90.65% and 89.42%, for S1 and S2 respectively. 87 features were extracted and statistically studied to identify the top 14 features which showed the best capabilities in classifying S1, S2 and artifacts. The analysis showed that the most relevant features were those extracted using the maximal overlap discrete wavelet transform (MODWT) and the Hilbert transform.
    • The algorithm to classify normal fundamental heart sounds and abnormal additional lub/dub sounds in the systolic or diastolic intervals of a cardiac cycle achieved an average SE of 89.15%, P+ of 89.71%, F1 of 89.41% and CA of 95.11% using the test dataset from the PASCAL database. The top 10 features that achieved the highest weights in classifying these recordings were also identified.
    • Normal and abnormal classification of recordings using the proposed algorithm achieved a mean CA of 94.172% and SE of 92.38% in classifying recordings from the different databases. Among the top 10 acoustic features identified, the deterministic energy of the sound peaks of interest and the instantaneous frequency extracted using the Hilbert–Huang transform achieved the highest weights.
    • The machine learning-based approach proposed to classify recordings of heart valve disorders (AS, MS, MR and MVP) achieved an average CA of 98.26% and SE of 95.83%. 99 acoustic features were extracted and their abilities to differentiate these abnormalities were examined using weights obtained from neighborhood component analysis (NCA). The top 10 features which showed the greatest abilities in classifying these abnormalities using recordings from the different databases were also identified.
    The achieved results demonstrate the ability of the algorithms to automatically identify and classify cardiac sounds. This work provides the basis for measuring many useful clinical attributes of cardiac acoustic signals and can potentially help in monitoring overall cardiac health over longer durations. The work presented in this thesis is the first of its kind to validate its results using both normal and pathological cardiac acoustic signals recorded continuously for 5 minutes at four different auscultation sites in non-controlled, real-world conditions.
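
    A minimal sketch of the Shannon energy envelope step used to localise candidate S1/S2 peaks, as referenced above; the filter band, frame length and peak-picking thresholds are illustrative assumptions rather than the thesis's tuned values.

```python
# Hedged sketch: Shannon energy envelope for localising candidate S1/S2 peaks.
# Filter band, frame length and peak-picking parameters are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def shannon_energy_envelope(x, fs, frame_ms=20):
    """Normalised average Shannon energy computed over short frames."""
    x = x / (np.max(np.abs(x)) + 1e-12)
    se = -(x ** 2) * np.log(x ** 2 + 1e-12)           # Shannon energy per sample
    frame = int(fs * frame_ms / 1000)
    kernel = np.ones(frame) / frame
    env = np.convolve(se, kernel, mode="same")        # average over each frame
    return (env - env.mean()) / (env.std() + 1e-12)   # normalise

def locate_fundamental_sounds(x, fs):
    # Keep a typical S1/S2 band (assumed 25-400 Hz) before envelope extraction.
    b, a = butter(4, [25 / (fs / 2), 400 / (fs / 2)], btype="band")
    x_f = filtfilt(b, a, x)
    env = shannon_energy_envelope(x_f, fs)
    # Candidate peaks at least 200 ms apart, above a loose threshold (assumed).
    peaks, _ = find_peaks(env, height=0.5, distance=int(0.2 * fs))
    return peaks, env
```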

    DIGITAL ANALYSIS OF CARDIAC ACOUSTIC SIGNALS IN CHILDREN

    Milad El-Segaier, MD, Division of Paediatric Cardiology, Department of Paediatrics, Lund University Hospital, Lund, Sweden.
    Summary: Despite tremendous developments in cardiac imaging, the stethoscope and cardiac auscultation remain the primary diagnostic tools in the evaluation of cardiac pathology. With the advent of miniaturized and powerful technology for data acquisition, display and digital signal processing, the possibilities for detecting cardiac pathology by signal analysis have increased. The objective of this study was to develop a simple, cost-effective diagnostic tool for the analysis of cardiac acoustic signals. Heart sounds and murmurs were recorded in 360 children with a single-channel device and in 15 children with a multiple-channel device. Time intervals between acoustic signals were measured. Short-time Fourier transform (STFT) analysis was used to present the acoustic signals to a digital algorithm that detects heart sounds, defines systole and diastole, and analyses the spectrum of a cardiac murmur. A statistical model for distinguishing physiological murmurs from pathological ones was developed using logistic regression analysis. The receiver operating characteristic (ROC) curve was used to evaluate the discriminating ability of the model, and its sensitivities and specificities were calculated at different cut-off points. Signal deconvolution using blind source separation (BSS) was performed to separate signals from different sources. The first and second heart sounds (S1 and S2) were detected with high accuracy (100% for S1 and 97% for S2), independently of heart rate and the presence of a murmur. Systole and diastole were defined, but only the systolic murmur was analysed in this work. The developed statistical model showed excellent predictive ability (area under the curve, AUC = 0.995) in distinguishing a physiological murmur from a pathological one, with high sensitivity and specificity (98%). In further analyses, deconvolution of the signals was successfully performed using blind source separation, yielding two spatially independent sources: heart sounds (S1 and S2) in one component and a murmur in another. The study supports the view that a cost-effective diagnostic device would be useful in primary health care. It would diminish the need to refer children with cardiac murmurs to cardiac specialists and reduce the load on the health care system. Likewise, it would help to minimize the psychological stress experienced by the children and their parents at an early stage of medical care.
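
    A minimal sketch of a logistic-regression murmur model evaluated with an ROC curve, in the spirit of the statistical model described above; the feature matrix and labels are placeholders, not the study's actual variables.

```python
# Hedged sketch: logistic regression to separate physiological from pathological
# murmurs, evaluated with ROC/AUC. Features and labels are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(360, 8))          # placeholder spectral/timing features
y = rng.integers(0, 2, size=360)       # 0 = physiological, 1 = pathological

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

scores = model.predict_proba(X_te)[:, 1]
fpr, tpr, thresholds = roc_curve(y_te, scores)
print("AUC:", roc_auc_score(y_te, scores))
# Sensitivity and specificity at each cut-off point on the ROC curve:
for fp, tp, th in zip(fpr, tpr, thresholds):
    print(f"cut-off={th:.2f}  sensitivity={tp:.2f}  specificity={1 - fp:.2f}")
```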

    Automatic Heart Sounds Segmentation based on the Correlation Coefficients Matrix for Similar Cardiac Cycles Identification

    This paper proposes a novel automatic heart sound segmentation method for deployment in heart valve defect diagnosis. The method is based on a correlation coefficients matrix calculated between all heart cycles to identify similarity. Firstly, the fundamental heart sounds (S1 and S2) are localized with greater accuracy in the presence of extra gallop sounds, such as S3 and/or S4, and murmurs. Secondly, two similarity-based filtering approaches (using the time and time-frequency domains, respectively) for identifying correlated heart cycles are proposed and evaluated on professionally auscultated clinical heart sounds from adult patients. Results show the superiority of the novel time-frequency method proposed here, particularly in the presence of extra gallop sounds.
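
    A minimal sketch of the correlation-matrix idea: compute pairwise correlation coefficients between already-segmented heart cycles and keep those most similar to the rest. Cycle segmentation, the common resampling length and the similarity threshold are assumptions, not the authors' parameters.

```python
# Hedged sketch: correlation-coefficient matrix between heart cycles for
# similarity identification. Cycles are assumed pre-segmented and are resampled
# to a common length before correlation; the threshold is an assumption.
import numpy as np
from scipy.signal import resample

def similar_cycles(cycles, target_len=2000, threshold=0.8):
    """Return indices of cycles whose mean correlation with the others is high."""
    resampled = np.vstack([resample(c, target_len) for c in cycles])
    corr = np.corrcoef(resampled)                 # (n_cycles x n_cycles) matrix
    np.fill_diagonal(corr, np.nan)                # ignore self-correlation
    mean_similarity = np.nanmean(corr, axis=1)    # average similarity per cycle
    keep = np.where(mean_similarity >= threshold)[0]
    return keep, corr
```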

    Synthesis of normal and abnormal heart sounds using Generative Adversarial Networks

    This doctoral thesis presents different methods proposed for the analysis and synthesis of normal and abnormal heart sounds, making the following contributions to the state of the art: i) an algorithm based on the empirical wavelet transform (EWT) and the normalized average Shannon energy (NASE) was implemented to improve the automatic segmentation stage of heart sounds; ii) different feature-extraction techniques for cardiac signals were implemented using Mel-frequency cepstral coefficients (MFCC), linear prediction coefficients (LPC) and power values, and several machine learning models were tested for the automatic classification of normal and abnormal heart sounds; iii) a model based on Generative Adversarial Networks (GAN) was designed to generate normal synthetic heart sounds, together with a denoising algorithm based on EWT that reduces the number of epochs and the computational cost required by the GAN model; iv) finally, a model based on the GAN architecture is proposed that refines synthetic cardiac signals obtained from a mathematical model using features of real cardiac signals. This model has been named FeaturesGAN and does not require a large database to generate different types of heart sounds. Each of these contributions was validated with different objective methods and compared with works published in the state of the art, obtaining favorable results.
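
    A minimal sketch of the MFCC/LPC feature-extraction stage mentioned in point ii), using librosa; the input file name, sampling rate, number of coefficients and frame parameters are illustrative assumptions.

```python
# Hedged sketch: MFCC, LPC and power features from a heart sound recording
# using librosa. File name, coefficient counts and sampling rate are assumptions.
import numpy as np
import librosa

y, sr = librosa.load("heart_sound.wav", sr=2000)   # hypothetical input, 2 kHz

# 13 Mel-frequency cepstral coefficients, averaged over time frames.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
mfcc_features = mfcc.mean(axis=1)

# Linear prediction coefficients of order 8 over the whole excerpt.
lpc_features = librosa.lpc(y, order=8)

# Average signal power as a simple energy descriptor.
power = np.mean(y ** 2)

feature_vector = np.concatenate([mfcc_features, lpc_features, [power]])
```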

    An open access database for the evaluation of heart sound algorithms

    In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially for automated heart sound segmentation and classification, has been widely studied and has been reported to have potential value for detecting pathology accurately in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total, collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected in a variety of clinical and nonclinical (such as in-home visit) environments with different equipment, and their lengths vary from several seconds to several minutes. This article reports detailed information about the subjects/patients, including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency and sensor type used. We also provide a brief summary of commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand-corrected annotations for different heart sound states, the scoring mechanism, and associated open source code, is provided. In addition, several potential benefits of the public heart sound database are discussed. This work was supported by the National Institutes of Health (NIH) grant R01-EB001659 from the National Institute of Biomedical Imaging and Bioengineering (NIBIB) and R01GM104987 from the National Institute of General Medical Sciences. Liu, C.; Springer, D.C.; Li, Q.; Moody, B.; Abad Juan, R.C.; et al. (2016). An open access database for the evaluation of heart sound algorithms. Physiological Measurement 37(12): 2181–2213. doi:10.1088/0967-3334/37/12/2181
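
    A minimal sketch of loading one recording and its normal/abnormal label from a locally downloaded copy of the Challenge training set; the directory layout (a training-a/ folder of .wav files plus a REFERENCE.csv with -1/1 labels) is an assumption about the public release and may differ from your copy.

```python
# Hedged sketch: reading one PhysioNet/CinC Challenge 2016 training recording
# and its normal/abnormal label from a local copy. The directory layout and
# label encoding (-1 normal, 1 abnormal) are assumptions.
import csv
from scipy.io import wavfile

data_dir = "training-a"                 # assumed local folder of the training set
record = "a0001"                        # assumed record name

fs, signal = wavfile.read(f"{data_dir}/{record}.wav")
print(f"{record}: {len(signal) / fs:.1f} s at {fs} Hz")

# REFERENCE.csv is assumed to map record names to -1 (normal) / 1 (abnormal).
with open(f"{data_dir}/REFERENCE.csv") as f:
    labels = {name: int(lab) for name, lab in csv.reader(f)}
print("label:", "abnormal" if labels[record] == 1 else "normal")
```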

    Deep Neural Networks for the Recognition and Classification of Heart Murmurs Using Neuromorphic Auditory Sensors

    Auscultation is one of the most widely used techniques for detecting cardiovascular diseases, which are among the main causes of death in the world. Heart murmurs are the most common abnormal finding when a patient visits the physician for auscultation. These heart sounds can either be innocent, and thus harmless, or abnormal, which may be a sign of a more serious heart condition. However, the accuracy of primary care physicians and expert cardiologists when auscultating is not high enough to avoid most type-I errors (healthy patients sent for echocardiogram) and type-II errors (pathological patients sent home without medication or treatment). In this paper, the authors present a novel convolutional neural network based tool for classifying between healthy people and pathological patients using a neuromorphic auditory sensor for FPGA that is able to decompose the audio into frequency bands in real time. For this purpose, different networks were trained with the heart murmur information contained in heart sound recordings obtained from nine heart sound databases sourced from multiple research groups. These samples are segmented and preprocessed using the neuromorphic auditory sensor to decompose their audio information into frequency bands, after which sonogram images of the same size are generated. These images have been used to train and test different convolutional neural network architectures. The best results have been obtained with a modified version of the AlexNet model, achieving 97% accuracy (specificity: 95.12%, sensitivity: 93.20%, PhysioNet/CinC Challenge 2016 score: 0.9416). This tool could aid cardiologists and primary care physicians in the auscultation process, improving decision making and reducing type-I and type-II errors. This work was supported by the Ministerio de Economía y Competitividad, grant TEC2016-77785-.
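
    A minimal sketch of the downstream classification stage: fixed-size sonogram-like spectrogram images fed to a small convolutional network in PyTorch. The neuromorphic auditory sensor itself is hardware and is not reproduced here; the spectrogram front end, image size and network shape are assumptions, not the authors' modified AlexNet.

```python
# Hedged sketch: spectrogram "sonogram" images plus a small CNN for
# healthy-vs-pathological classification. This stands in for the neuromorphic
# sensor front end and the modified AlexNet; all shapes are assumptions.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

def sonogram_image(x, fs, size=64):
    """Log-magnitude spectrogram cropped/padded to a fixed size x size image."""
    _, _, sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=128)
    img = np.log1p(sxx)[:size, :size]
    img = np.pad(img, [(0, size - img.shape[0]), (0, size - img.shape[1])])
    return img.astype(np.float32)

class SmallHeartCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)   # healthy vs pathological

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Example forward pass on one random placeholder recording.
fs = 2000
x = np.random.randn(10 * fs)
img = torch.from_numpy(sonogram_image(x, fs)).unsqueeze(0).unsqueeze(0)
logits = SmallHeartCNN()(img)
```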