23 research outputs found

    An open access database for the evaluation of heart sound algorithms

    This is an author-created, un-copyedited version of an article published in Physiological Measurement. IOP Publishing Ltd is not responsible for any errors or omissions in this version of the manuscript or any version derived from it. The Version of Record is available online at https://doi.org/10.1088/0967-3334/37/12/2181

    In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially automated heart sound segmentation and classification, has been widely studied and has been reported to have potential value for the accurate detection of pathology in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total, collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected in a variety of clinical and nonclinical (such as in-home visit) environments with a variety of equipment, and their lengths ranged from several seconds to several minutes. This article reports detailed information about the subjects/patients, including demographics (number, age, gender), recordings (number, location, state and length), associated synchronously recorded signals, sampling frequency and sensor type. We also provide a brief summary of commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge.
    A description of the PhysioNet/CinC Challenge 2016 is provided, including the main aims, the training and test sets, the hand-corrected annotations for the different heart sound states, the scoring mechanism, and associated open source code. In addition, several potential benefits of the public heart sound database are discussed.

    This work was supported by the National Institutes of Health (NIH) grant R01-EB001659 from the National Institute of Biomedical Imaging and Bioengineering (NIBIB) and R01GM104987 from the National Institute of General Medical Sciences.

    Liu, C.; Springer, D.C.; Li, Q.; Moody, B.; Abad Juan, R.C.; ... (2016). An open access database for the evaluation of heart sound algorithms. Physiological Measurement. 37(12):2181-2213. doi:10.1088/0967-3334/37/12/2181
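    The Challenge's scoring mechanism balanced sensitivity and specificity over normal and abnormal recordings. A minimal sketch of such a balanced score follows (the function name is ours, and this simplification omits the weighting the actual 2016 metric applied to recordings flagged as noisy):

```python
def challenge_score(y_true, y_pred):
    """Balanced mean of sensitivity and specificity.

    y_true / y_pred: sequences of labels, 1 = abnormal, 0 = normal.
    A simplified stand-in for the 2016 Challenge metric, which
    additionally weighted recordings by signal quality.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    se = tp / (tp + fn) if tp + fn else 0.0  # sensitivity on abnormal cases
    sp = tn / (tn + fp) if tn + fp else 0.0  # specificity on normal cases
    return (se + sp) / 2
```

    Averaging the two rates keeps a classifier from scoring well by always predicting the majority class.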

    Efficient method for events detection in phonocardiographic signals

    The auscultation of the heart is still the first basic analysis tool used to evaluate the functional state of the heart, as well as the first indicator used to refer a patient to a cardiologist. To improve the diagnostic capabilities of auscultation, signal processing algorithms are currently being developed to assist the physician at primary care centers for adult and pediatric populations. A basic task in diagnosis from the phonocardiogram is to detect the events (main and additional sounds, murmurs and clicks) present in the cardiac cycle. This is usually done by applying a threshold and detecting the events that exceed it. However, this method often fails to detect the main sounds when additional sounds and murmurs are present, or it may merge several events into a single one. In this paper we present a reliable method to detect the events present in the phonocardiogram, even in the presence of heart murmurs or additional sounds. The method detects relative maxima in the amplitude envelope of the phonocardiogram and computes a set of parameters associated with each event. A set of characteristics is then extracted from each event to aid in its identification. In addition, the morphology of the murmurs is detected, which helps differentiate diseases that can occur at the same temporal location within the cardiac cycle. The algorithms have been applied to real normal heart sounds and murmurs, achieving satisfactory results.

    This work has been supported by Fundación Séneca of Región de Murcia and Ministerio de Ciencia y Tecnología of Spain, under grants PB/63/FS/02 and TIC2003-09400-C04-02, respectively.
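    The peak-picking approach described, detecting relative maxima in an amplitude envelope rather than simple threshold crossings, can be sketched as follows. This is a stdlib-only illustration, not the authors' implementation: the Shannon energy envelope is one common choice of amplitude envelope, and the window length and threshold are placeholders.

```python
import math

def shannon_envelope(signal, win=5):
    """Smoothed Shannon energy envelope of a signal.

    Shannon energy (-x^2 * log(x^2)) emphasizes medium-amplitude
    components, so main sounds stand out even next to murmurs.
    """
    peak = max(abs(s) for s in signal) or 1.0
    norm = [s / peak for s in signal]
    energy = [-(s * s) * math.log(s * s) if s != 0 else 0.0 for s in norm]
    half = win // 2
    env = []
    for i in range(len(energy)):  # centered moving-average smoothing
        lo, hi = max(0, i - half), min(len(energy), i + half + 1)
        env.append(sum(energy[lo:hi]) / (hi - lo))
    return env

def relative_maxima(env, threshold):
    """Indices of local peaks in the envelope above a threshold."""
    return [i for i in range(1, len(env) - 1)
            if env[i] > threshold
            and env[i] >= env[i - 1] and env[i] > env[i + 1]]
```

    Each detected index would then seed the per-event parameter and characteristic extraction the abstract describes.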

    A Comprehensive Survey on Heart Sound Analysis in the Deep Learning Era

    Heart sound auscultation has been demonstrated to be beneficial in clinical practice for early screening of cardiovascular diseases. Because auscultation requires well-trained professionals, automatic auscultation based on signal processing and machine learning can support auxiliary diagnosis and reduce the burden of training professional clinicians. Nevertheless, classic machine learning offers limited performance improvements in the era of big data. Deep learning has achieved better performance than classic machine learning in many research fields, as it employs more complex model architectures with a stronger capability of extracting effective representations. Deep learning has been successfully applied to heart sound analysis in the past years. As most review works on heart sound analysis predate 2017, the present survey is the first comprehensive overview of papers on heart sound analysis with deep learning in the six years 2017-2022. We introduce both classic machine learning and deep learning for comparison, and further offer insights into the advances and future research directions in deep learning for heart sound analysis.

    Automatic analysis and classification of cardiac acoustic signals for long term monitoring

    Objective: Cardiovascular diseases are the leading cause of death worldwide, resulting in over 17.9 million deaths each year. Most of these diseases are preventable and treatable, but their progression and outcomes are significantly more positive with early-stage diagnosis and proper disease management. Among the approaches available to assist with early-stage diagnosis and management of cardiac conditions, automatic analysis of auscultatory recordings is one of the most promising, since it is particularly suitable for ambulatory/wearable monitoring. Proper investigation of abnormalities present in cardiac acoustic signals can thus provide vital clinical information to assist long-term monitoring. Cardiac acoustic signals, however, are very susceptible to noise and artifacts, and their characteristics vary greatly with the recording conditions, which makes the analysis challenging. There are additional challenges in the steps used for automatic analysis and classification of cardiac acoustic signals; broadly, these steps are segmentation, feature extraction and subsequent classification of the recorded signals using selected features. This thesis presents approaches using novel features, with the aim of assisting automatic early-stage detection of cardiovascular diseases with improved performance, using cardiac acoustic signals collected in real-world conditions. Methods: Cardiac auscultatory recordings were studied to identify potential features to help in the classification of recordings from subjects with and without cardiac diseases. The diseases considered in this study are valvular heart diseases due to stenosis and regurgitation, atrial fibrillation, and splitting of the fundamental heart sounds leading to additional lub/dub sounds in the systole or diastole interval of a cardiac cycle.
    The localisation of cardiac sounds of interest was performed using adaptive wavelet-based filtering in combination with the Shannon energy envelope and prior information about the fundamental heart sounds. This is a prerequisite step for feature extraction and the subsequent classification of recordings, leading to a more precise diagnosis. Localised segments of S1 and S2 sounds, and artifacts, were used to extract a set of perceptual and statistical features using the wavelet transform, homomorphic filtering, the Hilbert transform and mel-scale filtering, which were then used to train an ensemble classifier to interpret S1 and S2 sounds. Once sound peaks of interest were identified, features extracted from these peaks, together with the features used for the identification of S1 and S2 sounds, were used to develop an algorithm to classify the recorded signals. Overall, 99 features were extracted and statistically analysed using neighborhood component analysis (NCA) to identify the features with the greatest ability to classify recordings. The selected features were then used to train an ensemble classifier to classify abnormal recordings, and hyperparameters were optimized to evaluate the performance of the trained classifier. Thus, a machine learning-based approach for the automatic identification and classification of S1 and S2 sounds, and of normal and abnormal recordings, in real-world noisy recordings using a novel feature set is presented. The validity of the proposed algorithm was tested using acoustic signals recorded in real-world, non-controlled environments at four auscultation sites (aortic valve, tricuspid valve, mitral valve, and pulmonary valve) from subjects with and without cardiac diseases, together with recordings from three large public databases. The performance of the methodology was evaluated in terms of classification accuracy (CA), sensitivity (SE), precision (P+), and F1 score.
    Results: This thesis proposes four different algorithms to automatically classify, from cardiac acoustic signals: fundamental heart sounds (S1 and S2); normal fundamental sounds versus abnormal additional lub/dub sound recordings; normal versus abnormal recordings; and recordings with heart valve disorders, namely mitral stenosis (MS), mitral regurgitation (MR), mitral valve prolapse (MVP), aortic stenosis (AS) and murmurs. The results obtained from these algorithms were as follows:

    • The algorithm to classify S1 and S2 sounds achieved an average SE of 91.59% and 89.78%, and an F1 score of 90.65% and 89.42%, for S1 and S2 respectively. 87 features were extracted and statistically studied to identify the top 14 features with the best capability of classifying S1, S2 and artifacts. The analysis showed that the most relevant features were those extracted using the Maximal Overlap Discrete Wavelet Transform (MODWT) and the Hilbert transform.

    • The algorithm to classify normal fundamental heart sounds versus abnormal additional lub/dub sounds in the systole or diastole intervals of a cardiac cycle achieved an average SE of 89.15%, P+ of 89.71%, F1 of 89.41%, and CA of 95.11% on the test dataset from the PASCAL database. The top 10 features with the highest weights in classifying these recordings were also identified.

    • Normal versus abnormal classification of recordings using the proposed algorithm achieved a mean CA of 94.172% and SE of 92.38% across the different databases. Among the top 10 acoustic features identified, the deterministic energy of the sound peaks of interest and the instantaneous frequency extracted using the Hilbert-Huang transform achieved the highest weights.

    • The machine learning-based approach proposed to classify recordings of heart valve disorders (AS, MS, MR, and MVP) achieved an average CA of 98.26% and SE of 95.83%.
    99 acoustic features were extracted and their ability to differentiate these abnormalities was examined using the weights obtained from the neighborhood component analysis (NCA). The top 10 features with the greatest ability to classify these abnormalities across recordings from the different databases were also identified. The achieved results demonstrate the ability of the algorithms to automatically identify and classify cardiac sounds. This work provides the basis for measurements of many useful clinical attributes of cardiac acoustic signals and can potentially help in monitoring overall cardiac health over longer durations. The work presented in this thesis is the first of its kind to validate the results using both normal and pathological cardiac acoustic signals, recorded for a long continuous duration of 5 minutes at four different auscultation sites in non-controlled real-world conditions.
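    Once the NCA weights are fitted, the feature selection described above reduces to ranking the 99 candidate features by weight and keeping the top ten. A stdlib-only sketch of that final step (the weights themselves would come from fitting NCA on the training features, which is not reproduced here; the names are placeholders):

```python
def top_k_features(names, weights, k=10):
    """Return the k feature names with the largest weights.

    `weights` would come from a neighborhood component analysis
    (NCA) fit on the training features; here they are just numbers.
    """
    ranked = sorted(zip(names, weights), key=lambda nw: nw[1], reverse=True)
    return [name for name, _ in ranked[:k]]
```

    The selected subset is then what gets fed to the ensemble classifier, keeping the model small and the retained features interpretable.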

    A Human-Machine Framework for the Classification of Phonocardiograms

    In this thesis, we present and evaluate a framework for combining machine learning algorithms, crowd workers, and experts in the classification of heart sound recordings. The development of a hybrid human-machine framework for heart sound recordings is motivated by past success in utilizing human computation to solve problems in medicine, as well as the use of human-machine frameworks in other domains. We describe the methods that decide when and how to escalate the analysis of heart sound recordings to different resources and incorporate their decisions into a final classification. We present and discuss the results of the framework, which was tested with a number of different machine classifiers and a group of crowd workers from Amazon’s Mechanical Turk. We also evaluate how crowd workers perform in various heart sound analysis tasks, and how they compare with machine classifiers. In addition, we investigate how machine and human analyses are affected by different types of heart sounds and provide a strategy for involving experts when these methods are uncertain. We conclude that the use of a hybrid framework is a viable method for heart sound classification.
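    The escalation decisions described, sending a recording to crowd workers, and ultimately to an expert, when the cheaper resources are uncertain, can be sketched as a confidence-threshold cascade. The function name and thresholds here are illustrative, not the thesis's actual rules:

```python
def route_recording(machine_conf, crowd_conf=None,
                    machine_threshold=0.9, crowd_threshold=0.7):
    """Decide which resource should produce the final classification.

    machine_conf: confidence of the machine classifier in its label.
    crowd_conf:   aggregate confidence of crowd workers, if consulted.
    Returns one of 'machine', 'crowd', 'expert'.
    """
    if machine_conf >= machine_threshold:
        return 'machine'        # classifier is confident enough on its own
    if crowd_conf is None:
        return 'crowd'          # escalate to crowd workers first
    if crowd_conf >= crowd_threshold:
        return 'crowd'          # crowd consensus is strong enough
    return 'expert'             # both machine and crowd uncertain
```

    The cascade reserves scarce expert time for exactly the recordings on which both the classifier and the crowd disagree or hesitate.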

    Narrative review of the role of artificial intelligence to improve aortic valve disease management

    Valvular heart disease (VHD) is a chronic progressive condition with an increasing prevalence in the Western world due to aging populations. VHD is often diagnosed at a late stage, when patients are symptomatic and the outcomes of therapy, including valve replacement, may be sub-optimal due to the development of secondary complications, including left ventricular (LV) dysfunction. The clinical application of artificial intelligence (AI), including machine learning (ML), has promise in supporting not only early and more timely diagnosis, but also hastening patient referral and ensuring optimal treatment of VHD. As physician auscultation lacks accuracy in the diagnosis of significant VHD, computer-aided auscultation (CAA) with the help of commercially available digital stethoscopes improves the detection and classification of heart murmurs. Although little used in current clinical practice, CAA can screen large populations at low cost with high accuracy for VHD and facilitate appropriate patient referral. Echocardiography remains the next step in assessment and management planning, and AI is delivering major changes by speeding training, improving image quality through pattern recognition and image sorting, and automating the measurement of multiple variables, thereby improving accuracy. Furthermore, AI has the potential to hasten patient disposition through automated alerts for red-flag findings, as well as decision support in dealing with results. In management, there is great potential for ML-enabled tools to support comprehensive disease monitoring and individualized treatment decisions. Using data from multiple sources, ranging from demographic and clinical risk data to imaging variables and electronic reports from electronic medical records, specific patient phenotypes may be identified that are associated with greater risk, or modeled to estimate the trajectory of VHD progression. Finally, AI algorithms are of proven value in planning intervention, facilitating transcatheter valve replacement through automated measurement of anatomical dimensions derived from imaging data to improve valve selection, valve sizing and method of delivery.

    An audio processing pipeline for acquiring diagnostic quality heart sounds via mobile phone

    Recently, heart sound signals captured using mobile phones have been employed to develop data-driven heart disease detection systems. Such signals are generally captured in person by trained clinicians, who can determine whether the recorded heart sounds are of diagnosable quality. However, mobile phones have the potential to support heart health diagnostics even where access to trained medical professionals is limited. To adopt mobile phones as self-diagnostic tools for the masses, we need a mechanism to automatically establish that heart sounds recorded by non-expert users in uncontrolled conditions have the required quality for diagnostic purposes. This paper proposes a quality assessment and enhancement pipeline for heart sounds captured using mobile phones. The pipeline analyzes a heart sound and determines whether it has the required quality for diagnostic tasks. In cases where the quality of the captured signal is below the required threshold, the pipeline can improve it by applying quality enhancement algorithms. Using this pipeline, we can also provide feedback to users on the cause of a low-quality capture and guide them towards a successful one. We conducted a survey of a group of thirteen clinicians with auscultation skills and experience, and used its results to inform and validate the proposed quality assessment and enhancement pipeline. We observed a high level of agreement between the survey results and the fundamental design decisions within the proposed pipeline. The results also indicate that the proposed pipeline can reduce our dependency on trained clinicians for the capture of diagnosable heart sounds.
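    A quality gate of the kind the pipeline implements can be sketched as follows. This is a deliberately rough illustration checking only duration and clipping; the paper's assessment uses richer, clinician-informed criteria, and the function name and thresholds here are ours:

```python
def assess_quality(samples, rate,
                   min_seconds=5.0, clip_level=0.99, max_clip_ratio=0.01):
    """Very rough quality gate for a phone-recorded heart sound.

    samples: sequence of amplitude values; rate: samples per second.
    Returns (ok, feedback) so the app can guide the user to a
    successful re-capture when the recording is rejected.
    """
    duration = len(samples) / rate
    if duration < min_seconds:
        return False, "recording too short - hold the phone in place longer"
    peak = max(abs(s) for s in samples) or 1.0
    # fraction of samples at (or essentially at) full scale
    clipped = sum(1 for s in samples if abs(s) / peak >= clip_level)
    if clipped / len(samples) > max_clip_ratio:
        return False, "signal is clipping - reduce pressure or gain"
    return True, "quality acceptable for analysis"
```

    Returning a human-readable reason alongside the verdict mirrors the pipeline's feedback role: the check rejects a capture and tells the non-expert user what to change.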