
    An open access database for the evaluation of heart sound algorithms

    This is an author-created, un-copyedited version of an article published in Physiological Measurement. IOP Publishing Ltd is not responsible for any errors or omissions in this version of the manuscript or any version derived from it. The Version of Record is available online at https://doi.org/10.1088/0967-3334/37/12/2181

    In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram, or PCG), especially automated heart sound segmentation and classification, has been widely studied and reported to have potential value for accurately detecting pathology in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total, collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected in a variety of clinical and nonclinical environments (such as in-home visits) with a variety of equipment, and their lengths range from several seconds to several minutes. This article reports detailed information about the subjects/patients, including demographics (number, age, gender), recordings (number, location, state, and length), associated synchronously recorded signals, sampling frequency, and sensor type. We also provide a brief summary of commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge.
    A description of the PhysioNet/CinC Challenge 2016 is provided, including the main aims, the training and test sets, the hand-corrected annotations for the different heart sound states, the scoring mechanism, and the associated open source code. In addition, several potential benefits of the public heart sound database are discussed.

    This work was supported by the National Institutes of Health (NIH) grant R01-EB001659 from the National Institute of Biomedical Imaging and Bioengineering (NIBIB) and grant R01GM104987 from the National Institute of General Medical Sciences.

    Liu, C.; Springer, DC.; Li, Q.; Moody, B.; Abad Juan, RC.; ... (2016). An open access database for the evaluation of heart sound algorithms. Physiological Measurement 37(12):2181-2213. doi:10.1088/0967-3334/37/12/2181
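    The scoring mechanism mentioned above ranked entries by how well they balanced detection of abnormal and normal recordings. As a rough, simplified sketch (the official Challenge score also weighted "unsure" outputs, which is omitted here), classifiers can be compared by the mean of sensitivity and specificity:

```python
# Simplified sketch of the Challenge-style score: the mean of sensitivity
# and specificity over binary labels (+1 = abnormal, -1 = normal). The
# official 2016 score additionally weights "unsure" classifications;
# that refinement is left out of this illustration.

def challenge_score(labels, predictions):
    """Return (sensitivity, specificity, mean accuracy)."""
    tp = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 1)
    fn = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == -1)
    tn = sum(1 for y, p in zip(labels, predictions) if y == -1 and p == -1)
    fp = sum(1 for y, p in zip(labels, predictions) if y == -1 and p == 1)
    se = tp / (tp + fn) if tp + fn else 0.0
    sp = tn / (tn + fp) if tn + fp else 0.0
    return se, sp, (se + sp) / 2
```

    Averaging the two rates, rather than using raw accuracy, keeps a classifier from scoring well by simply predicting the majority class on an imbalanced database.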

    Phonocardiogram segmentation by using Hidden Markov Models

    This paper is concerned with the segmentation of heart sounds using state-of-the-art Hidden Markov Model technology. For several heart pathologies, analysis of the intervals between the first and second heart sounds is of utmost importance: these intervals are silent in a normal subject, and the presence of murmurs there indicates certain cardiovascular defects and diseases. While the first heart sound can easily be detected if the ECG is available, the second heart sound is much more difficult to detect, given the low amplitude and smoothness of the T-wave. For this segmentation difficulty, the well-known non-stationary statistical properties of Hidden Markov Models, with their temporal signal segmentation capabilities, are well suited. The feature vectors are based on an MFCC representation obtained from a spectral normalisation procedure, which showed better performance than the MFCC representation alone in an isolated speech recognition framework. Experimental results were evaluated on data collected from five different subjects, using a CardioLab system and a Dash family patient monitor. The ECG leads I, II, and III and an electronic stethoscope signal were sampled at 977 samples per second.
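    The segmentation idea behind this kind of HMM approach can be illustrated with a toy Viterbi decoder over the four canonical heart-sound states (S1, systole, S2, diastole). The cyclic topology, probabilities, and the coarse two-symbol observation alphabet below are illustrative placeholders, not values from the paper:

```python
import numpy as np

# Toy illustration of HMM-based PCG segmentation: Viterbi decoding over
# the states S1 -> systole -> S2 -> diastole (cyclic). All probabilities
# and the observation alphabet (0 = "loud", 1 = "quiet") are made-up
# placeholders, not parameters from the paper.

def viterbi(obs, log_A, log_B, log_pi):
    """Most likely state path for a discrete-observation HMM."""
    T, n = len(obs), log_A.shape[0]
    delta = np.empty((T, n))           # best log-probability ending in each state
    psi = np.zeros((T, n), dtype=int)  # backpointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # scores[i, j]: from i to j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

A = np.array([[.5, .5, 0, 0],    # S1 stays or moves to systole
              [0, .5, .5, 0],    # systole -> S2
              [0, 0, .5, .5],    # S2 -> diastole
              [.5, 0, 0, .5]])   # diastole cycles back to S1
B = np.array([[.9, .1],          # S1 is loud
              [.1, .9],          # systole is quiet
              [.9, .1],          # S2 is loud
              [.1, .9]])         # diastole is quiet
pi = np.array([.7, .1, .1, .1])
with np.errstate(divide="ignore"):  # log(0) for forbidden transitions
    path = viterbi([0, 1, 0, 1], np.log(A), np.log(B), np.log(pi))
```

    Decoding the loud/quiet/loud/quiet sequence recovers the state order S1, systole, S2, diastole; in the paper's setting, the discrete symbols are replaced by MFCC-based acoustic scores.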

    Phonocardiogram segmentation by using a hybrid RBF-HMM model

    This paper is concerned with the segmentation of heart sounds using Radial Basis Functions (RBFs) for acoustic modelling, combined with a Hidden Markov Model for modelling the heart sound sequence. The idea behind the use of RBFs is to take advantage of local approximations using the exponentially decaying, localized nonlinearity of the Gaussian function, which increases clustering power relative to Multilayer Perceptrons (MLPs). This neural model can be advantageous over the global approximations to nonlinear input-output mappings provided by MLPs, especially when non-stationary processes need to be accurately modelled. These RBF properties, combined with the non-stationary statistical properties of Hidden Markov Models, can help in the detection of the T-wave, which is fundamental for the detection of the second heart sound. The feature vectors are based on an MFCC representation obtained from a spectral normalisation procedure, which showed better performance than the MFCC representation alone in an isolated speech recognition framework. Experimental results were evaluated on data collected from five different subjects, using a CardioLab system and a Dash family patient monitor. The ECG leads I, II, and III and an electronic stethoscope signal were sampled at 977 samples per second.
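    The "localized nonlinearity" property described above can be sketched in a few lines: each Gaussian hidden unit responds strongly only near its centre, and a linear output layer maps the activations to per-state scores that a hybrid RBF-HMM could use as acoustic scores. The centres, width, and weights here are illustrative placeholders, not parameters from the paper:

```python
import numpy as np

# Minimal sketch of a Gaussian radial-basis layer: localized hidden-unit
# responses followed by a linear read-out giving one score per HMM state.
# Centres, width, and weights are made-up values for illustration only.

def rbf_activations(x, centres, sigma):
    """Gaussian response exp(-||x - c||^2 / (2 sigma^2)) of each hidden unit."""
    d2 = ((centres - x) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def rbf_state_scores(x, centres, sigma, W):
    """Linear read-out over the hidden activations: one score per state."""
    return W @ rbf_activations(x, centres, sigma)

# Two hidden units: a feature vector near the first centre scores higher
# for the state associated with that unit, and the far unit's response
# decays exponentially -- the locality that MLP sigmoid units lack.
centres = np.array([[0.0, 0.0], [1.0, 1.0]])
W = np.eye(2)
scores = rbf_state_scores(np.array([0.1, 0.0]), centres, 0.5, W)
```

    In a full hybrid system these scores would replace the discrete emission probabilities of the HMM, and the centres and weights would be trained on the MFCC feature vectors.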

    A Comprehensive Survey on Heart Sound Analysis in the Deep Learning Era

    Heart sound auscultation has been demonstrated to be beneficial in clinical use for early screening of cardiovascular diseases. Because auscultation requires well-trained professionals, automatic auscultation based on signal processing and machine learning can aid diagnosis and reduce the burden of training professional clinicians. Nevertheless, classic machine learning is limited in the performance improvements it can achieve in the era of big data. Deep learning has achieved better performance than classic machine learning in many research fields, as it employs more complex model architectures with a stronger capability of extracting effective representations. Deep learning has been successfully applied to heart sound analysis in the past years. As most review works on heart sound analysis predate 2017, the present survey is the first comprehensive overview summarising papers on heart sound analysis with deep learning in the six years 2017-2022. We introduce both classic machine learning and deep learning for comparison, and further offer insights into the advances and future research directions of deep learning for heart sound analysis.

    Automatic segmentation of the second cardiac sound by using wavelets and hidden Markov models

    This paper is concerned with the segmentation of the second heart sound (S2) of the phonocardiogram (PCG) into its two acoustic events, the aortic (A2) and pulmonary (P2) components. The aortic valve usually closes before the pulmonary valve, and the delay between these two events, known as the "split", is typically less than 30 milliseconds. S2 splitting, reverse splitting, and reverse occurrence of the A2 and P2 components are the most important aspects of cardiac diagnosis carried out by analysis of the S2 sound. An automatic technique based on the discrete wavelet transform and hidden Markov models is proposed in this paper to segment S2, to estimate the order of occurrence of A2 and P2, and finally to estimate the delay between these two components (the split). A discrete-density hidden Markov model (DDHMM) is used for phonocardiogram segmentation, while embedded continuous-density hidden Markov models are used as acoustic models, which allows S2 to be segmented. Experimental results were evaluated on data collected from five different subjects, using a CardioLab system and a Dash family patient monitor. The ECG leads I, II, and III and an electronic stethoscope signal were sampled at 977 samples per second.
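    The final estimation step described above is simple arithmetic once A2 and P2 have been located: at the 977 Hz sampling rate used in the paper, the split is the sample-index difference converted to milliseconds. A back-of-the-envelope sketch, with made-up index values for illustration:

```python
# Illustration of the split computation: given the sample indices of the
# detected A2 and P2 components in a recording sampled at 977 Hz (the
# rate used in the paper), the split is their time difference; a negative
# value indicates reverse order of the components. The index values used
# below are hypothetical.

FS = 977  # sampling frequency in Hz

def split_ms(a2_index, p2_index, fs=FS):
    """Delay between the A2 and P2 components in milliseconds."""
    return (p2_index - a2_index) / fs * 1000.0
```

    For example, components detected 20 samples apart correspond to a split of about 20.5 ms, under the typical 30 ms bound mentioned above, while a negative result would flag reverse occurrence of A2 and P2.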