
    An approach for automatic identification of fundamental and additional sounds from cardiac sounds recordings.

    This paper presents an approach for automatic segmentation of cardiac events from non-invasive sound recordings, without requiring an auxiliary reference signal. In addition, methods are proposed to subsequently differentiate cardiac events which correspond to normal cardiac cycles from those which are due to abnormal activity of the heart. The detection of abnormal sounds is based on a model built with parameters obtained by feature extraction from those segments previously identified as normal fundamental heart sounds. The proposed algorithm achieved sensitivities of 91.79% and 89.23% for the identification of the normal fundamental heart sounds S1 and S2, respectively, and a true positive (TP) rate of 81.48% for abnormal additional sounds. These results were obtained using the PASCAL Classifying Heart Sounds Challenge (CHSC) database.
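The reported figures follow from simple confusion counts: sensitivity is the fraction of true events that the detector recovers. A minimal sketch of the computation, with hypothetical counts (not the paper's data):

```python
def sensitivity(tp: int, fn: int) -> float:
    """Sensitivity (recall): detected true events / all true events."""
    return tp / (tp + fn)

# Hypothetical counts chosen only to illustrate the formula.
s1_sens = sensitivity(tp=123, fn=11)   # 123 / 134, roughly 0.918
```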

    Estimation of Surrogate Respiration and Detection of Sleep Apnea Events from Dynamic Data Mining of Multiple Cardiorespiratory Sensors

    This research investigates an approach to derive a respiration waveform from heart sound signals, and compares the waveform obtained in this way with those obtained from alternative methods that derive respiration waveforms from measured ECG signals. The investigations indicate that HSR can be a cost-effective alternative to respiratory vests for analyzing cardiorespiratory dynamics in clinical diagnostics and wellness assessments. The derived respiratory rate was further used to classify Type III sleep apnea periods using recurrence analysis. Detection of the patterns underlying sleep apnea could open up opportunities for researchers to better understand and predict symptoms of disorders linked with sleep apnea, such as hypertension, sudden infant death syndrome, and elevated risk of heart attack. Surrogate respiratory signals derived from heart sounds (HSR) are found to have 32% and 36% correlation with the actual respiratory signals recorded in upright and supine positions, respectively, compared to EMD-derived respiration signals (EDR), which have 18% and 26% correlation with the respiration waveforms measured in upright and supine positions, respectively. Wavelet-derived respiration (WDR) signals show a higher wave-to-wave correlation (55% and 55%) than the HSR and EDR waveforms, but the respiratory sinus arrhythmia (RSA), zero-crossing intervals, and respiratory rates of the HSR correlate better with the measured values than those from the EDR and WDR signals. Three models were implemented using recurrence analysis to classify sleep apnea events and were compared with a model derived from a vectorized time series. Advanced predictive modeling tools such as decision trees, neural networks and regression models were used to classify sleep apnea events from non-apneic events.
Model comparison shows that the preliminary analysis model (a vectorized time series consisting of nasal respiration, its time-lagged components, and heart rate) has a lower misclassification rate (10%) than the recurrence models (Model 1: 20%, Model 2: 14%, Model 3: 12%).
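The abstract does not spell out how the surrogate respiration waveform is obtained from heart sounds; one common idea is to track the respiratory amplitude modulation of the heart-sound envelope. A minimal numpy sketch of that idea, where the 0.25 Hz modulation, sampling rate, and smoothing window are all illustrative assumptions rather than the paper's method:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (Hilbert-transform construction)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(X * h)

def surrogate_respiration(pcg, fs, win_s=0.5):
    """Amplitude envelope of the heart-sound signal, smoothed down to the
    respiratory band, used as a surrogate respiration waveform."""
    env = np.abs(analytic_signal(pcg))
    win = int(win_s * fs)
    return np.convolve(env, np.ones(win) / win, mode='same')

# Synthetic check: a 0.25 Hz respiratory modulation of a 50 Hz "heart sound".
fs = 500
t = np.arange(0, 20, 1 / fs)
resp = 1 + 0.5 * np.sin(2 * np.pi * 0.25 * t)
pcg = resp * np.sin(2 * np.pi * 50 * t)
hsr = surrogate_respiration(pcg, fs)
# Correlation with the true modulation, edges trimmed to avoid boundary effects.
corr = np.corrcoef(hsr[fs:-fs], resp[fs:-fs])[0, 1]
```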

    Time-shared channel identification for adaptive noise cancellation in breath sound extraction

    Abstract: Noise artifacts are one of the key obstacles to applying continuous monitoring and computer-assisted analysis of lung sounds. Traditional adaptive noise cancellation (ANC) methodologies work reasonably well when signal and noise are stationary and independent. Clinical lung sound auscultation, however, encounters an acoustic environment in which breath sounds are not stationary and often correlate with noise; consequently, the capability of ANC becomes significantly compromised. This paper introduces a new methodology for extracting authentic lung sounds from noise-corrupted measurements. Unlike traditional noise cancellation methods that rely on either frequency band separation or signal/noise independence to achieve noise reduction, this methodology combines the traditional noise canceling methods with the unique feature of time-split stages in breathing sounds. Employing a multi-sensor system, the method first applies a high-pass filter to eliminate out-of-band noise, and then performs time-shared blind identification and noise cancellation recursively from breathing cycle to cycle. Since no frequency separation or signal/noise independence is required, this method potentially offers robust and reliable noise reduction, complementing the traditional methods.
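The traditional ANC baseline that the paper extends is typically an LMS adaptive filter driven by a reference noise sensor. A self-contained sketch of that baseline (the breath signal, mixing filter, and step size are invented for illustration, and the paper's time-shared blind identification is not reproduced here):

```python
import numpy as np

def lms_anc(primary, reference, n_taps=16, mu=0.005):
    """LMS adaptive noise canceller: estimate the noise component of
    `primary` from the correlated `reference` and subtract it; the
    error signal is the cleaned output."""
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for i in range(n_taps - 1, len(primary)):
        x = reference[i - n_taps + 1:i + 1][::-1]  # newest sample first
        e = primary[i] - w @ x                     # cleaned output sample
        w += 2 * mu * e * x                        # LMS weight update
        out[i] = e
    return out

rng = np.random.default_rng(0)
n = 5000
t = np.arange(n) / 1000.0
breath = np.sin(2 * np.pi * 2.0 * t)      # stand-in "breath sound"
noise_src = rng.standard_normal(n)        # reference noise sensor
# The primary sensor hears the breath plus a filtered copy of the noise.
primary = breath + np.convolve(noise_src, [0.5, 0.3], mode='same')
cleaned = lms_anc(primary, noise_src)
```

After convergence the residual noise power in `cleaned` is far below the noise power in `primary`, since the 2-tap mixing filter is well within the canceller's 16-tap span.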

    Analysis of the structure of time-frequency information in electromagnetic brain signals

    This thesis encompasses methodological developments and experimental work aimed at revealing information contained in time, frequency, and time–frequency representations of electromagnetic, specifically magnetoencephalographic, brain signals. The work can be divided into six endeavors. First, it was shown that sound slopes increasing in intensity from undetectable to audible elicit event-related responses (ERRs) that predict behavioral sound detection. This provides an opportunity to use non-invasive brain measures in hearing assessment. Second, the actively debated generation mechanism of ERRs was examined using novel analysis techniques, which showed that auditory stimulation did not result in phase reorganization of ongoing neural oscillations, and that processes additive to the oscillations accounted for the generation of ERRs. Third, the prerequisites for the use of continuous wavelet transform in the interrogation of event-related brain processes were established. Subsequently, it was found that auditory stimulation resulted in an intermittent dampening of ongoing oscillations. Fourth, information on the time–frequency structure of ERRs was used to reveal that, depending on measurement condition, amplitude differences in averaged ERRs were due to changes in temporal alignment or in amplitudes of the single-trial ERRs. Fifth, a method that exploits mutual information of spectral estimates obtained with several window lengths was introduced. It allows the removal of frequency-dependent noise slopes and the accentuation of spectral peaks. Finally, a two-dimensional statistical data representation was developed, wherein all frequency components of a signal are made directly comparable according to the spectral distribution of their envelope modulations by using the fractal property of the wavelet transform. This representation reveals noise-buried processes and describes their envelope behavior. These examinations support two general conjectures.
The stability of structures, or the level of stationarity, in a signal determines the appropriate analysis method and can be used as a measure to reveal processes that may not be observable with other available analysis approaches. The results also indicate that transient neural activity, reflected in ERRs, is a viable means of representing information in the human brain.
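The continuous wavelet transform underlying several of these endeavors can be sketched in a few lines with a Morlet wavelet. The implementation below is a generic illustration, not the thesis code; the centre frequencies, `w0`, and the two-burst test signal are assumptions:

```python
import numpy as np

def morlet_cwt(x, fs, freqs, w0=6.0):
    """Continuous wavelet transform with a Morlet wavelet.
    Returns a (len(freqs), len(x)) complex time-frequency representation."""
    n = len(x)
    t = (np.arange(n) - n // 2) / fs          # wavelet time axis, centred
    out = np.empty((len(freqs), n), dtype=complex)
    for k, f in enumerate(freqs):
        s = w0 / (2 * np.pi * f)              # scale giving centre frequency f
        psi = np.exp(1j * w0 * t / s) * np.exp(-t ** 2 / (2 * s ** 2))
        psi /= np.sqrt(s)                     # scale normalisation
        out[k] = np.convolve(x, np.conj(psi[::-1]), mode='same')
    return out

fs = 200
t = np.arange(0, 4, 1 / fs)
# 10 Hz oscillation in the first half, 30 Hz in the second half.
x = np.where(t < 2, np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 30 * t))
freqs = np.array([5.0, 10.0, 20.0, 30.0, 40.0])
tfr = np.abs(morlet_cwt(x, fs, freqs))
```

The magnitude map `tfr` localises each oscillation in both time and frequency: the 10 Hz row dominates the first half of the signal and the 30 Hz row the second.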

    Automatic analysis and classification of cardiac acoustic signals for long term monitoring

    Objective: Cardiovascular diseases are the leading cause of death worldwide, resulting in over 17.9 million deaths each year. Most of these diseases are preventable and treatable, but their progression and outcomes are significantly more positive with early-stage diagnosis and proper disease management. Among the approaches available to assist with the task of early-stage diagnosis and management of cardiac conditions, automatic analysis of auscultatory recordings is one of the most promising, since it could be particularly suitable for ambulatory/wearable monitoring. Thus, proper investigation of abnormalities present in cardiac acoustic signals can provide vital clinical information to assist long-term monitoring. Cardiac acoustic signals, however, are very susceptible to noise and artifacts, and their characteristics vary greatly with the recording conditions, which makes the analysis challenging. Additionally, there are challenges in the steps used for automatic analysis and classification of cardiac acoustic signals. Broadly, these steps are the segmentation, feature extraction and subsequent classification of recorded signals using selected features. This thesis presents approaches using novel features with the aim to assist the automatic early-stage detection of cardiovascular diseases with improved performance, using cardiac acoustic signals collected in real-world conditions. Methods: Cardiac auscultatory recordings were studied to identify potential features to help in the classification of recordings from subjects with and without cardiac diseases. The diseases considered in this study for the identification of the symptoms and characteristics are the valvular heart diseases due to stenosis and regurgitation, atrial fibrillation, and splitting of fundamental heart sounds leading to additional lub/dub sounds in the systole or diastole interval of a cardiac cycle.
The localisation of cardiac sounds of interest was performed using adaptive wavelet-based filtering in combination with the Shannon energy envelope and prior information of fundamental heart sounds. This is a prerequisite step for the feature extraction and subsequent classification of recordings, leading to a more precise diagnosis. Localised segments of S1 and S2 sounds, and artifacts, were used to extract a set of perceptual and statistical features using wavelet transform, homomorphic filtering, Hilbert transform and mel-scale filtering, which were then fed to train an ensemble classifier to interpret S1 and S2 sounds. Once sound peaks of interest were identified, features extracted from these peaks, together with the features used for the identification of S1 and S2 sounds, were used to develop an algorithm to classify recorded signals. Overall, 99 features were extracted and statistically analysed using neighborhood component analysis (NCA) to identify the features which showed the greatest ability in classifying recordings. Selected features were then fed to train an ensemble classifier to classify abnormal recordings, and hyperparameters were optimized to evaluate the performance of the trained classifier. Thus, a machine learning-based approach for the automatic identification and classification of S1 and S2, and normal and abnormal recordings, in real-world noisy recordings using a novel feature set is presented. The validity of the proposed algorithm was tested using acoustic signals recorded in real-world, non-controlled environments at four auscultation sites (aortic valve, tricuspid valve, mitral valve, and pulmonary valve), from subjects with and without cardiac diseases, together with recordings from three large public databases. The performance metrics of the methodology in relation to classification accuracy (CA), sensitivity (SE), precision (P+), and F1 score were evaluated.
Results: This thesis proposes four different algorithms to automatically classify fundamental heart sounds, S1 and S2; normal fundamental sounds and abnormal additional lub/dub sound recordings; normal and abnormal recordings; and recordings with heart valve disorders, namely mitral stenosis (MS), mitral regurgitation (MR), mitral valve prolapse (MVP), aortic stenosis (AS) and murmurs, using cardiac acoustic signals. The results obtained from these algorithms were as follows: • The algorithm to classify S1 and S2 sounds achieved an average SE of 91.59% and 89.78%, and F1 score of 90.65% and 89.42%, in classifying S1 and S2, respectively. 87 features were extracted and statistically studied to identify the top 14 features which showed the best capabilities in classifying S1 and S2, and artifacts. The analysis showed that the most relevant features were those extracted using the Maximum Overlap Discrete Wavelet Transform (MODWT) and Hilbert transform. • The algorithm to classify normal fundamental heart sounds and abnormal additional lub/dub sounds in the systole or diastole intervals of a cardiac cycle achieved an average SE of 89.15%, P+ of 89.71%, F1 of 89.41%, and CA of 95.11% using the test dataset from the PASCAL database. The top 10 features that achieved the highest weights in classifying these recordings were also identified. • Normal and abnormal classification of recordings using the proposed algorithm achieved a mean CA of 94.172%, and SE of 92.38%, in classifying recordings from the different databases. Among the top 10 acoustic features identified, the deterministic energy of the sound peaks of interest and the instantaneous frequency extracted using the Hilbert-Huang transform achieved the highest weights. • The machine learning-based approach proposed to classify recordings of heart valve disorders (AS, MS, MR, and MVP) achieved an average CA of 98.26% and SE of 95.83%.
99 acoustic features were extracted and their abilities to differentiate these abnormalities were examined using weights obtained from the neighborhood component analysis (NCA). The top 10 features which showed the greatest abilities in classifying these abnormalities using recordings from the different databases were also identified. The achieved results demonstrate the ability of the algorithms to automatically identify and classify cardiac sounds. This work provides the basis for measurements of many useful clinical attributes of cardiac acoustic signals and can potentially help in monitoring overall cardiac health over long durations. The work presented in this thesis is the first of its kind to validate the results using both normal and pathological cardiac acoustic signals, recorded for a long continuous duration of 5 minutes at four different auscultation sites in non-controlled real-world conditions.
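The Shannon energy envelope used for localisation follows directly from its standard definition, E = -(1/N) Σ x²·log x² on the amplitude-normalised signal, which emphasises medium-intensity components such as S1/S2 over low-level noise. A numpy illustration on a synthetic two-burst signal (the frame length and test signal are assumptions, not the thesis parameters):

```python
import numpy as np

def shannon_energy_envelope(x, fs, frame_s=0.02):
    """Average Shannon energy per frame of the amplitude-normalised signal:
    E = -(1/N) * sum(x^2 * log(x^2))."""
    x = x / (np.max(np.abs(x)) + 1e-12)
    n = int(frame_s * fs)
    n_frames = len(x) // n
    env = np.empty(n_frames)
    for i in range(n_frames):
        seg = x[i * n:(i + 1) * n] ** 2
        env[i] = -np.mean(seg * np.log(seg + 1e-12))
    return env

# Synthetic PCG-like signal: two short bursts in a quiet background.
fs = 2000
t = np.arange(0, 1.0, 1 / fs)
x = 0.01 * np.random.default_rng(1).standard_normal(len(t))
for onset in (0.1, 0.4):                  # "S1" at 0.1 s, "S2" at 0.4 s
    idx = (t > onset) & (t < onset + 0.05)
    x[idx] += np.sin(2 * np.pi * 60 * t[idx])
env = shannon_energy_envelope(x, fs)      # peaks at the burst frames
```

Thresholding or peak-picking on `env` then yields candidate S1/S2 locations for the subsequent feature-extraction stage.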

    Aerospace Medicine and Biology: A continuing bibliography with indexes (supplement 153)

    This bibliography lists 175 reports, articles, and other documents introduced into the NASA scientific and technical information system in March 1976.

    An audio processing pipeline for acquiring diagnostic quality heart sounds via mobile phone

    Recently, heart sound signals captured using mobile phones have been employed to develop data-driven heart disease detection systems. Such signals are generally captured in person by trained clinicians who can determine if the recorded heart sounds are of diagnosable quality. However, mobile phones have the potential to support heart health diagnostics, even where access to trained medical professionals is limited. To adopt mobile phones as self-diagnostic tools for the masses, we would need a mechanism to automatically establish that heart sounds recorded by non-expert users in uncontrolled conditions have the required quality for diagnostic purposes. This paper proposes a quality assessment and enhancement pipeline for heart sounds captured using mobile phones. The pipeline analyzes a heart sound and determines if it has the required quality for diagnostic tasks. Also, in cases where the quality of the captured signal is below the required threshold, the pipeline can improve the quality by applying quality enhancement algorithms. Using this pipeline, we can also provide feedback to users regarding the cause of low-quality signal capture and guide them towards a successful capture. We conducted a survey of a group of thirteen clinicians with auscultation skills and experience. The results of this survey were used to inform and validate the proposed quality assessment and enhancement pipeline. We observed a high level of agreement between the survey results and fundamental design decisions within the proposed pipeline. Also, the results indicate that the proposed pipeline can reduce our dependency on trained clinicians for capture of diagnosable heart sounds.
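The paper's quality criteria were informed by the clinician survey and are not reproduced here; as a toy illustration of the general idea of an automatic quality gate, one can compare in-band and out-of-band energy. The band limits and acceptance threshold below are invented for the sketch:

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Power of x in the [lo, hi) Hz band, via the FFT periodogram."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(len(x), 1 / fs)
    band = (f >= lo) & (f < hi)
    return spec[band].sum() / len(x)

def quality_ok(x, fs, min_ratio=2.0):
    """Accept a recording only if energy in a typical heart-sound band
    (20-150 Hz) dominates out-of-band energy (150-800 Hz)."""
    signal = band_power(x, fs, 20, 150)
    noise = band_power(x, fs, 150, 800)
    return signal / (noise + 1e-12) >= min_ratio

fs = 2000
t = np.arange(0, 2, 1 / fs)
clean = np.sin(2 * np.pi * 60 * t)                 # in-band tone: accepted
noisy = 0.1 * clean + np.sin(2 * np.pi * 400 * t)  # high-band dominated: rejected
```

A real pipeline would combine several such checks (saturation, dropouts, periodicity of heart cycles) before deciding whether to enhance or re-record.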

    Analysis of the second heart sound for measurement of split

    A2 and P2 are the two components of the second heart sound S2. A2 is produced by the closure of the aortic valve and P2 by the closure of the pulmonary valve. Usually the pulmonary valve closes after the aortic valve, so the two closures are separated by a time delay known as the split. The discrete wavelet transform (DWT) and continuous wavelet transform (CWT) are used to measure the split between the A2 and P2 components of the second heart sound in normal and pathological phonocardiogram (PCG) signals. To measure the split, A2 and P2 are identified and the delay between them is estimated: the DWT is used to separate the two components, while the CWT is used to determine the number of frequency components of S1 and S2 and to measure the split. A normalized split Zn can also be calculated: if Zn is less than one, the split is normal; otherwise, it is pathological.
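The core measurement can be sketched without the wavelet machinery: on an isolated S2 segment, the split is the interval between the two largest envelope peaks. The synthetic A2/P2 bursts and all parameters below are assumptions for illustration, not the paper's method:

```python
import numpy as np

def split_ms(s2, fs, smooth_s=0.0125, min_sep_s=0.02):
    """Estimate the A2-P2 split (in ms) as the interval between the two
    largest peaks of the rectified, moving-average-smoothed envelope."""
    win = max(1, int(smooth_s * fs))
    env = np.convolve(np.abs(s2), np.ones(win) / win, mode='same')
    first = int(np.argmax(env))
    masked = env.copy()
    lo = max(0, first - int(min_sep_s * fs))
    hi = min(len(env), first + int(min_sep_s * fs))
    masked[lo:hi] = 0                     # suppress the first peak's region
    second = int(np.argmax(masked))
    return abs(second - first) / fs * 1000.0

fs = 4000
t = np.arange(0, 0.12, 1 / fs)
burst = lambda c: np.exp(-((t - c) ** 2) / (2 * 0.004 ** 2)) * np.sin(2 * np.pi * 80 * t)
s2 = burst(0.03) + 0.6 * burst(0.07)      # A2 at 30 ms, weaker P2 at 70 ms
```

On this synthetic segment the estimator recovers a split close to the constructed 40 ms.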

    A Comparison of Wavelet and Simplicity-Based Heart Sound and Murmur Segmentation Methods

    Stethoscopes are the most commonly used medical devices for diagnosing heart conditions because they are inexpensive, noninvasive, and light enough to be carried around by a clinician. Auscultation with a stethoscope requires considerable skill and experience, but the introduction of digital stethoscopes allows for the automation of this task. Auscultation waveform segmentation, which is the process of determining the boundaries of heart sound and murmur segments, is the primary challenge in automating the diagnosis of various heart conditions. The purpose of this thesis is to improve the accuracy and efficiency of established techniques for detecting, segmenting, and classifying heart sounds and murmurs in digitized phonocardiogram audio files. Two separate segmentation techniques based on the discrete wavelet transform (DWT) and the simplicity transform are integrated into a MATLAB software system that is capable of automatically detecting and classifying sound segments. The performance of the two segmentation methods for recognizing normal heart sounds and several different heart murmurs is compared by quantifying the results with clinical and technical metrics. The two clinical metrics are the false negative detection rate (FNDR) and the false positive detection rate (FPDR), which count heart cycles rather than sound segments. The wavelet and simplicity methods have respective FNDRs of 4% and 9%, so it is unlikely that either method would miss a heart condition. However, the respective FPDRs of 22% and 0% signify that the wavelet method is likely to detect false heart conditions, while the simplicity method is not. The two technical metrics are the true murmur detection rate (TMDR) and the false murmur detection rate (FMDR), which count sound segments rather than heart cycles. Both methods are equally likely to detect true murmurs given their 83% TMDR.
However, the respective FMDRs of 13% and 0% imply that the wavelet method is susceptible to detecting false murmurs, while the simplicity method was not in this evaluation. Simplicity-based segmentation therefore demonstrates superior performance to wavelet-based segmentation: both are equally likely to detect true murmurs, but only the simplicity method produced no false murmur detections.
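The four metrics reduce to simple rate computations once events are counted at the right granularity: heart cycles for the clinical metrics, sound segments for the technical ones. A sketch with hypothetical counts, scaled so the rates mirror the wavelet method's reported figures:

```python
def rate(count: int, total: int) -> float:
    """Event rate as a percentage of the relevant population."""
    return 100.0 * count / total if total else 0.0

# Hypothetical counts (not the thesis data); the denominators differ in
# kind: clinical metrics count heart cycles, technical ones count segments.
fndr = rate(4, 100)    # false negative detection rate, per heart cycle
fpdr = rate(22, 100)   # false positive detection rate, per heart cycle
tmdr = rate(83, 100)   # true murmur detection rate, per sound segment
fmdr = rate(13, 100)   # false murmur detection rate, per sound segment
```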