142 research outputs found

    Automatic analysis and classification of cardiac acoustic signals for long term monitoring

    Objective: Cardiovascular diseases are the leading cause of death worldwide, resulting in over 17.9 million deaths each year. Most of these diseases are preventable and treatable, and their progression and outcomes are significantly better with early-stage diagnosis and proper disease management. Among the approaches available to assist early-stage diagnosis and management of cardiac conditions, automatic analysis of auscultatory recordings is one of the most promising, since it is particularly suitable for ambulatory/wearable monitoring. Proper investigation of abnormalities present in cardiac acoustic signals can therefore provide vital clinical information to support long-term monitoring. Cardiac acoustic signals, however, are very susceptible to noise and artifacts, and their characteristics vary considerably with the recording conditions, which makes the analysis challenging. There are further challenges in the steps used for automatic analysis and classification of cardiac acoustic signals: broadly, the segmentation, feature extraction, and subsequent classification of recorded signals using selected features. This thesis presents approaches using novel features with the aim of assisting the automatic early-stage detection of cardiovascular diseases with improved performance, using cardiac acoustic signals collected in real-world conditions. Methods: Cardiac auscultatory recordings were studied to identify potential features to help classify recordings from subjects with and without cardiac diseases. The diseases considered in this study for the identification of symptoms and characteristics are valvular heart diseases due to stenosis and regurgitation, atrial fibrillation, and splitting of the fundamental heart sounds leading to additional lub/dub sounds in the systole or diastole interval of a cardiac cycle.
The localisation of cardiac sounds of interest was performed using adaptive wavelet-based filtering in combination with the Shannon energy envelope and prior information about the fundamental heart sounds. This is a prerequisite step for the feature extraction and subsequent classification of recordings, leading to a more precise diagnosis. Localised segments of S1 and S2 sounds, and artifacts, were used to extract a set of perceptual and statistical features using the wavelet transform, homomorphic filtering, the Hilbert transform, and mel-scale filtering, which were then fed to train an ensemble classifier to interpret S1 and S2 sounds. Once sound peaks of interest were identified, features extracted from these peaks, together with the features used for the identification of S1 and S2 sounds, were used to develop an algorithm to classify the recorded signals. Overall, 99 features were extracted and statistically analysed using neighborhood component analysis (NCA) to identify the features with the greatest ability to classify recordings. Selected features were then fed to train an ensemble classifier to classify abnormal recordings, and hyperparameters were optimized to evaluate the performance of the trained classifier. Thus, a machine learning-based approach is presented for the automatic identification and classification of S1 and S2, and of normal and abnormal recordings, in real-world noisy recordings using a novel feature set. The validity of the proposed algorithm was tested using acoustic signals recorded in real-world, non-controlled environments at four auscultation sites (aortic valve, tricuspid valve, mitral valve, and pulmonary valve), from subjects with and without cardiac diseases, together with recordings from three large public databases. The performance of the methodology was evaluated in terms of classification accuracy (CA), sensitivity (SE), precision (P+), and F1 score.
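The Shannon energy envelope used here for localising S1 and S2 can be sketched as follows. This is a generic illustration of the technique, not the thesis's exact implementation: the frame length, hop size, and normalisation choices are assumptions.

```python
import numpy as np

def shannon_energy_envelope(x, frame_len=64, hop=32):
    """Frame-averaged Shannon energy envelope of a heart-sound signal.

    Sample-wise Shannon energy E = -x^2 * log(x^2) emphasises
    medium-intensity components (the S1/S2 lobes) over low-level noise
    and over sharp full-scale spikes, which makes peak picking easier.
    """
    x = x / (np.max(np.abs(x)) + 1e-12)        # normalise to [-1, 1]
    eps = 1e-12
    se = -(x ** 2) * np.log(x ** 2 + eps)      # sample-wise Shannon energy
    env = []
    for start in range(0, len(se) - frame_len + 1, hop):
        env.append(se[start:start + frame_len].mean())
    env = np.asarray(env)
    # zero-mean, unit-variance envelope, as is common before thresholding
    return (env - env.mean()) / (env.std() + 1e-12)
```

Frames whose envelope value exceeds a threshold would then be taken as candidate S1/S2 locations and refined using the prior timing information mentioned above.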
Results: This thesis proposes four different algorithms to automatically classify, from cardiac acoustic signals: the fundamental heart sounds S1 and S2; recordings with normal fundamental sounds versus abnormal additional lub/dub sounds; normal versus abnormal recordings; and recordings with heart valve disorders, namely mitral stenosis (MS), mitral regurgitation (MR), mitral valve prolapse (MVP), aortic stenosis (AS), and murmurs. The results obtained from these algorithms were as follows:
• The algorithm to classify S1 and S2 sounds achieved an average SE of 91.59% and 89.78%, and an F1 score of 90.65% and 89.42%, for S1 and S2 respectively. 87 features were extracted and statistically studied to identify the top 14 features with the best ability to classify S1, S2, and artifacts. The analysis showed that the most relevant features were those extracted using the Maximal Overlap Discrete Wavelet Transform (MODWT) and the Hilbert transform.
• The algorithm to classify normal fundamental heart sounds versus abnormal additional lub/dub sounds in the systole or diastole intervals of a cardiac cycle achieved an average SE of 89.15%, P+ of 89.71%, F1 of 89.41%, and CA of 95.11% on the test dataset from the PASCAL database. The top 10 features carrying the highest weights for classifying these recordings were also identified.
• Normal/abnormal classification of recordings using the proposed algorithm achieved a mean CA of 94.172% and SE of 92.38% across recordings from the different databases. Among the top 10 acoustic features identified, the deterministic energy of the sound peaks of interest and the instantaneous frequency extracted using the Hilbert–Huang transform achieved the highest weights.
• The machine learning-based approach proposed to classify recordings of heart valve disorders (AS, MS, MR, and MVP) achieved an average CA of 98.26% and SE of 95.83%.
99 acoustic features were extracted, and their ability to differentiate these abnormalities was examined using weights obtained from neighborhood component analysis (NCA). The top 10 features with the greatest ability to classify these abnormalities across recordings from the different databases were also identified. The achieved results demonstrate the ability of the algorithms to automatically identify and classify cardiac sounds. This work provides the basis for measuring many clinically useful attributes of cardiac acoustic signals and can potentially help in monitoring overall cardiac health over long durations. The work presented in this thesis is the first of its kind to validate its results using both normal and pathological cardiac acoustic signals, recorded continuously for 5 minutes at four different auscultation sites in non-controlled, real-world conditions.

    NRC-Net: Automated noise robust cardio net for detecting valvular cardiac diseases using optimum transformation method with heart sound signals

    Cardiovascular diseases (CVDs) can be treated effectively when detected early, reducing mortality rates significantly. Traditionally, phonocardiogram (PCG) signals have been utilized for detecting cardiovascular disease due to their cost-effectiveness and simplicity. Nevertheless, various environmental and physiological noises frequently affect PCG signals, compromising their essential distinctive characteristics. The prevalence of this issue in overcrowded and resource-constrained hospitals can compromise the accuracy of medical diagnoses. Therefore, this study aims to discover the optimal transformation method for detecting CVDs from noisy heart sound signals and proposes a noise-robust network to improve CVD classification performance. To identify the optimal transformation method for noisy heart sound data, mel-frequency cepstral coefficients (MFCCs), the short-time Fourier transform (STFT), the constant-Q nonstationary Gabor transform (CQT), and the continuous wavelet transform (CWT) were used with VGG16. Furthermore, we propose a novel convolutional recurrent neural network (CRNN) architecture called noise robust cardio net (NRC-Net), a lightweight model to classify mitral regurgitation, aortic stenosis, mitral stenosis, mitral valve prolapse, and normal heart sounds using PCG signals contaminated with respiratory and random noises. An attention block is included to extract important temporal and spatial features from the noise-corrupted heart sound. The results of this study indicate that CWT is the optimal transformation method for noisy heart sound signals. When evaluated on the GitHub heart sound dataset, CWT demonstrates an accuracy of 95.69% with VGG16, which is 1.95% better than the second-best CQT transformation technique. Moreover, our proposed NRC-Net with CWT obtained an accuracy of 97.4%, which is 1.71% higher than the VGG16 result.
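All four transformation methods compared above turn a 1-D PCG signal into a 2-D time-frequency image for the CNN. As a minimal illustration of that idea, here is a log-magnitude STFT (one of the four candidates); the window, FFT size, and hop length are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def stft_image(x, n_fft=256, hop=64):
    """Log-magnitude STFT: converts a 1-D signal into the 2-D
    time-frequency image that a CNN such as VGG16 takes as input."""
    win = np.hanning(n_fft)
    frames = np.stack([x[i:i + n_fft] * win
                       for i in range(0, len(x) - n_fft + 1, hop)])
    mag = np.abs(np.fft.rfft(frames, axis=1))   # (n_frames, n_fft//2 + 1)
    return np.log1p(mag).T                      # (freq, time), image-like
```

The CWT and CQT produce analogous images but with frequency resolution that varies across the spectrum, which is one plausible reason they cope better with the nonstationary character of heart sounds.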

    Real-Time Virtual Pathology Using Signal Analysis and Synthesis

    This dissertation discusses modeling and simulation (M&S) research in the area of real-time virtual pathology using signal analysis and synthesis. The goal of this research is to contribute to the M&S area of generating simulated outputs of medical diagnostic tools to supplement the training of medical students with human patient role players. To become clinically competent physicians, medical students must become skilled in doctor-patient communication, eliciting the patient's history, and performing the physical exam. The use of Standardized Patients (SPs), individuals trained to realistically portray patients, has become common practice. SPs provide the medical student with a means to learn in a safe, realistic setting, while providing a way to reliably test students' clinical skills. The range of clinical problems an SP can portray, however, is limited. SPs are usually healthy individuals with few or no abnormal physical findings. Some SPs have been trained to simulate physical abnormalities, such as breathing through one lung and voluntarily increasing blood pressure, but there are many abnormalities that SPs cannot simulate. The research encompassed developing methods and algorithms to be incorporated into the previous work of McKenzie et al. [1]–[3] for simulating abnormal heart sounds in a Standardized Patient (SP), which may be utilized in a modified electronic stethoscope. The methods and algorithms are specific to the real-time modeling of human body sounds by modifying the sounds from a real person with various abnormalities. The main focus of the research was applying methods from tempo and beat analysis of acoustic musical signals to heart signal analysis, specifically for detecting the heart rate and heartbeat locations. In addition, the research included the investigation and selection of an adaptive noise cancellation filtering method to separate heart sounds from lung sounds.
A model was developed to take a combined heart/lung sound signal as input, efficiently and accurately separate the heart and lung sound signals, characterize the heart sound signal when appropriate, replace the heart or lung sound signal with a reference pathology signal containing an abnormality such as a crackle or murmur, and then recombine the original heart or lung sound signal with the modified pathology signal for presentation to the student. After development was complete, the model was validated. The validation included both a qualitative and a quantitative assessment: the qualitative assessment drew on the visual and auditory analysis of subject matter experts (SMEs), and the quantitative assessment utilized simulated data to verify key portions of the model.
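The adaptive noise-cancellation step described above can be sketched with a plain LMS filter. This is a generic textbook formulation under assumed parameters (filter length, step size), not the specific method the dissertation selected:

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=16, mu=0.01):
    """LMS adaptive noise canceller.

    primary   -- mixture signal (e.g. heart + lung sounds)
    reference -- signal correlated with the unwanted (lung) component
    Returns the error signal e, which converges toward the heart sound,
    since the FIR filter learns to predict only the reference-correlated
    part of the primary input.
    """
    w = np.zeros(n_taps)
    e = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]  # most recent reference samples
        y = w @ x                          # estimate of noise in primary
        e[n] = primary[n] - y              # residual ~= heart sound
        w += 2 * mu * e[n] * x             # LMS weight update
    return e
```

In practice the reference would come from a second sensor placed to pick up predominantly lung sounds; the better the reference isolates the unwanted component, the cleaner the separation.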

    Synthesis of normal and abnormal heart sounds using Generative Adversarial Networks

    This doctoral thesis presents several methods proposed for the analysis and synthesis of normal and abnormal heart sounds, making the following contributions to the state of the art: i) an algorithm based on the empirical wavelet transform (EWT) and the normalized average Shannon energy (NASE) was implemented to improve the automatic segmentation stage of heart sounds; ii) different feature extraction techniques were implemented for cardiac signals, using mel-frequency cepstral coefficients (MFCC), linear prediction coefficients (LPC), and power values; in addition, several machine learning models were tested for the automatic classification of normal and abnormal heart sounds; iii) a model based on Generative Adversarial Networks (GAN) was designed to generate normal synthetic heart sounds; furthermore, a denoising algorithm using EWT was implemented, reducing the number of epochs and the computational cost required by the GAN model; iv) finally, a model based on the GAN architecture is proposed that refines synthetic cardiac signals obtained from a mathematical model using features of real cardiac signals. This model has been named FeaturesGAN and does not require a large database to generate different types of heart sounds. Each of these contributions was validated with different objective methods and compared with published work in the state of the art, obtaining favorable results. Doctorate in Electrical and Electronic Engineering.

    Signal processing algorithms based on Non-negative Matrix Factorization applied to the separation, detection and classification of wheezes in single-channel respiratory audio signals

    Auscultation is the first clinical examination a physician performs to evaluate the condition of the respiratory system, because it is a non-invasive, low-cost, easy-to-perform and safe method for the patient. However, the diagnosis derived from auscultation remains subjective, conditioned by the ability, experience and training of each physician in listening to and interpreting respiratory audio signals. As a result, a high percentage of misdiagnoses occur, endangering the health of patients and increasing the costs borne by health centres. This thesis proposes new methods based on Non-negative Matrix Factorization applied to the separation, detection and classification of wheezing sounds, in order to provide a complementary source of information that helps to improve the reliability of the specialist's diagnosis. Tesis Univ. Jaén, Departamento de Ingeniería de Telecomunicación.
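The core factorization behind these methods decomposes a nonnegative magnitude spectrogram V into spectral bases W and activations H. A minimal sketch using Euclidean-distance NMF with the classic Lee–Seung multiplicative updates follows; the rank, iteration count, and initialisation are illustrative assumptions, and the thesis's actual cost functions and constraints may differ:

```python
import numpy as np

def nmf(V, k, n_iter=300, eps=1e-9):
    """Euclidean NMF via multiplicative updates: V ~= W @ H, with
    W >= 0 (spectral bases, e.g. wheeze vs. breath patterns) and
    H >= 0 (their activations over time)."""
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(n_iter):
        # Multiplicative updates preserve nonnegativity and
        # monotonically decrease ||V - WH||_F.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Separation then amounts to reconstructing only the components whose bases match wheeze-like spectra, masking the spectrogram accordingly.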

    Remote measurements of heart valve sounds for health assessment and biometric identification

    Heart failure will contribute to the death of one in three people who read this thesis; and one in three of those who don't. Although cardiologists can draw on electrocardiograms, chest X-rays, ultrasound imaging, MRI, Doppler techniques, angiography, and transesophageal echocardiography to diagnose a patient's heart condition, these diagnostic techniques require a cardiologist's visit, are expensive, and involve long examination times and long waiting lists. Furthermore, abnormal events may be sporadic, so constant monitoring would be needed to avoid fatalities. In this thesis we therefore propose a cost-effective device that can constantly monitor the heart, based on the principles of phonocardiography, an inexpensive method that records heart sounds. Manual auscultation is not widely used for diagnosis because it requires considerable training, relies on the hearing abilities of the clinician, and offers low specificity and sensitivity, since its results are qualitative and not reproducible. We instead propose a cheap, laser-based, contactless device that can constantly monitor patients' heart sounds with a better SNR than a digital stethoscope. We also propose Machine Learning (ML) aided software, trained on data acquired with our device, which can distinguish healthy from unhealthy heart sounds and can perform biometric authentication. This device might enable gadgets for remote monitoring of cardiovascular health in different settings.

    Assessment of Dual-Tree Complex Wavelet Transform to improve SNR in collaboration with Neuro-Fuzzy System for Heart Sound Identification

    The research paper proposes a novel denoising method to improve the outcome of heart-sound (HS)-based heart-condition identification by applying the dual-tree complex wavelet transform (DTCWT) together with the adaptive neuro-fuzzy inference system (ANFIS) classifier. The method consists of three steps: first, preprocessing to eliminate 50 Hz noise; second, applying four successive levels of DTCWT to denoise and reconstruct the time-domain HS signal; third, evaluating ANFIS on a total of 2735 HS recordings from an international dataset (PhysioNet Challenge 2016). The results show that the signal-to-noise ratio (SNR) with DTCWT was significantly improved (p < 0.001) compared to the original HS recordings, with increases ranging from 11% up to many decibels (dB), representing a significant improvement in denoising HS. In addition, ANFIS, using six time-domain features, achieved 55–86% precision, 51–98% recall, 53–86% F-score, and 54–86% MAcc compared to other attempts on the same dataset. Therefore, DTCWT is a successful technique for removing noise from biosignals such as HS recordings. The adaptive property of ANFIS exhibited capability in classifying HS recordings. Special Issue “Biomedical Signal Processing”, Section Bioelectronics. Bassam Al-Naami, Hossam Fraihat, Jamal Al-Nabulsi, Nasr Y. Gharaibeh, Paolo Visconti, Abdel-Razzak Al-Hinnawi.
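The SNR improvement being reported can be computed, in decibels, with the standard definition below. This is a generic formulation for context, not the paper's exact evaluation code:

```python
import numpy as np

def snr_db(clean, noisy):
    """SNR of 'noisy' relative to 'clean', in dB:
    10 * log10(P_signal / P_noise), where the noise is the
    difference between the noisy and clean signals."""
    noise = noisy - clean
    return 10 * np.log10(np.sum(clean ** 2) / (np.sum(noise ** 2) + 1e-20))
```

Computing this before and after denoising, and subtracting, gives the SNR gain in dB that a method such as DTCWT provides.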