298 research outputs found

    Automatic analysis and classification of cardiac acoustic signals for long term monitoring

    Get PDF
    Objective: Cardiovascular diseases are the leading cause of death worldwide, resulting in over 17.9 million deaths each year. Most of these diseases are preventable and treatable, and their outcomes are significantly better with early-stage diagnosis and proper disease management. Among the approaches available to assist early-stage diagnosis and management of cardiac conditions, automatic analysis of auscultatory recordings is one of the most promising, since it is particularly suitable for ambulatory/wearable monitoring. Proper investigation of abnormalities present in cardiac acoustic signals can therefore provide vital clinical information to assist long-term monitoring. Cardiac acoustic signals, however, are very susceptible to noise and artifacts, and their characteristics vary greatly with the recording conditions, which makes the analysis challenging. There are additional challenges in the steps used for automatic analysis and classification of cardiac acoustic signals: broadly, segmentation, feature extraction, and subsequent classification of recorded signals using selected features. This thesis presents approaches using novel features, with the aim of assisting automatic early-stage detection of cardiovascular diseases with improved performance, using cardiac acoustic signals collected in real-world conditions.

    Methods: Cardiac auscultatory recordings were studied to identify potential features to help classify recordings from subjects with and without cardiac diseases. The diseases considered in this study are valvular heart diseases due to stenosis and regurgitation, atrial fibrillation, and splitting of fundamental heart sounds leading to additional lub/dub sounds in the systole or diastole interval of a cardiac cycle. The localisation of cardiac sounds of interest was performed using adaptive wavelet-based filtering in combination with the Shannon energy envelope and prior information about fundamental heart sounds. This is a prerequisite step for feature extraction and subsequent classification of recordings, leading to a more precise diagnosis. Localised segments of S1 and S2 sounds, and artifacts, were used to extract a set of perceptual and statistical features using the wavelet transform, homomorphic filtering, the Hilbert transform, and mel-scale filtering, which were then used to train an ensemble classifier to interpret S1 and S2 sounds. Once sound peaks of interest were identified, features extracted from these peaks, together with the features used for the identification of S1 and S2 sounds, were used to develop an algorithm to classify recorded signals. Overall, 99 features were extracted and statistically analysed using neighborhood component analysis (NCA) to identify the features with the greatest ability to classify recordings. Selected features were then used to train an ensemble classifier to classify abnormal recordings, and hyperparameters were optimized to evaluate the performance of the trained classifier. Thus, a machine learning-based approach for the automatic identification and classification of S1 and S2, and of normal and abnormal recordings, in real-world noisy recordings using a novel feature set is presented.
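As a rough illustration of the Shannon-energy localisation step named above, the sketch below computes an envelope and picks candidate S1/S2 peaks; the filter band, smoothing window, and peak-picking thresholds are illustrative assumptions, not the thesis' actual settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def shannon_energy_envelope(pcg, fs, band=(25.0, 400.0), win_s=0.02):
    # Band-pass to the typical frequency range of fundamental heart sounds.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    x = filtfilt(b, a, pcg)
    x = x / (np.max(np.abs(x)) + 1e-12)        # normalise to [-1, 1]
    se = -x**2 * np.log(x**2 + 1e-12)          # Shannon energy per sample
    w = int(win_s * fs)
    env = np.convolve(se, np.ones(w) / w, mode="same")  # smoothed envelope
    return (env - env.mean()) / (env.std() + 1e-12)

fs = 2000
pcg = np.random.randn(10 * fs)                 # placeholder PCG recording
env = shannon_energy_envelope(pcg, fs)
# Candidate S1/S2 locations: prominent envelope peaks at least 0.2 s apart.
peaks, _ = find_peaks(env, height=1.0, distance=int(0.2 * fs))
```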
The validity of the proposed algorithms was tested using acoustic signals recorded in real-world, non-controlled environments at four auscultation sites (aortic valve, tricuspid valve, mitral valve, and pulmonary valve) from subjects with and without cardiac diseases, together with recordings from three large public databases. The performance of the methodology was evaluated in terms of classification accuracy (CA), sensitivity (SE), precision (P+), and F1 score.

Results: This thesis proposes four different algorithms that use cardiac acoustic signals to automatically classify: fundamental heart sounds (S1 and S2); recordings with normal fundamental sounds versus abnormal additional lub/dub sounds; normal versus abnormal recordings; and recordings with heart valve disorders, namely mitral stenosis (MS), mitral regurgitation (MR), mitral valve prolapse (MVP), aortic stenosis (AS), and murmurs. The results obtained from these algorithms were as follows:

• The algorithm to classify S1 and S2 sounds achieved an average SE of 91.59% and 89.78%, and an F1 score of 90.65% and 89.42%, in classifying S1 and S2, respectively. A total of 87 features were extracted and statistically studied to identify the top 14 features with the best ability to classify S1, S2, and artifacts. The analysis showed that the most relevant features were those extracted using the Maximal Overlap Discrete Wavelet Transform (MODWT) and the Hilbert transform.

• The algorithm to classify normal fundamental heart sounds versus abnormal additional lub/dub sounds in the systole or diastole intervals of a cardiac cycle achieved an average SE of 89.15%, P+ of 89.71%, F1 of 89.41%, and CA of 95.11% on the test dataset from the PASCAL database. The top 10 features that achieved the highest weights in classifying these recordings were also identified.

• Normal versus abnormal classification of recordings using the proposed algorithm achieved a mean CA of 94.172% and SE of 92.38% in classifying recordings from the different databases. Among the top 10 acoustic features identified, the deterministic energy of the sound peaks of interest and the instantaneous frequency extracted using the Hilbert-Huang transform achieved the highest weights.

• The machine learning-based approach proposed to classify recordings of heart valve disorders (AS, MS, MR, and MVP) achieved an average CA of 98.26% and SE of 95.83%. A total of 99 acoustic features were extracted, and their ability to differentiate these abnormalities was examined using weights obtained from neighborhood component analysis (NCA). The top 10 features with the greatest ability to classify these abnormalities across recordings from the different databases were also identified.

The achieved results demonstrate the ability of the algorithms to automatically identify and classify cardiac sounds. This work provides the basis for measuring many clinically useful attributes of cardiac acoustic signals and can potentially help in monitoring overall cardiac health over longer durations. The work presented in this thesis is the first of its kind to validate the results using both normal and pathological cardiac acoustic signals recorded for a continuous duration of 5 minutes at four different auscultation sites in non-controlled, real-world conditions.
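For readers who want to experiment with the NCA-based feature weighting described above, here is a hedged sketch: scikit-learn's NeighborhoodComponentsAnalysis learns a linear transform rather than per-feature weights, so the column norms of the learned transform serve as a rough proxy for weights, and a random forest stands in for the thesis' ensemble classifier. All data below are placeholders.

```python
import numpy as np
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.ensemble import RandomForestClassifier

X = np.random.randn(500, 99)          # placeholder: 99 features per recording
y = np.random.randint(0, 2, 500)      # placeholder normal/abnormal labels

# Fit NCA and rank features by the column norms of the learned transform.
nca = NeighborhoodComponentsAnalysis(random_state=0).fit(X, y)
weights = np.linalg.norm(nca.components_, axis=0)
top10 = np.argsort(weights)[::-1][:10]

# Train an ensemble classifier on the selected features only.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[:, top10], y)
```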

    Machine learning for the classification of atrial fibrillation utilizing seismo- and gyrocardiogram

    Get PDF
    A significant number of deaths worldwide are attributed to cardiovascular diseases (CVDs), accounting for approximately one-third of total mortality in 2019, with an estimated 18 million deaths. The prevalence of CVDs has risen due to the increasing elderly population and improved life expectancy. Consequently, there is an escalating demand for higher-quality healthcare services. Technological advancements, particularly the use of wearable devices for remote patient monitoring, have significantly improved the diagnosis, treatment, and monitoring of CVDs. Atrial fibrillation (AFib), an arrhythmia associated with severe complications and potential fatality, necessitates prolonged monitoring of heart activity for accurate diagnosis and severity assessment. Remote heart monitoring, facilitated by ECG Holter monitors, has become a popular approach in many cardiology clinics. However, in the absence of an ECG Holter monitor, other remote and widely available technologies can prove valuable. The seismo- and gyrocardiogram signals (SCG and GCG) provide information about the mechanical function of the heart, enabling AFib monitoring within or outside clinical settings. SCG and GCG signals can be conveniently recorded using smartphones, which are affordable and ubiquitous in most countries.

    This doctoral thesis investigates the utilization of signal processing, feature engineering, and supervised machine learning techniques to classify AFib using short SCG and GCG measurements captured by smartphones. Multiple machine learning pipelines are examined, each designed to address specific objectives. The first objective (O1) involves evaluating the performance of supervised machine learning classifiers in detecting AFib using measurements conducted by physicians in a clinical setting. The second objective (O2) is similar to O1, but this time utilizing measurements taken by patients themselves. The third objective (O3) explores the performance of machine learning classifiers in detecting acute decompensated heart failure (ADHF) using the same measurements as O1, which were primarily collected for AFib detection. Lastly, the fourth objective (O4) delves into the application of deep neural networks for automated feature learning and classification of AFib.

    These investigations have shown that AFib detection is achievable by capturing a joint SCG and GCG recording and applying machine learning methods, yielding satisfactory performance. The approaches examined focused on (1) feature engineering coupled with supervised classification, and (2) automated end-to-end feature learning and classification using deep convolutional-recurrent neural networks. The key finding from these studies is that SCG and GCG signals reliably capture the heart’s beating pattern, irrespective of the operator. This allows for the detection of irregular rhythm patterns, making this technology suitable for monitoring AFib episodes outside of hospital settings as a remote monitoring solution for individuals suspected of having AFib. This thesis demonstrates the potential of smartphone-based AFib detection using built-in inertial sensors. Notably, a short recording duration of 10 to 60 seconds yields clinically relevant results. However, it is important to recognize that the results for ADHF did not match state-of-the-art achievements, due to the limited availability of ADHF data combined with arrhythmias, as well as the lack of a cardiopulmonary exercise test in the measurement setting.
Finally, it is important to recognize that SCG and GCG are not intended to replace clinical ECG measurements or long-term ambulatory Holter ECG recordings. Instead, within the scope of our current understanding, they should be regarded as complementary and supplementary technologies for cardiovascular monitoring.
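A minimal sketch of the first approach named above, feature engineering plus supervised classification, on a short joint SCG/GCG recording. The two features shown (autocorrelation peak strength and spectral entropy, both sensitive to rhythm irregularity) are illustrative choices, not the studies' actual feature set; the sampling rate and window length are assumptions.

```python
import numpy as np
from scipy.signal import welch

def rhythm_features(sig, fs):
    sig = sig - sig.mean()
    # Normalised autocorrelation; a strong peak at a physiological beat
    # period indicates a regular rhythm, which AFib tends to destroy.
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    ac /= ac[0] + 1e-12
    periodicity = ac[int(0.3 * fs):int(2.0 * fs)].max()
    # Spectral entropy: irregular rhythms spread power across frequencies.
    _, pxx = welch(sig, fs=fs, nperseg=min(len(sig), 4 * fs))
    p = pxx / (pxx.sum() + 1e-12)
    return [periodicity, -np.sum(p * np.log(p + 1e-12))]

fs = 200                                  # assumed smartphone IMU rate
scg = np.random.randn(30 * fs)            # placeholder accelerometer axis
gcg = np.random.randn(30 * fs)            # placeholder gyroscope axis
features = rhythm_features(scg, fs) + rhythm_features(gcg, fs)
# `features` would then be fed to any supervised classifier trained on labels.
```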

    Intelligent Biosignal Processing in Wearable and Implantable Sensors

    Get PDF
    This reprint provides a collection of papers illustrating the state of the art in smart processing of data coming from wearable, implantable, or portable sensors. Each paper presents the design, databases used, methodological background, obtained results, and their interpretation for biomedical applications. Revealing examples are brain–machine interfaces for medical rehabilitation, the evaluation of sympathetic nerve activity, a novel automated diagnostic tool based on ECG data to diagnose COVID-19, machine learning-based hypertension risk assessment by means of photoplethysmography and electrocardiography signals, Parkinsonian gait assessment using machine learning tools, thorough analysis of compressive sensing of ECG signals, development of a nanotechnology application for decoding vagus-nerve activity, detection of liver dysfunction using a wearable electronic nose system, prosthetic hand control using surface electromyography, epileptic seizure detection using a CNN, and premature ventricular contraction detection using deep metric learning. Thus, this reprint presents significant clinical applications as well as valuable new research issues, providing current illustrations of this new field of research by addressing the promises, challenges, and hurdles associated with the synergy of biosignal processing and AI through 16 different pertinent studies. Covering a wide range of research and application areas, this book is an excellent resource for researchers, physicians, academics, and PhD or master's students working on (bio)signal and image processing, AI, biomaterials, biomechanics, and biotechnology with applications in medicine.

    ECG classification using deep CNN improved by wavelet transform

    Full text link
    Atrial fibrillation is the most common persistent form of arrhythmia. A method based on the wavelet transform combined with a deep convolutional neural network is applied for the automatic classification of electrocardiograms. Since the ECG signal is easily corrupted by interference, the signal is decomposed into 9 sub-signals with different frequency scales by a wavelet function, and wavelet reconstruction is then carried out after segmented filtering to eliminate the influence of noise. A 24-layer convolutional neural network is used to extract hierarchical features with convolution kernels of different sizes, and finally a softmax classifier is used to classify them. This paper applies the method to the ECG data set provided by the 2017 PhysioNet/CinC Challenge. After cross validation, the method obtains 87.1% accuracy with an F1 score of 86.46%. Compared with existing classification methods, the proposed algorithm has higher accuracy and generalization ability for ECG signal classification.
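The wavelet step can be sketched with PyWavelets: an 8-level discrete wavelet decomposition yields 9 sub-signals (one approximation plus eight details), selected bands are suppressed, and the signal is reconstructed. The wavelet choice and the bands dropped here are assumptions, not the paper's settings, and the CNN stage is omitted.

```python
import numpy as np
import pywt

fs = 300                                   # CinC 2017 recordings are 300 Hz
ecg = np.random.randn(30 * fs)             # placeholder single-lead ECG

# 8-level DWT: coeffs = [cA8, cD8, cD7, ..., cD1], i.e. 9 sub-signals.
coeffs = pywt.wavedec(ecg, "db6", level=8)
coeffs[-1] = np.zeros_like(coeffs[-1])     # suppress finest detail (HF noise)
coeffs[0] = np.zeros_like(coeffs[0])       # suppress approximation (drift)
denoised = pywt.waverec(coeffs, "db6")     # reconstruct the filtered signal
```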

    Nonlinear Stochastic Modeling and Analysis of Cardiovascular System Dynamics - Diagnostic and Prognostic Applications

    Get PDF
    The purpose of this investigation is to develop monitoring, diagnostic, and prognostic schemes for cardiovascular diseases by studying the nonlinear stochastic dynamics underlying the complex heart system. Nonlinear stochastic analysis combined with wavelet representations can extract effective cardiovascular features that are more sensitive to pathological dynamics than to extraneous noise, whereas conventional statistical and linear systemic approaches have limitations in capturing signal variations resulting from changes in cardiovascular system states. The research methodology includes signal representation using optimal wavelet function design, feature extraction using nonlinear recurrence analysis, and local recurrence modeling for state prediction.
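As an illustration of the recurrence-analysis ingredient, the sketch below delay-embeds a beat-interval series, thresholds the pairwise-distance matrix into a recurrence matrix, and reports the recurrence rate; the embedding dimension, delay, and threshold are assumed values, not the investigation's settings.

```python
import numpy as np

def recurrence_rate(x, dim=3, tau=1, eps=0.1):
    # Delay embedding: rows are points in the reconstructed state space.
    n = len(x) - (dim - 1) * tau
    emb = np.stack([x[i * tau: i * tau + n] for i in range(dim)], axis=1)
    # Recurrence matrix: pairs of states closer than eps * max distance.
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
    return (d < eps * d.max()).mean()

rr = 0.8 + 0.01 * np.random.randn(500).cumsum()  # placeholder RR intervals (s)
print(recurrence_rate(rr))
```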

    Assessment of Dual-Tree Complex Wavelet Transform to improve SNR in collaboration with Neuro-Fuzzy System for Heart Sound Identification

    Get PDF
    This research paper proposes a novel denoising method to improve the outcome of heart sound (HS)-based heart-condition identification by applying the dual-tree complex wavelet transform (DTCWT) together with an adaptive neuro-fuzzy inference system (ANFIS) classifier. The method consists of three steps: first, preprocessing to eliminate 50 Hz noise; second, applying four successive levels of DTCWT to denoise and reconstruct the time-domain HS signal; and third, evaluating ANFIS on a total of 2735 HS recordings from an international dataset (PhysioNet Challenge 2016). The results show that the signal-to-noise ratio (SNR) with DTCWT was significantly improved (p < 0.001) compared to the original HS recordings; quantitatively, the SNR in decibels (dB) increased from 11% up to many-fold after DTCWT, representing a significant improvement in denoising HS. In addition, ANFIS, using six time-domain features, achieved 55–86% precision, 51–98% recall, 53–86% F-score, and 54–86% MAcc compared to other attempts on the same dataset. Therefore, DTCWT is a successful technique for removing noise from biosignals such as HS recordings, and the adaptive property of ANFIS exhibited capability in classifying HS recordings.
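A hedged sketch of the first two steps, assuming the open-source Python `dtcwt` package for the transform; the notch filter's Q factor and the soft-threshold rule are illustrative assumptions, and the ANFIS stage is not shown.

```python
import numpy as np
import dtcwt
from scipy.signal import iirnotch, filtfilt

fs = 2000
hs = np.random.randn(10 * fs)              # placeholder heart-sound recording

# Step 1: notch out 50 Hz mains interference (Q factor assumed).
b, a = iirnotch(50.0, 30.0, fs=fs)
hs = filtfilt(b, a, hs)

# Step 2: 4-level DTCWT, soft-threshold the complex detail coefficients,
# then reconstruct the time-domain signal.
t = dtcwt.Transform1d()
pyr = t.forward(hs, nlevels=4)
highs = []
for h in pyr.highpasses:
    mag = np.abs(h)
    thr = 0.5 * np.median(mag)             # assumed threshold rule
    highs.append(h * np.maximum(mag - thr, 0.0) / (mag + 1e-12))
denoised = t.inverse(dtcwt.Pyramid(pyr.lowpass, tuple(highs)))
```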

    FusionSense: Emotion Classification using Feature Fusion of Multimodal Data and Deep learning in a Brain-inspired Spiking Neural Network

    Get PDF
    Using multimodal signals to solve the problem of emotion recognition is one of the emerging trends in affective computing. Several studies have utilized state-of-the-art deep learning methods and combined physiological signals, such as the electrocardiogram (ECG), electroencephalogram (EEG), and skin temperature, along with facial expressions, voice, and posture, to name a few, in order to classify emotions. Spiking neural networks (SNNs) represent the third generation of neural networks and employ biologically plausible models of neurons. SNNs have been shown to handle spatio-temporal data, which is essentially the nature of the data encountered in emotion recognition, in an efficient manner. In this work, for the first time, we propose the application of SNNs to solve the emotion recognition problem with a multimodal dataset. Specifically, we use the NeuCube framework, which employs an evolving SNN architecture, to classify emotional valence, and we evaluate the performance of our approach on the MAHNOB-HCI dataset. The multimodal data used in our work consist of facial expressions along with physiological signals such as ECG, skin temperature, skin conductance, respiration signal, mouth length, and pupil size. We perform classification under the Leave-One-Subject-Out (LOSO) cross-validation mode. Our results show that the proposed approach achieves an accuracy of 73.15% for classifying binary valence when applying feature-level fusion, which is comparable to other deep learning methods. We achieve this accuracy even without using EEG, which other deep learning methods have relied on to achieve this level of accuracy. In conclusion, we have demonstrated that SNNs can be successfully used for solving the emotion recognition problem with multimodal data, and we provide directions for future research utilizing SNNs for affective computing. In addition to its good accuracy, the SNN recognition system is incrementally trainable on new data in an adaptive way and requires only one-pass training, which makes it suitable for practical and online applications. These features are not manifested in other methods for this problem.
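The feature-level fusion and LOSO evaluation described above can be sketched as follows; a generic scikit-learn classifier stands in for the NeuCube SNN, whose API is not reproduced here, and all arrays, feature counts, and subject numbers are placeholders.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

n = 240                                     # placeholder: 24 subjects x 10 trials
ecg_feats = np.random.randn(n, 8)           # placeholder physiological features
face_feats = np.random.randn(n, 12)         # placeholder facial features
X = np.hstack([ecg_feats, face_feats])      # feature-level fusion
y = np.random.randint(0, 2, n)              # binary valence labels
subjects = np.repeat(np.arange(24), 10)     # subject id per trial

clf = make_pipeline(StandardScaler(), SVC())
scores = cross_val_score(clf, X, y, groups=subjects, cv=LeaveOneGroupOut())
print(scores.mean())                        # LOSO accuracy
```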

    Artifact Removal Methods in EEG Recordings: A Review

    Get PDF
    To obtain a correct analysis of electroencephalogram (EEG) signals, non-physiological and physiological artifacts should be removed from them. This study aims to give an overview of the existing methodology for removing physiological artifacts, e.g., ocular, cardiac, and muscle artifacts. The datasets, simulation platforms, and performance measures of artifact removal methods in previous related research are summarized. The advantages and disadvantages of each technique are discussed, including the regression method, filtering methods, blind source separation (BSS), the wavelet transform (WT), empirical mode decomposition (EMD), singular spectrum analysis (SSA), and independent vector analysis (IVA). The applications of hybrid approaches are also presented, including the discrete wavelet transform with adaptive filtering (DWT-AFM), DWT-BSS, EMD-BSS, singular spectrum analysis with an adaptive noise canceler (SSA-ANC), SSA-BSS, and EMD-IVA. Finally, a comparative analysis of these existing methods is provided based on their performance and merits. The results show that hybrid methods can remove artifacts more effectively than individual methods.
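As a concrete example of the BSS family reviewed above, the sketch below removes an ocular component with FastICA by zeroing the unmixed component most correlated with a reference EOG channel; the correlation criterion, channel counts, and data are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

n_ch, n_s = 8, 5000
eeg = np.random.randn(n_s, n_ch)           # placeholder EEG (samples x channels)
eog = np.random.randn(n_s)                 # placeholder EOG reference channel

ica = FastICA(n_components=n_ch, random_state=0)
sources = ica.fit_transform(eeg)           # unmixed source estimates
corr = [abs(np.corrcoef(sources[:, i], eog)[0, 1]) for i in range(n_ch)]
sources[:, int(np.argmax(corr))] = 0.0     # zero the ocular component
clean = ica.inverse_transform(sources)     # re-mix back to channel space
```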