124 research outputs found

    Automatic analysis and classification of cardiac acoustic signals for long term monitoring

    Objective: Cardiovascular diseases are the leading cause of death worldwide, resulting in over 17.9 million deaths each year. Most of these diseases are preventable and treatable, but their progression and outcomes are significantly more positive with early-stage diagnosis and proper disease management. Among the approaches available to assist with the task of early-stage diagnosis and management of cardiac conditions, automatic analysis of auscultatory recordings is one of the most promising, since it could be particularly suitable for ambulatory/wearable monitoring. Thus, proper investigation of abnormalities present in cardiac acoustic signals can provide vital clinical information to assist long-term monitoring. Cardiac acoustic signals, however, are very susceptible to noise and artifacts, and their characteristics vary widely with the recording conditions, which makes the analysis challenging. Additionally, there are challenges in the steps used for automatic analysis and classification of cardiac acoustic signals. Broadly, these steps are the segmentation, feature extraction and subsequent classification of recorded signals using selected features. This thesis presents approaches using novel features with the aim of assisting the automatic early-stage detection of cardiovascular diseases with improved performance, using cardiac acoustic signals collected in real-world conditions. Methods: Cardiac auscultatory recordings were studied to identify potential features to help in the classification of recordings from subjects with and without cardiac diseases. The diseases considered in this study for the identification of symptoms and characteristics are valvular heart diseases due to stenosis and regurgitation, atrial fibrillation, and splitting of fundamental heart sounds leading to additional lub/dub sounds in the systole or diastole interval of a cardiac cycle.
The localisation of cardiac sounds of interest was performed using adaptive wavelet-based filtering in combination with the Shannon energy envelope and prior information about the fundamental heart sounds. This is a prerequisite step for the feature extraction and subsequent classification of recordings, leading to a more precise diagnosis. Localised segments of S1 and S2 sounds, and artifacts, were used to extract a set of perceptual and statistical features using the wavelet transform, homomorphic filtering, the Hilbert transform and mel-scale filtering, which were then fed to train an ensemble classifier to interpret S1 and S2 sounds. Once sound peaks of interest were identified, features extracted from these peaks, together with the features used for the identification of S1 and S2 sounds, were used to develop an algorithm to classify recorded signals. Overall, 99 features were extracted and statistically analysed using neighborhood component analysis (NCA) to identify the features which showed the greatest ability in classifying recordings. Selected features were then fed to train an ensemble classifier to classify abnormal recordings, and hyperparameters were optimized to evaluate the performance of the trained classifier. Thus, a machine learning-based approach for the automatic identification and classification of S1 and S2, and of normal and abnormal recordings, in real-world noisy recordings using a novel feature set is presented. The validity of the proposed algorithm was tested using acoustic signals recorded in real-world, non-controlled environments at four auscultation sites (aortic valve, tricuspid valve, mitral valve, and pulmonary valve), from subjects with and without cardiac diseases, together with recordings from three large public databases. The performance of the methodology was evaluated in terms of classification accuracy (CA), sensitivity (SE), precision (P+), and F1 score.
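The Shannon energy envelope used for localising sound peaks can be illustrated with a minimal sketch. This is not the thesis's implementation (which also uses adaptive wavelet filtering and prior timing information); it only shows, on a synthetic signal with assumed burst positions, how the smoothed Shannon energy highlights medium-intensity lobes such as S1/S2 while suppressing low-level noise.

```python
import numpy as np

def shannon_energy_envelope(x, frame=20):
    """Smoothed Shannon energy of a normalised signal.

    The sample-wise Shannon energy -x^2 * log(x^2) emphasises
    medium-intensity components (such as S1/S2 lobes) while
    suppressing low-amplitude noise.
    """
    x = np.asarray(x, dtype=float)
    x = x / (np.max(np.abs(x)) + 1e-12)        # normalise to [-1, 1]
    e = -(x ** 2) * np.log(x ** 2 + 1e-12)     # sample-wise Shannon energy
    kernel = np.ones(frame) / frame            # moving-average smoothing
    return np.convolve(e, kernel, mode="same")

# Toy example: two sound bursts embedded in low-level noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
x = 0.05 * rng.standard_normal(t.size)
x[300:380] += np.sin(2 * np.pi * 60 * t[300:380])
x[1200:1280] += np.sin(2 * np.pi * 60 * t[1200:1280])
env = shannon_energy_envelope(x)
bursts = np.where(env > 0.5 * env.max())[0]    # crude localisation by threshold
```

In practice a threshold on this envelope, refined by the expected S1-S2 timing, yields the candidate peak locations passed on to feature extraction.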
    Results: This thesis proposes four different algorithms, using cardiac acoustic signals, to automatically classify: fundamental heart sounds (S1 and S2); recordings with normal fundamental sounds and with abnormal additional lub/dub sounds; normal and abnormal recordings; and recordings with heart valve disorders, namely mitral stenosis (MS), mitral regurgitation (MR), mitral valve prolapse (MVP), aortic stenosis (AS) and murmurs. The results obtained from these algorithms were as follows:
    • The algorithm to classify S1 and S2 sounds achieved an average SE of 91.59% and 89.78%, and F1 score of 90.65% and 89.42%, in classifying S1 and S2, respectively. 87 features were extracted and statistically studied to identify the top 14 features which showed the best capabilities in classifying S1, S2, and artifacts. The analysis showed that the most relevant features were those extracted using the Maximum Overlap Discrete Wavelet Transform (MODWT) and the Hilbert transform.
    • The algorithm to classify normal fundamental heart sounds and abnormal additional lub/dub sounds in the systole or diastole intervals of a cardiac cycle achieved an average SE of 89.15%, P+ of 89.71%, F1 of 89.41%, and CA of 95.11% using the test dataset from the PASCAL database. The top 10 features that achieved the highest weights in classifying these recordings were also identified.
    • Normal and abnormal classification of recordings using the proposed algorithm achieved a mean CA of 94.172% and SE of 92.38% in classifying recordings from the different databases. Among the top 10 acoustic features identified, the deterministic energy of the sound peaks of interest and the instantaneous frequency extracted using the Hilbert-Huang transform achieved the highest weights.
    • The machine learning-based approach proposed to classify recordings of heart valve disorders (AS, MS, MR, and MVP) achieved an average CA of 98.26% and SE of 95.83%.
99 acoustic features were extracted and their abilities to differentiate these abnormalities were examined using weights obtained from the neighborhood component analysis (NCA). The top 10 features which showed the greatest abilities in classifying these abnormalities using recordings from the different databases were also identified. The achieved results demonstrate the ability of the algorithms to automatically identify and classify cardiac sounds. This work provides the basis for measurements of many useful clinical attributes of cardiac acoustic signals and can potentially help in monitoring overall cardiac health over long durations. The work presented in this thesis is the first of its kind to validate the results using both normal and pathological cardiac acoustic signals, recorded for a long continuous duration of 5 minutes at four different auscultation sites in non-controlled real-world conditions.
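The four metrics reported above (SE, P+, F1, CA) all follow from a binary confusion matrix. As a quick reference, here is a minimal sketch of their standard definitions; the confusion counts in the example are hypothetical, chosen only for illustration, and are not taken from the thesis.

```python
def classification_metrics(tp, fp, tn, fn):
    """Performance metrics used in the evaluation above: sensitivity (SE),
    precision (P+), F1 score and classification accuracy (CA), all
    computed from one binary confusion matrix."""
    se = tp / (tp + fn)                    # sensitivity / recall
    pp = tp / (tp + fp)                    # precision (P+)
    f1 = 2 * pp * se / (pp + se)           # harmonic mean of P+ and SE
    ca = (tp + tn) / (tp + fp + tn + fn)   # classification accuracy
    return se, pp, f1, ca

# Hypothetical confusion counts, for illustration only.
se, pp, f1, ca = classification_metrics(tp=90, fp=10, tn=85, fn=15)
```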

    ECG analysis and classification using CSVM, MSVM and SIMCA classifiers

    Reliable ECG classification can potentially lead to better detection methods and increase accurate diagnosis of arrhythmia, thus improving quality of care. This thesis investigated the use of two novel classification algorithms, CSVM and SIMCA, and assessed their performance in classifying ECG beats. The project aimed to introduce a new way to interactively support patient care in and out of the hospital and to develop new classification algorithms for arrhythmia detection and diagnosis. Wave (P-QRS-T) detection was performed using the WFDB Software Package and multiresolution wavelets. Fourier coefficients and principal components (PCs) were selected as time-frequency features in the ECG signal; these provided the input to the classifiers in the form of DFT and PCA coefficients. ECG beat classification was performed using binary SVM, MSVM, CSVM, and SIMCA; these were subsequently used for simultaneously classifying either four or six types of cardiac conditions. Binary SVM classification with 100% accuracy was achieved when applied to feature-reduced ECG signals from well-established databases using PCA. The CSVM and MSVM algorithms were used to classify four ECG beat types: NORMAL, PVC, APC, and FUSION or PFUS; these were from the MIT-BIH arrhythmia database (precordial lead group and limb lead II). Different numbers of Fourier coefficients were considered in order to identify the optimal number of features to be presented to the classifier. SMO was used to compute hyperplane parameters and threshold values for both MSVM and CSVM during the classifier training phase. The best classification accuracy was achieved using fifty Fourier coefficients. With the new CSVM classifier framework, accuracies of 99%, 100%, 98%, and 99% were obtained using datasets from one, two, three, and four precordial leads, respectively.
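The Fourier feature extraction described above can be sketched in a few lines. This is an illustrative stand-in, not the thesis's code: the "beat" here is a synthetic sine rather than a segmented ECG beat, and only the coefficient-truncation idea (fifty coefficients per beat) is taken from the text.

```python
import numpy as np

def fourier_features(beat, n_coeffs=50):
    """Magnitudes of the first n_coeffs DFT coefficients of one beat.

    Truncating the spectrum yields a fixed-length, low-pass feature
    vector irrespective of the beat's sample count.
    """
    spectrum = np.fft.rfft(np.asarray(beat, dtype=float))
    return np.abs(spectrum[:n_coeffs])

# Stand-in for a segmented beat: one full period of a sine wave.
beat = np.sin(np.linspace(0, 2 * np.pi, 360, endpoint=False))
feats = fourier_features(beat, n_coeffs=50)
```

Feature vectors of this form would then be fed to the SVM-family classifiers for training and testing.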
In addition, using CSVM it was possible to successfully classify four types of ECG beat signals extracted from the limb lead simultaneously, with 97% accuracy, a significant improvement on the 83% accuracy achieved using the MSVM classification model. Further analysis was then made of the following four beat types: NORMAL, PVC, SVPB, and FUSION. These signals were obtained from the European ST-T Database. Accuracies of 86% and 94% were obtained for MSVM and CSVM classification, respectively, using 100 Fourier coefficients for reconstructing individual ECG beats. Further analysis presented an effective ECG arrhythmia classification scheme consisting of PCA as a feature-reduction method and a SIMCA classifier to differentiate between either four or six different types of arrhythmia. In separate studies, six and four types of beats (including NORMAL, PVC, APC, RBBB, LBBB, and FUSION beats) with time-domain features were extracted from the MIT-BIH arrhythmia database and the St Petersburg INCART 12-lead Arrhythmia Database (incartdb), respectively. Between 10 and 30 PC coefficients were selected for reconstructing individual ECG beats in the feature selection phase. The average classification accuracy of the proposed scheme was 98.61% and 97.78% using the limb lead and precordial lead datasets, respectively. In addition, using MSVM and SIMCA classifiers with four ECG beat types achieved an average classification accuracy of 76.83% and 98.33%, respectively. The effectiveness of the proposed algorithms was finally confirmed by successfully classifying both the six-beat and four-beat types of signal with a high accuracy ratio.
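The PCA feature-reduction step used before the SIMCA classifier can be sketched via the singular value decomposition. This is a generic illustration on random data, not the thesis's pipeline: the matrix shape (120 beats of 300 samples) and the 10-component cut are assumptions chosen to mirror the "10 to 30 PC coefficients" range mentioned above.

```python
import numpy as np

def pca_reduce(X, n_components=10):
    """Project a beat matrix (rows = beats) onto its first principal
    components; the scores serve as a reduced feature set."""
    Xc = X - X.mean(axis=0)                        # centre each sample column
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T                # PC scores as features

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 300))                    # 120 beats x 300 samples
Z = pca_reduce(X, n_components=10)
```

The leading components capture the directions of greatest variance, so the score variances decrease from the first column to the last.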

    Classification of De novo post-operative and persistent atrial fibrillation using multi-channel ECG recordings

    Atrial fibrillation (AF) is the most common sustained arrhythmia of the heart and also the most common complication developed after cardiac surgery. Due to its progressive nature, timely detection of AF is important. Currently, physicians use a surface electrocardiogram (ECG) for AF diagnosis. However, when a patient develops AF, its various development stages are not distinguishable to cardiologists based on visual inspection of the surface ECG signals. Therefore, severity detection of AF could start from differentiating between short-lasting AF and long-lasting AF. Here, de novo post-operative AF (POAF) is a good model for short-lasting AF, while long-lasting AF can be represented by persistent AF. Therefore, in this paper we address binary severity detection for these two specific types of AF. We focus on the differentiation of these two types because de novo POAF is the first time that a patient develops AF; hence, comparing its development to a more severe stage of AF (e.g., persistent AF) could be beneficial in unveiling the electrical changes in the atrium. To the best of our knowledge, this is the first paper that aims to differentiate these different AF stages. We propose a method that consists of three sets of discriminative features based on fundamentally different aspects of the multi-channel ECG data, namely the analysis of RR intervals, a greyscale image representation of the vectorcardiogram, and the frequency-domain representation of the ECG. Due to the nature of AF, these features are able to capture both morphological and rhythmic changes in the ECGs. Our classification system consists of a random forest classifier, after a feature selection stage using the ReliefF method. The detection efficiency is tested on 151 patients using 5-fold cross-validation. We achieved 89.07% accuracy in the classification of de novo POAF and persistent AF. The results show that the features are discriminative enough to reveal the severity of AF.
Moreover, inspection of the most important features sheds light on the different characteristics of de novo post-operative and persistent AF.
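Of the three feature sets above, the RR-interval analysis is the most self-contained to illustrate. The sketch below computes standard RR statistics (mean RR, SDNN, RMSSD) from R-peak times; these particular statistics are a common choice for capturing AF's rhythmic irregularity, not necessarily the exact features used in the paper, and the two example rhythms are synthetic.

```python
import numpy as np

def rr_features(r_peak_times):
    """Rhythm features from R-peak times (in seconds): mean RR interval,
    SDNN (overall variability) and RMSSD (beat-to-beat irregularity) -
    the kind of RR statistics that separate AF from sinus rhythm."""
    rr = np.diff(np.asarray(r_peak_times, dtype=float))  # RR intervals
    mean_rr = rr.mean()
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    return mean_rr, sdnn, rmssd

regular = np.arange(0, 10, 0.8)                          # steady ~75 bpm rhythm
rng = np.random.default_rng(1)
irregular = np.cumsum(rng.uniform(0.4, 1.2, size=13))    # AF-like irregularity
```

An irregular rhythm produces a much larger RMSSD than a steady one, which is exactly the kind of separation a downstream classifier can exploit.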

    Non-linear dynamical analysis of biosignals

    Biosignals are physiological signals that are recorded from various parts of the body. Some of the major biosignals are electromyograms (EMG), electroencephalograms (EEG) and electrocardiograms (ECG). These signals are of great clinical and diagnostic importance, and are analysed to understand their behaviour and to extract maximum information from them. However, they tend to be random and unpredictable in nature (non-linear), and conventional linear methods of analysis are insufficient. Hence, analysis using non-linear and dynamical system theory, chaos theory and fractal dimensions is proving to be very beneficial. In this project, ECG signals are of interest. Changes in the normal rhythm of a human heart may result in different cardiac arrhythmias, which may be fatal or cause irreparable damage to the heart when sustained over long periods of time. Hence the ability to identify arrhythmias from ECG recordings is of importance for clinical diagnosis and treatment, and also for understanding the electrophysiological mechanism of arrhythmias. To achieve this aim, algorithms were developed with the help of MATLAB® software. The classical logic of correlation was used in the development of algorithms to place signals into the various categories of cardiac arrhythmias. A sample set of 35 known ECG signals was obtained from the Physionet website for testing purposes. Later, 5 unknown ECG signals were used to determine the efficiency of the algorithms. A peak detection algorithm was written to detect the QRS complex. This complex is the most prominent waveform within an ECG signal, and its shape, duration and time of occurrence provide valuable information about the current state of the heart. The peak detection algorithm gave excellent results with very good accuracy for all the downloaded ECG signals, and was developed using classical linear techniques. Later, a peak detection algorithm using the discrete wavelet transform (DWT) was implemented.
This code was developed using nonlinear techniques and was amenable to implementation. Also, the time required for execution was reduced, making this code ideal for real-time processing. Finally, algorithms were developed to calculate the Kolmogorov complexity and Lyapunov exponent, which are nonlinear descriptors and enable the randomness and chaotic nature of ECG signals to be estimated. These measures of randomness and chaotic nature enable us to apply correct interrogative methods to the signal to extract maximum information. The codes developed gave fair results. It was possible to differentiate between normal ECGs and ECGs with ventricular fibrillation. The results show that the Kolmogorov complexity measure increases with an increase in pathology, from approximately 12.90 for normal ECGs to 13.87-14.39 for ECGs with ventricular fibrillation and ventricular tachycardia. Similar results were obtained for Lyapunov exponent measures, with a notable difference between normal ECGs (0-0.0095) and ECGs with ventricular fibrillation (0.1114-0.1799). However, it was difficult to differentiate between different types of arrhythmias.
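Kolmogorov complexity itself is not computable exactly; a standard practical estimator, often used for exactly this kind of biosignal analysis, is the Lempel-Ziv phrase count of a binarised signal. The sketch below is one such estimator, shown here as an illustration of the idea rather than a reproduction of the thesis's MATLAB code; the test signals are synthetic.

```python
import numpy as np

def lempel_ziv_complexity(s):
    """Number of distinct phrases in an LZ76-style parsing of a binary
    string - a common practical proxy for Kolmogorov complexity."""
    i, c, n = 0, 0, len(s)
    while i < n:
        k = 1
        # extend the current phrase while it already occurs earlier
        while i + k <= n and s[i:i + k] in s[:i + k - 1]:
            k += 1
        c += 1
        i += k
    return c

def binarise(x):
    """Median-threshold binarisation, a common preprocessing step."""
    x = np.asarray(x, dtype=float)
    m = np.median(x)
    return "".join("1" if v > m else "0" for v in x)

rng = np.random.default_rng(0)
rhythmic = np.sin(np.linspace(0, 20 * np.pi, 500))   # regular, low complexity
noisy = rng.standard_normal(500)                     # irregular, high complexity
c_low = lempel_ziv_complexity(binarise(rhythmic))
c_high = lempel_ziv_complexity(binarise(noisy))
```

A regular rhythm yields far fewer phrases than an erratic one, mirroring the normal-versus-fibrillation separation reported above.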

    A real-time data mining technique applied for critical ECG rhythm on handheld device

    Sudden cardiac arrest is often caused by ventricular arrhythmias, and these episodes can lead to death for patients with chronic heart disease. Hence, detection of such arrhythmias is crucial in mobile ECG monitoring. In this research, a systematic study is carried out to investigate the possible limitations that are preventing the realisation of a real-time ECG arrhythmia data-mining algorithm suitable for application on mobile devices. Based on the findings, a computationally lightweight algorithm is devised and tested. Ventricular tachycardia (VT) is the most common type of ventricular arrhythmia and is also the deadliest. A VT episode is due to a disorder of the regular contractions of the heart: it occurs when the ventricles generate a rapid heartbeat which disrupts the regular physiological cycle. The normal sinus rhythm (NSR) of a regular human heartbeat has its signature PQRST waveform occurring in a regular pattern, whereas a VT waveform is characterised by short R-R intervals, a widened QRS duration and the absence of P-waves. Each of the ECG arrhythmias previously mentioned has a unique waveform signature that can be exploited as a feature for the realisation of an automated ECG analysis application. In order to extract these known ECG waveform features, a time-domain analysis is proposed for feature extraction. Cross-correlation allows the computation of a coefficient that quantifies the similarity between two time series. Hence, by cross-correlating known ECG waveform templates with an unknown ECG signal, the coefficient can indicate their similarity. In previously published work, a preliminary study was carried out in which the cross-correlation coefficient wave (CCW) technique was introduced for feature extraction. The outcome of this work presents CCW as a promising feature to differentiate between NSR, VT and Vfib signals.
Moreover, cross-correlation computation does not require high computational overhead. Next, an automated detection algorithm requires a classification mechanism to make sense of the extracted feature. In a further published study, a fuzzy-set k-NN classifier was introduced for the classification of CCW features extracted from ECG signal segments, using a training set of size 180. The outcome of the study indicates that while the computationally lightweight fuzzy k-NN classifier can reliably classify NSR and VT signals, its detection rate for Vfib signals is low. Hence, a modified algorithm known as the fuzzy hybrid classifier is proposed. By implementing an expert-knowledge-based fuzzy inference system for the classification of the ECG signal, the Vfib detection rate was improved. In comparison, the hybrid fuzzy classifier is able to achieve a 91.1% correct-classification rate, 100% sensitivity and 100% specificity, outperforming the compared classifiers. The proposed detection and classification algorithm achieves high accuracy in analysing ECG signal features of NSR, VT and Vfib nature. Moreover, the proposed classifier has been successfully implemented on a smart mobile device and is able to perform data mining of the ECG signal with satisfactory results.
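The template-matching idea behind the CCW feature can be sketched as a sliding normalised cross-correlation. This is a generic illustration, not the published CCW algorithm: the "QRS-like" Gaussian template and the signal it is embedded in are synthetic stand-ins.

```python
import numpy as np

def ccw_sketch(template, signal):
    """Sliding normalised cross-correlation coefficient between a
    waveform template and a signal; values close to 1 mark segments
    that resemble the template."""
    t = (template - template.mean()) / (template.std() + 1e-12)
    n = t.size
    out = np.empty(signal.size - n + 1)
    for i in range(out.size):
        w = signal[i:i + n]
        w = (w - w.mean()) / (w.std() + 1e-12)   # normalise each window
        out[i] = np.dot(t, w) / n                # correlation coefficient
    return out

axis = np.linspace(0, 1, 100)
template = np.exp(-((axis - 0.5) ** 2) / 0.005)  # stand-in QRS-like bump
signal = np.zeros(500)
signal[200:300] = template                       # embed a matching beat
coeffs = ccw_sketch(template, signal)
```

Because each window is normalised, the coefficient is amplitude-invariant and peaks at 1 where the template aligns with a matching beat, which keeps the per-sample cost low enough for a handheld device.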

    Optimized Biosignals Processing Algorithms for New Designs of Human Machine Interfaces on Parallel Ultra-Low Power Architectures

    The aim of this dissertation is to explore Human Machine Interfaces (HMIs) in a variety of biomedical scenarios. The research addresses typical challenges in wearable and implantable devices for diagnostic, monitoring, and prosthetic purposes, suggesting a methodology for tailoring such applications to cutting-edge embedded architectures. The main challenge is the enhancement of high-level applications, also introducing Machine Learning (ML) algorithms, using parallel programming and specialized hardware to improve performance. The majority of these algorithms are computationally intensive, posing significant challenges for deployment on embedded devices, which have several limitations in terms of memory size, maximum operating frequency, and battery duration. The proposed solutions take advantage of a Parallel Ultra-Low Power (PULP) architecture, enhancing the elaboration on specific target architectures and heavily optimizing execution by exploiting software and hardware resources. The thesis starts by describing a methodology that can be considered a guideline for efficiently implementing algorithms on embedded architectures. This is followed by several case studies in the biomedical field, starting with the analysis of a hand gesture recognition application based on the Hyperdimensional Computing algorithm, which allows fast on-chip re-training, and a comparison with the state-of-the-art Support Vector Machine (SVM); a Brain-Computer Interface (BCI) to detect the response of the brain to a visual stimulus follows in the manuscript. Furthermore, a seizure detection application is also presented, exploring different solutions for the dimensionality reduction of the input signals. The last part is dedicated to an exploration of typical modules for the development of optimized ECG-based applications.

    Heartwave biometric authentication using machine learning algorithms

    PhD Thesis. The advancement of IoT, cloud services and related technologies has prompted heightened IT access security. Many products and solutions have implemented biometric approaches to address this security concern. Heartwave as a biometric modality offers potential due to the difficulty of falsifying the signal and the ease of signal acquisition from the fingers. However, the high variability of the heartwave signal with heart rate has imposed considerable headwinds on the development of heartwave-based biometric authentication. The thesis first reviews the state of the art in the domains of heartwave segmentation, feature extraction, identification of discriminating features, and classification. In particular, this thesis proposes a methodology based on the Discrete Wavelet Transform integrated with heart-rate-dependent parameters to extract discriminating features reliably and accurately. In addition, a statistical methodology using a Gaussian Mixture Model-Hidden Markov Model integrated with user-specific thresholds and heart rate has been proposed and developed to provide classification of individuals under varying heart rates. This investigation has led to the understanding that an individual's discriminating features vary with heart rate. Similarly, a neural-network-based methodology leveraging an ensemble Deep Belief Network (DBN) with stacked DBNs coded using Multiview Spectral Embedding has been explored and achieved good classification performance. Importantly, the amount of data required for training is significantly reduced.
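The wavelet-based feature extraction mentioned above can be illustrated with a single level of the Haar DWT, the simplest member of the wavelet family. This is only a sketch of the transform itself, assuming a synthetic input; the thesis's actual feature set additionally incorporates heart-rate-dependent parameters not shown here.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform, returning the
    approximation (low-pass) and detail (high-pass) coefficients."""
    x = np.asarray(x, dtype=float)
    if x.size % 2:
        x = x[:-1]                      # Haar pairs samples, so trim to even
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2.0)
    detail = (even - odd) / np.sqrt(2.0)
    return approx, detail

x = np.sin(np.linspace(0, 4 * np.pi, 256))   # stand-in heartwave segment
a, d = haar_dwt(x)
```

The transform is orthonormal, so signal energy is preserved exactly across the two coefficient bands; for a smooth signal most of that energy concentrates in the approximation band, which is what makes the coefficients useful as compact features.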

    Biometrics

    Biometrics uses methods for the unique recognition of humans based upon one or more intrinsic physical or behavioral traits. In computer science, particularly, biometrics is used as a form of identity access management and access control. It is also used to identify individuals in groups that are under surveillance. The book consists of 13 chapters, each focusing on a certain aspect of the problem. The book chapters are divided into three sections: physical biometrics, behavioral biometrics and medical biometrics. The key objective of the book is to provide a comprehensive reference and text on human authentication and people identity verification from physiological, behavioural and other points of view. It aims to publish new insights into current innovations in computer systems and technology for biometrics development and its applications. The book was reviewed by the editor Dr. Jucheng Yang and by many of the guest editors, including Dr. Girija Chetty, Dr. Norman Poh, Dr. Loris Nanni, Dr. Jianjiang Feng, Dr. Dongsun Park and Dr. Sook Yoon, who also made significant contributions to the book.