
    Fast detection of venous air embolism in Doppler heart sound using the wavelet transform

    The introduction of air bubbles into the systemic circulation can result in significant morbidity. Real-time monitoring of the continuous heart sound detected in patients by precordial Doppler ultrasound is therefore vital for early detection of venous air embolism (VAE) during surgery. In this study, the multiscale feature of wavelet transforms (WTs) is exploited to examine the embolic Doppler heart sound (DHS) during intravenous air injections in dogs. As humans and dogs share similar physiological conditions, the authors' methods and results for dogs are expected to be applicable to humans. The WT of DHS at scale 2^j (j=1,2) selectively magnified the power of the embolic, but not the normal, heart sound. Statistically, the enhanced embolic power was found to be sensitive (P<0.01 at 0.01 ml of injected air) and correlated significantly (P<0.0005, τ=0.83) with the volume of injected air from 0.01 to 0.10 ml. A fast detection algorithm of O(N) complexity with unit complexity constant was developed (processing speed: 8 ms per heartbeat), confirming the feasibility of real-time processing for both humans and dogs.
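    The scale-selective power enhancement described above can be sketched with a plain dyadic detail filter. This is a minimal illustration, not the authors' exact implementation: the Haar-style filter, the window length, and the sliding-window score are all assumptions chosen for clarity.

    ```python
    import numpy as np

    def dyadic_detail_power(x, j):
        # Haar-style detail filter dilated to scale 2**j -- an illustrative
        # stand-in for the dyadic wavelet used in the paper
        s = 2 ** j
        h = np.concatenate([np.ones(s), -np.ones(s)]) / (2 * s)
        return np.convolve(x, h, mode="same") ** 2

    def embolic_score(x, fs, j=1, win_s=0.5):
        # Mean detail power over a sliding window of roughly one heartbeat.
        # The cumulative sum keeps the whole detector O(N), matching the
        # linear-time complexity reported for the algorithm.
        p = dyadic_detail_power(x, j)
        w = int(win_s * fs)
        c = np.concatenate([[0.0], np.cumsum(p)])
        return (c[w:] - c[:-w]) / w
    ```

    A high-frequency embolic transient raises the windowed small-scale detail power, while the slow normal heart sound contributes little, which is the effect the paper exploits for detection.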

    Single-trial multiwavelet coherence in application to neurophysiological time series

    A method of single-trial coherence analysis is presented, based on continuous multiwavelets. Multiwavelets allow the construction of spectra and bivariate statistics such as coherence within single trials. Spectral estimates are made consistent through optimal time-frequency localization and smoothing. The use of multiwavelets is considered alongside an alternative single-trial method prevalent in the literature, with the focus on statistical, interpretive and computational aspects. The multiwavelet approach is shown to possess many desirable properties, including optimal conditioning, statistical descriptions and computational efficiency. The methods are then applied to bivariate surrogate and neurophysiological data for calibration and comparative study. Neurophysiological data were recorded intracellularly from two spinal motoneurones innervating the posterior biceps muscle during fictive locomotion in the decerebrated cat.
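    The single-trial idea can be sketched as follows: applying a family of orthogonal wavelets to one trial provides the independent spectral estimates that would otherwise require averaging across trials. The Hermite-function taper family, taper count, and time-averaged estimator below are illustrative assumptions, not the paper's exact construction.

    ```python
    import numpy as np

    def hermite_tapers(n, K, sigma):
        # First K Hermite functions: an orthogonal taper family, used here
        # as an illustrative stand-in for a multiwavelet family
        t = (np.arange(n) - n // 2) / sigma
        H = [np.ones_like(t), 2 * t]
        for k in range(2, K):
            H.append(2 * t * H[-1] - 2 * (k - 1) * H[-2])
        tapers = np.array([h * np.exp(-t ** 2 / 2) for h in H[:K]])
        return tapers / np.linalg.norm(tapers, axis=1, keepdims=True)

    def single_trial_coherence(x, y, fs, f, K=5, n_win=512, sigma=64.0):
        # Coherence at frequency f from K orthogonal complex wavelets
        # applied to a single trial (no across-trial averaging needed)
        carrier = np.exp(-2j * np.pi * f * np.arange(n_win) / fs)
        Sxy, Sxx, Syy = 0j, 0.0, 0.0
        for w in hermite_tapers(n_win, K, sigma) * carrier:
            wx = np.convolve(x, w, mode="valid")
            wy = np.convolve(y, w, mode="valid")
            Sxy += np.mean(wx * np.conj(wy))
            Sxx += np.mean(np.abs(wx) ** 2)
            Syy += np.mean(np.abs(wy) ** 2)
        return np.abs(Sxy) ** 2 / (Sxx * Syy)
    ```

    By the Cauchy-Schwarz inequality the estimate lies in [0, 1]; averaging over the K orthogonal wavelets is what makes a single-trial estimate statistically stable.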

    Data-driven multivariate and multiscale methods for brain computer interface

    This thesis focuses on the development of data-driven multivariate and multiscale methods for brain computer interface (BCI) systems. The electroencephalogram (EEG), the most convenient means to measure neurophysiological activity due to its noninvasive nature, is mainly considered. The nonlinearity and nonstationarity inherent in EEG, together with its multichannel recording nature, require a new set of data-driven multivariate techniques to estimate features more accurately for enhanced BCI operation. A long-term goal is also to enable an alternative EEG recording strategy for long-term and portable monitoring. Empirical mode decomposition (EMD) and local mean decomposition (LMD), fully data-driven adaptive tools, are considered to decompose the nonlinear and nonstationary EEG signal into a set of components which are highly localised in time and frequency. It is shown that the complex and multivariate extensions of EMD, which can exploit common oscillatory modes within multivariate (multichannel) data, can be used to accurately estimate and compare the amplitude and phase information among multiple sources, a key step in feature extraction for BCI systems. A complex extension of local mean decomposition is also introduced and its operation is illustrated on two-channel neuronal spike streams. Common spatial pattern (CSP), a standard feature extraction technique for BCI applications, is also extended to the complex domain using augmented complex statistics. Depending on the circularity/noncircularity of a complex signal, the appropriate complex CSP algorithm can be chosen to produce the best classification performance between two different EEG classes. Using these complex and multivariate algorithms, two cognitive brain studies are investigated for more natural and intuitive design of advanced BCI systems.
Firstly, a Yarbus-style auditory selective attention experiment is introduced to measure the user's attention to a sound source among a mixture of sound stimuli, which is aimed at improving the usefulness of hearing instruments such as hearing aids. Secondly, emotion experiments elicited by taste and taste recall are examined to determine the pleasure or displeasure evoked by a food, for the implementation of affective computing. The separation between the two emotional responses is examined using real- and complex-valued common spatial pattern methods. Finally, we introduce a novel approach to brain monitoring based on EEG recordings from within the ear canal, embedded on a custom-made hearing aid earplug. The new platform promises the possibility of both short- and long-term continuous use for standard brain monitoring and interfacing applications.
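The real-valued CSP step underlying the abstract above can be sketched as below. The generalized-eigendecomposition formulation and log-variance features are the standard CSP recipe; the trial shapes and filter counts are illustrative assumptions, and the thesis's complex-valued extension is not reproduced here.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2, n_filters=2):
    # X1, X2: collections of trials (channels x samples) for two EEG
    # classes. CSP jointly diagonalises the class covariances, so the
    # extreme eigenvectors maximise variance for one class vs the other.
    def mean_cov(X):
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)
    C1, C2 = mean_cov(X1), mean_cov(X2)
    vals, vecs = eigh(C1, C1 + C2)          # generalized eigenproblem
    order = np.argsort(vals)                # ascending eigenvalues
    picks = np.concatenate([order[:n_filters], order[-n_filters:]])
    return vecs[:, picks].T                 # rows are spatial filters

def csp_features(W, x):
    # Normalised log-variance of each filtered signal -> feature vector
    v = np.var(W @ x, axis=1)
    return np.log(v / v.sum())
```

The log-variance features feed a simple classifier; large eigenvalues correspond to filters whose output variance is high for class 1 and low for class 2, and vice versa for small eigenvalues.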

    Automatic analysis and classification of cardiac acoustic signals for long term monitoring

    Objective: Cardiovascular diseases are the leading cause of death worldwide, resulting in over 17.9 million deaths each year. Most of these diseases are preventable and treatable, and their progression and outcomes are significantly better with early-stage diagnosis and proper disease management. Among the approaches available to assist with early-stage diagnosis and management of cardiac conditions, automatic analysis of auscultatory recordings is one of the most promising, since it is particularly suitable for ambulatory/wearable monitoring. Proper investigation of abnormalities present in cardiac acoustic signals can thus provide vital clinical information to assist long-term monitoring. Cardiac acoustic signals, however, are very susceptible to noise and artifacts, and their characteristics vary greatly with the recording conditions, which makes the analysis challenging. Additionally, there are challenges in the steps used for automatic analysis and classification of cardiac acoustic signals: broadly, the segmentation, feature extraction and subsequent classification of recorded signals using selected features. This thesis presents approaches using novel features with the aim of assisting the automatic early-stage detection of cardiovascular diseases, with improved performance, using cardiac acoustic signals collected in real-world conditions. Methods: Cardiac auscultatory recordings were studied to identify potential features to help in the classification of recordings from subjects with and without cardiac diseases. The diseases considered in this study are valvular heart diseases due to stenosis and regurgitation, atrial fibrillation, and splitting of the fundamental heart sounds leading to additional lub/dub sounds in the systole or diastole interval of a cardiac cycle.
The localisation of cardiac sounds of interest was performed using adaptive wavelet-based filtering in combination with the Shannon energy envelope and prior information about the fundamental heart sounds. This is a prerequisite step for the feature extraction and subsequent classification of recordings, leading to a more precise diagnosis. Localised segments of S1 and S2 sounds, and artifacts, were used to extract a set of perceptual and statistical features using the wavelet transform, homomorphic filtering, the Hilbert transform and mel-scale filtering, which were then used to train an ensemble classifier to interpret S1 and S2 sounds. Once sound peaks of interest were identified, features extracted from these peaks, together with the features used for the identification of S1 and S2 sounds, were used to develop an algorithm to classify recorded signals. Overall, 99 features were extracted and statistically analysed using neighborhood component analysis (NCA) to identify the features with the greatest ability to classify recordings. Selected features were then used to train an ensemble classifier to classify abnormal recordings, and hyperparameters were optimized to evaluate the performance of the trained classifier. Thus, a machine learning-based approach for the automatic identification and classification of S1 and S2, and of normal and abnormal recordings, in real-world noisy recordings using a novel feature set is presented. The validity of the proposed algorithm was tested using acoustic signals recorded in real-world, non-controlled environments at four auscultation sites (aortic valve, tricuspid valve, mitral valve, and pulmonary valve) from subjects with and without cardiac diseases, together with recordings from three large public databases. The performance metrics of the methodology, namely classification accuracy (CA), sensitivity (SE), precision (P+), and F1 score, were evaluated.
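The Shannon-energy localisation step can be sketched as follows. This is a minimal sketch: the window length, smoothing, and peak-picking thresholds are illustrative assumptions rather than the thesis's tuned values, and the adaptive wavelet filtering stage is omitted.

```python
import numpy as np
from scipy.signal import find_peaks

def shannon_envelope(x, fs, win_s=0.02):
    # Normalised Shannon energy emphasises medium-intensity components
    # (heart sounds) over both low-level noise and sharp spikes
    xn = x / (np.max(np.abs(x)) + 1e-12)
    se = -xn ** 2 * np.log(xn ** 2 + 1e-12)
    w = max(1, int(win_s * fs))
    env = np.convolve(se, np.ones(w) / w, mode="same")
    return (env - env.mean()) / (env.std() + 1e-12)

def locate_heart_sounds(x, fs, min_gap_s=0.15):
    # Candidate S1/S2 locations: envelope peaks separated by at least
    # min_gap_s, an assumed refractory gap between heart sounds
    env = shannon_envelope(x, fs)
    peaks, _ = find_peaks(env, height=0.5, distance=int(min_gap_s * fs))
    return peaks
```

The returned peak indices are the candidate S1/S2 positions that the later feature-extraction and classification stages would operate on.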
Results: This thesis proposes four different algorithms to automatically classify, using cardiac acoustic signals: fundamental heart sounds (S1 and S2); normal fundamental sounds versus abnormal additional lub/dub sounds; normal versus abnormal recordings; and recordings with heart valve disorders, namely mitral stenosis (MS), mitral regurgitation (MR), mitral valve prolapse (MVP), aortic stenosis (AS) and murmurs. The results obtained from these algorithms were as follows:
• The algorithm to classify S1 and S2 sounds achieved an average SE of 91.59% and 89.78%, and F1 scores of 90.65% and 89.42%, in classifying S1 and S2, respectively. 87 features were extracted and statistically studied to identify the top 14 features with the best capabilities in classifying S1, S2 and artifacts. The analysis showed that the most relevant features were those extracted using the Maximal Overlap Discrete Wavelet Transform (MODWT) and the Hilbert transform.
• The algorithm to classify normal fundamental heart sounds versus abnormal additional lub/dub sounds in the systole or diastole intervals of a cardiac cycle achieved an average SE of 89.15%, P+ of 89.71%, F1 of 89.41%, and CA of 95.11% on the test dataset from the PASCAL database. The top 10 features that achieved the highest weights in classifying these recordings were also identified.
• Normal versus abnormal classification of recordings using the proposed algorithm achieved a mean CA of 94.172% and SE of 92.38% in classifying recordings from the different databases. Among the top 10 acoustic features identified, the deterministic energy of the sound peaks of interest and the instantaneous frequency extracted using the Hilbert-Huang transform achieved the highest weights.
• The machine learning-based approach proposed to classify recordings of heart valve disorders (AS, MS, MR, and MVP) achieved an average CA of 98.26% and SE of 95.83%. 99 acoustic features were extracted and their abilities to differentiate these abnormalities were examined using weights obtained from neighborhood component analysis (NCA). The top 10 features with the greatest abilities in classifying these abnormalities using recordings from the different databases were also identified.
The achieved results demonstrate the ability of the algorithms to automatically identify and classify cardiac sounds. This work provides the basis for measuring many useful clinical attributes of cardiac acoustic signals and can potentially help in monitoring overall cardiac health over longer durations. The work presented in this thesis is the first of its kind to validate the results using both normal and pathological cardiac acoustic signals, recorded continuously for 5 minutes at four different auscultation sites in non-controlled real-world conditions.

    A Comparison of Wavelet and Simplicity-Based Heart Sound and Murmur Segmentation Methods

    Stethoscopes are the most commonly used medical devices for diagnosing heart conditions because they are inexpensive, noninvasive, and light enough to be carried around by a clinician. Auscultation with a stethoscope requires considerable skill and experience, but the introduction of digital stethoscopes allows for the automation of this task. Auscultation waveform segmentation, the process of determining the boundaries of heart sound and murmur segments, is the primary challenge in automating the diagnosis of various heart conditions. The purpose of this thesis is to improve the accuracy and efficiency of established techniques for detecting, segmenting, and classifying heart sounds and murmurs in digitized phonocardiogram audio files. Two separate segmentation techniques, based on the discrete wavelet transform (DWT) and the simplicity transform, are integrated into a MATLAB software system that is capable of automatically detecting and classifying sound segments. The performance of the two segmentation methods for recognizing normal heart sounds and several different heart murmurs is compared by quantifying the results with clinical and technical metrics. The two clinical metrics are the false negative detection rate (FNDR) and the false positive detection rate (FPDR), which count heart cycles rather than sound segments. The wavelet and simplicity methods have FNDRs of 4% and 9%, respectively, so it is unlikely that either method would miss a heart condition. However, the respective FPDRs of 22% and 0% signify that the wavelet method is likely to detect false heart conditions, while the simplicity method is not. The two technical metrics are the true murmur detection rate (TMDR) and the false murmur detection rate (FMDR), which count sound segments rather than heart cycles. Both methods are equally likely to detect true murmurs, given their 83% TMDR.
However, the respective FMDRs of 13% and 0% imply that the wavelet method is susceptible to detecting false murmurs, while the simplicity method is not. Simplicity-based segmentation therefore demonstrated superior performance to wavelet-based segmentation: both were equally likely to detect true murmurs, but only the simplicity method avoided detecting false murmurs.
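The wavelet side of the comparison rests on multiresolution detail energies. A minimal sketch follows, assuming a plain Haar DWT and a fine-scale energy ratio as a murmur indicator; the thesis's actual wavelet choice and decision rules are not reproduced.

```python
import numpy as np

def haar_dwt(x, levels=4):
    # One-dimensional Haar DWT: returns the final approximation and the
    # detail coefficients at each level (finest first)
    details = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        if len(a) % 2:
            a = a[:-1]
        details.append((a[0::2] - a[1::2]) / np.sqrt(2))
        a = (a[0::2] + a[1::2]) / np.sqrt(2)
    return a, details

def band_energy_ratio(x, levels=4):
    # Fraction of signal energy in the two finest detail bands. Murmurs,
    # being broadband, raise this ratio relative to the low-frequency
    # fundamental heart sounds.
    a, details = haar_dwt(x, levels)
    fine = sum(np.sum(d ** 2) for d in details[:2])
    total = fine + sum(np.sum(d ** 2) for d in details[2:]) + np.sum(a ** 2)
    return fine / (total + 1e-12)
```

Because the Haar DWT is orthonormal, the ratio lies in [0, 1]; thresholding such a band-energy statistic per segment is one simple way a wavelet-based segmenter can flag murmur-like content.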