A Combined Model for Noise Reduction of Lung Sound Signals Based on Empirical Mode Decomposition and Artificial Neural Network
Computer analysis of Lung Sound (LS) signals has been proposed in recent years as a tool for assessing the status of the lungs, but major challenges remain, chief among them the contamination of LS recordings by environmental noises of varying intensities from different sources. Common approaches to noise reduction in LS signals apply thresholding to Discrete Wavelet Transform (DWT) coefficients or to the Empirical Mode Decomposition (EMD) of the signal; however, these methods require the SNR to be estimated in order to set an appropriate noise-removal threshold. To address this problem, a combined model based on EMD and an Artificial Neural Network (ANN) trained at several SNRs (0, 5, 10, 15, and 20 dB) is proposed in this research. The model can remove white and pink noise in the range of -2 to 20 dB without thresholding or even estimating the SNR, while preserving the main content of the LS signal. The proposed method is also compared with the EMD-custom method, and the results for the SNR and fit criteria indicate the clear superiority of the proposed method. For example, at SNR = 0 dB, the combined method improves the SNR by 9.41 and 8.23 dB for white and pink noise, respectively, while the corresponding values for the EMD-custom method are 5.89 and 4.31 dB.
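The DWT-thresholding baseline that the abstract contrasts with can be sketched in pure numpy using the Haar wavelet and the universal threshold; the wavelet choice, decomposition depth, and test signal below are assumptions for illustration, not the paper's setup:

```python
import numpy as np

def haar_analysis(x):
    """One level of the Haar DWT: returns (approximation, detail)."""
    x = x.reshape(-1, 2)
    a = (x[:, 0] + x[:, 1]) / np.sqrt(2.0)
    d = (x[:, 0] - x[:, 1]) / np.sqrt(2.0)
    return a, d

def haar_synthesis(a, d):
    """Inverse of one Haar analysis level."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def wavelet_denoise(x, levels=3):
    """Soft-threshold the detail coefficients with the universal threshold
    sigma * sqrt(2 * log(N)); sigma is estimated from the finest-level
    details via the median absolute deviation (MAD)."""
    a, details = x.astype(float), []
    for _ in range(levels):
        a, d = haar_analysis(a)
        details.append(d)
    sigma = np.median(np.abs(details[0])) / 0.6745       # MAD noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(x.size))
    details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0.0) for d in details]
    for d in reversed(details):
        a = haar_synthesis(a, d)
    return a

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024, endpoint=False)
clean = np.sin(2 * np.pi * 4 * t)                        # slow, smooth signal
noisy = clean + 0.3 * rng.standard_normal(t.size)        # additive white noise
denoised = wavelet_denoise(noisy)
```

Note that the threshold here is derived from a noise-level estimate of the input, which is exactly the dependence on SNR that the proposed EMD-ANN model avoids.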
Improving Temporal Accuracy of Human Metabolic Chambers for Dynamic Metabolic Studies
Metabolic chambers are powerful tools for assessing human energy expenditure, providing flexibility and comfort for subjects in a near free-living environment. However, the flexibility offered by the large living-room size makes it difficult to assess dynamic human metabolic signals, such as those generated during high-intensity interval training and short-term involuntary physical activity, with sufficient temporal accuracy. Therefore, this paper presents methods to improve the temporal accuracy of metabolic chambers. The proposed methods include 1) adopting the shortest possible step size, here one minute, to compute the finite-derivative terms in the metabolic rate calculation, and 2) applying a robust noise reduction method, total variation denoising, to minimize the large noise generated by the short derivative term while preserving the transient edges of the dynamic metabolic signals. Validated against 24-hour gas infusion tests, the proposed method reconstructs dynamic metabolic signals with the best temporal accuracy among state-of-the-art approaches, achieving a root mean square error of 0.27 kcal/min (18.8 J/s), while maintaining a low cumulative error in 24-hour total energy expenditure of less than 45 kcal/day (188,280 J/day). When applied to a human exercise session, the proposed methods also show the best performance in recovering the dynamics of exercise energy expenditure. Overall, the proposed methods improve the temporal resolution of the chamber system, enabling metabolic studies involving dynamic signals, such as short interval exercises, to be carried out in metabolic chambers.
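The total variation denoising step described above can be sketched in pure numpy; the dual projected-gradient solver, regularization weight, and piecewise-constant test signal below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def tv_denoise_1d(y, lam, n_iter=1000):
    """1-D total variation denoising:
        minimize  0.5 * ||x - y||^2  +  lam * sum_i |x[i+1] - x[i]|
    solved by projected gradient ascent on the dual variable z
    (Chambolle-style; step size 0.25 <= 1/||D||^2 ensures convergence)."""
    y = np.asarray(y, dtype=float)
    z = np.zeros(y.size - 1)
    for _ in range(n_iter):
        x = y.copy()                     # x = y - D^T z, D = forward difference
        x[:-1] += z
        x[1:] -= z
        z = np.clip(z + 0.25 * np.diff(x), -lam, lam)
    x = y.copy()
    x[:-1] += z
    x[1:] -= z
    return x

# Piecewise-constant "metabolic rate" with additive noise (illustrative only).
rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(100), np.ones(100), 0.3 * np.ones(100)])
noisy = clean + 0.1 * rng.standard_normal(clean.size)
denoised = tv_denoise_1d(noisy, lam=0.5)
```

The edge-preserving behaviour is visible in the result: the flat segments are smoothed while the two step transitions stay sharp, which is why TV denoising suits the abrupt onsets of exercise bouts better than a moving average.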
Deep Learning in Cardiology
The medical field is creating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction, and intervention. Deep learning is a representation learning method consisting of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal, and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology that also apply to medicine in general, while proposing certain directions as the most viable for clinical use.
Quality and denoising in real-time functional magnetic resonance imaging neurofeedback: A methods review
First published: 25 April 2020
Neurofeedback training using real-time functional magnetic resonance imaging (rtfMRI-NF) allows subjects to voluntarily control localised and distributed brain activity.
It has sparked increased interest as a promising non-invasive treatment option in
neuropsychiatric and neurocognitive disorders, although its efficacy and clinical significance
are yet to be determined. In this work, we present the first extensive review
of acquisition, processing and quality control methods available to improve the quality
of the neurofeedback signal. Furthermore, we investigate the state of denoising
and quality control practices in 128 recently published rtfMRI-NF studies. We found:
(a) that less than a third of the studies reported implementing standard real-time
fMRI denoising steps, (b) significant room for improvement with regards to methods
reporting and (c) the need for methodological studies quantifying and comparing the
contribution of denoising steps to the neurofeedback signal quality. Advances in
rtfMRI-NF research depend on reproducibility of methods and results. Notably, a systematic
effort is needed to build up evidence that disentangles the various mechanisms
influencing neurofeedback effects. To this end, we recommend that future
rtfMRI-NF studies: (a) report implementation of a set of standard real-time fMRI denoising
steps according to a proposed COBIDAS-style checklist (https://osf.io/kjwhf/),
(b) ensure the quality of the neurofeedback signal by calculating and reporting
community-informed quality metrics and applying offline control checks and (c) strive
to adopt transparent principles in the form of methods and data sharing and support
of open-source rtfMRI-NF software. Code and data for reproducibility, as well as an
interactive environment to explore the study data, can be accessed at https://github.com/jsheunis/quality-and-denoising-in-rtfmri-nf. Funding: LSH-TKI, Grant/Award Number: LSHM16053-SGF; Philips Research.
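One of the community-informed quality metrics such a checklist can report is the temporal signal-to-noise ratio (tSNR); a minimal numpy sketch, with array shapes chosen purely for illustration:

```python
import numpy as np

def tsnr(timeseries, axis=-1):
    """Temporal SNR: voxelwise mean over time divided by the temporal
    standard deviation; higher tSNR indicates a more stable signal."""
    mean = timeseries.mean(axis=axis)
    std = timeseries.std(axis=axis)
    # Avoid division by zero for constant (e.g. masked-out) voxels.
    return np.divide(mean, std, out=np.zeros_like(mean), where=std > 0)

# Illustrative 4-D volume: 4 x 4 x 4 voxels, 50 time points.
rng = np.random.default_rng(0)
data = 100.0 + rng.standard_normal((4, 4, 4, 50))
tsnr_map = tsnr(data)
```

In an offline quality-control check, the tSNR map would be computed per run and summarised (for example within a region-of-interest mask) alongside motion and drift metrics.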
Signal Processing Using Non-invasive Physiological Sensors
Non-invasive biomedical sensors monitor physiological parameters from the human body for potential future therapies and healthcare solutions. Today, a critical factor in providing a cost-effective healthcare system is improving patients' quality of life and mobility, which can be achieved by developing non-invasive sensor systems that can then be deployed at the point of care, used at home, or integrated into wearable devices for long-term data collection. Another factor that plays an integral part in a cost-effective healthcare system is the signal processing of the data recorded with non-invasive biomedical sensors. In this book, we aimed to attract researchers interested in the application of signal processing methods to different biomedical signals, such as the electroencephalogram (EEG), electromyogram (EMG), functional near-infrared spectroscopy (fNIRS), electrocardiogram (ECG), galvanic skin response, pulse oximetry, and photoplethysmogram (PPG). We encouraged new signal processing methods, or novel applications of existing signal processing methods to physiological signals, to help healthcare providers make better decisions.
Single channel speech enhancement by colored spectrograms
Speech enhancement concerns the processes required to remove unwanted
background sounds from the target speech to improve its quality and
intelligibility. In this paper, a novel approach for single-channel speech
enhancement is presented, using colored spectrograms. We propose the use of a
deep neural network (DNN) architecture adapted from the pix2pix generative
adversarial network (GAN) and train it over colored spectrograms of speech to
denoise them. After denoising, the colors of spectrograms are translated to
magnitudes of short-time Fourier transform (STFT) using a shallow regression
neural network. These estimated STFT magnitudes are later combined with the
noisy phases to obtain the enhanced speech. The results show an improvement of almost 0.84 points in the perceptual evaluation of speech quality (PESQ) and 1% in short-term objective intelligibility (STOI) over the unprocessed noisy data. The gain in quality and intelligibility over the unprocessed signal is almost equal to the gain achieved by the baseline methods used for comparison with the proposed model, but at a much reduced computational cost. The proposed solution offers a comparable PESQ score at almost 10 times lower computational cost than a similar baseline model, trained on grayscale spectrograms, that achieved the highest PESQ score, while it shows only a 1% deficit in STOI at 28 times lower computational cost compared to another baseline system based on a convolutional neural network GAN (CNN-GAN) that produces the most intelligible speech.
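The final reconstruction step, combining estimated STFT magnitudes with the phase of the noisy input, can be sketched with scipy; the window length, the test tone, and the identity "denoiser" standing in for the network's magnitude estimate are assumptions for illustration:

```python
import numpy as np
from scipy.signal import stft, istft

fs = 16000
t = np.arange(fs) / fs                        # one second of audio
speech = np.sin(2 * np.pi * 440 * t)          # stand-in for a speech signal
rng = np.random.default_rng(0)
noisy = speech + 0.05 * rng.standard_normal(speech.size)

# Analysis STFT of the noisy input.
f, frames, Z = stft(noisy, fs=fs, nperseg=512)

# In the paper a DNN estimates clean magnitudes from colored spectrograms;
# here we simply pass the noisy magnitude through unchanged.
est_mag = np.abs(Z)

# Combine the (estimated) magnitude with the noisy phase and invert.
enhanced_Z = est_mag * np.exp(1j * np.angle(Z))
_, enhanced = istft(enhanced_Z, fs=fs, nperseg=512)
enhanced = enhanced[:noisy.size]
```

Because the magnitude is unchanged here, the inverse STFT reconstructs the noisy input almost exactly; in the actual pipeline `est_mag` would come from the regression network that maps spectrogram colors back to magnitudes.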
Advanced sensing technologies and systems for lung function assessment
Chest X-rays and computed tomography scans are highly accurate lung assessment tools, but their hazardous nature and high cost remain a barrier for many patients. Acoustic imaging is an alternative approach to lung function assessment that is non-hazardous, less costly, and follows a patient-to-equipment approach. In this thesis, the suitability of acoustic imaging for lung health assessment is demonstrated via systematic review and numerical airway modelling. An acoustic lung sound acquisition system, consisting of an optimal denoising filter whose output is translated into images for continual and reliable lung function assessment, is then developed.
To the author's best knowledge, locating obstructed airways via an acoustic lung model and the resulting acoustic lung imaging have yet to be investigated in the open literature; hence, a novel acoustic lung spatial model was first developed in this research, which links acoustic lung sounds and acoustic images with pathologic changes. About 89% structural similarity between an acoustic reference image based on actual lung sound and the developed model's acoustic image based on the computation of airway impedance was achieved.
External interference is inevitable in lung sound recordings; thus, a hybrid of wavelet-based total variation (WATV) denoising and an empirical Wiener denoising filter is proposed to enhance recorded lung sound signals. To the author's best knowledge, the integration of WATV and Wiener filters has not been investigated for lung sound signals. Selection and analysis of optimal parameters for the denoising filter were performed through a case study. The optimal parameters achieved through simulation studies led to an average 12.69 ± 5.05 dB improvement in signal-to-noise ratio (SNR), and the average SNR was improved by 16.92 ± 8.51 dB in the experimental studies. The hybrid denoising filter significantly enhances the signal quality of the captured lung sounds while preserving the characteristics of the lung sound signal, and it is less sensitive to variation in the SNR of the input signal.
A robust system was developed based on the established lung spatial model and denoising filter through hardware redesign and signal processing; it outperformed commercial digital stethoscopes in SNR and root mean square error by about 8 dB and 0.15, respectively. Regarding power spectrum mapping of sensing sensitivity, the developed system's sensor position is neutral, as opposed to digital stethoscopes, when representing lung signals, with a signal power loss ratio of around 5 dB compared to 10 dB for digital stethoscopes. In the experimental studies, the developed system improves detection in the obstructed airway region by about 10% compared to digital stethoscopes.
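The SNR figures above are conventionally computed against a clean reference signal; a minimal sketch of the metric (the test signals are illustrative, not the thesis data):

```python
import numpy as np

def snr_db(clean, observed):
    """SNR in dB of an observed signal against a clean reference:
    10 * log10(signal power / residual noise power)."""
    clean = np.asarray(clean, dtype=float)
    noise = np.asarray(observed, dtype=float) - clean
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

clean = np.ones(1000)
noisy = clean + 0.1                  # constant error of amplitude 0.1
print(round(snr_db(clean, noisy), 1))          # 20.0 (dB)

# An "SNR improvement" is the difference between output and input SNR.
improvement = snr_db(clean, clean + 0.01) - snr_db(clean, noisy)
```

A denoiser that reduces the residual amplitude tenfold, as in the second call, yields a 20 dB improvement; the thesis reports such improvements averaged over many recordings.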
DIGITAL ANALYSIS OF CARDIAC ACOUSTIC SIGNALS IN CHILDREN
Milad El-Segaier, MD, Division of Paediatric Cardiology, Department of Paediatrics, Lund University Hospital, Lund, Sweden. SUMMARY: Despite tremendous development in cardiac imaging, the stethoscope and cardiac auscultation remain the primary diagnostic tools in the evaluation of cardiac pathology. With the advent of miniaturized and powerful technology for data acquisition, display, and digital signal processing, the possibilities for detecting cardiac pathology by signal analysis have increased. The objective of this study was to develop a simple, cost-effective diagnostic tool for the analysis of cardiac acoustic signals. Heart sounds and murmurs were recorded in 360 children with a single-channel device and in 15 children with a multiple-channel device. Time intervals between acoustic signals were measured. Short-time Fourier transform (STFT) analysis was used to present the acoustic signals to a digital algorithm that detects heart sounds, defines systole and diastole, and analyses the spectrum of a cardiac murmur. A statistical model for distinguishing physiological murmurs from pathological findings was developed using logistic regression analysis. The receiver operating characteristic (ROC) curve was used to evaluate the discriminating ability of the developed model, and the sensitivities and specificities of the model were calculated at different cut-off points. Signal deconvolution using blind source separation (BSS) analysis was performed to separate signals from different sources. The first and second heart sounds (S1 and S2) were detected with high accuracy (100% for S1 and 97% for S2), independently of heart rate and the presence of a murmur. The systole and diastole were defined, but only the systolic murmur was analysed in this work.
The developed statistical model showed excellent predictive ability (area under the curve, AUC = 0.995) in distinguishing a physiological murmur from a pathological one, with high sensitivity and specificity (98%). In further analyses, deconvolution of the signals was successfully performed using blind source separation. This yielded two spatially independent sources: the heart sounds (S1 and S2) in one component and the murmur in another. The study supports the view that a cost-effective diagnostic device would be useful in primary health care. It would diminish the need to refer children with a cardiac murmur to cardiac specialists and reduce the load on the health care system. Likewise, it would help to minimize the psychological stress experienced by the children and their parents at an early stage of medical care.
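The AUC reported above can be computed directly from scores and labels via the Mann-Whitney rank statistic, without plotting the ROC curve; a minimal sketch with made-up murmur scores, not the study's data:

```python
import numpy as np

def auc_from_scores(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive case scores
    higher than a randomly chosen negative one (ties count half)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    # All pairwise comparisons; O(n_pos * n_neg), fine for small samples.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)

# Hypothetical scores: higher = more likely a pathological murmur.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1, 1, 1, 0, 0, 0]              # perfectly separated
print(auc_from_scores(scores, labels))   # 1.0
```

An AUC of 0.995, as in the study, means a randomly chosen pathological case outranks a randomly chosen physiological one 99.5% of the time.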
Sleep Stage Classification: A Deep Learning Approach
Sleep occupies a significant part of human life, and the diagnosis of sleep-related disorders is of great importance. To record specific physical and electrical activities of the brain and body, a multi-parameter test called polysomnography (PSG) is normally used. The visual process of sleep stage classification is time-consuming, subjective, and costly. To improve the accuracy and efficiency of sleep stage classification, automatic classification algorithms have been developed.
In this research work, we focused on the pre-processing (filtering boundaries and de-noising algorithms) and classification steps of automatic sleep stage classification. The main motivation for this work was to develop a pre-processing and classification framework that cleans the input EEG signal without manipulating the original data, thus enhancing the learning stage of deep learning classifiers.
For pre-processing EEG signals, a lossless adaptive artefact removal method was proposed. Rather than using artificial noise as in other works, we used real EEG data contaminated with EOG and EMG to evaluate the proposed method. The proposed adaptive algorithm led to a significant enhancement in overall classification accuracy. In the classification area, we evaluated the performance of the most common sleep stage classifiers using a comprehensive set of features extracted from PSG signals. Considering the challenges and limitations of conventional methods, we proposed two deep learning-based methods for the classification of sleep stages, based on a Stacked Sparse AutoEncoder (SSAE) and a Convolutional Neural Network (CNN). The proposed methods performed more efficiently by eliminating the need for conventional feature selection and feature extraction steps, respectively. Moreover, although our systems were trained with a lower number of samples than similar studies, they achieved state-of-the-art accuracy and higher overall sensitivity.
Design Methodology of a New Wavelet Basis Function for Fetal Phonocardiographic Signals
A fetal phonocardiography (fPCG) based antenatal care system is economical and has potential for long-term monitoring owing to the noninvasive nature of the technique. Its main limitation is that noise is superimposed on the useful signal during acquisition and transmission, and conventional filtering may result in the loss of valuable diagnostic information. This calls for a robust, versatile, and adaptable denoising method applicable in different operative circumstances. In this work, a novel algorithm based on the wavelet transform has been developed for denoising fPCG signals. Successful use of wavelet theory in denoising depends heavily on the selection of a suitable wavelet basis function, so this work introduces a new mother wavelet basis function for denoising fPCG signals. The performance of the newly developed wavelet is found to be better than that of existing wavelets. For this purpose, a two-channel filter bank, based on the characteristics of the fPCG signal, is designed. The resulting denoised fPCG signals retain the important diagnostic information contained in the original signal.
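A two-channel filter bank of the kind designed above splits the signal into lowpass and highpass channels, each downsampled by two, and must reconstruct the input exactly; a minimal sketch using the orthogonal Haar pair as a placeholder for the paper's custom basis:

```python
import numpy as np

def analysis(x):
    """Two-channel analysis: lowpass and highpass outputs, each
    downsampled by 2 (Haar filters h = [1, 1]/sqrt(2), g = [1, -1]/sqrt(2))."""
    x = x.reshape(-1, 2)
    low = (x[:, 0] + x[:, 1]) / np.sqrt(2.0)
    high = (x[:, 0] - x[:, 1]) / np.sqrt(2.0)
    return low, high

def synthesis(low, high):
    """Upsample both channels and apply the synthesis filters;
    for an orthogonal bank this inverts `analysis` exactly."""
    x = np.empty(2 * low.size)
    x[0::2] = (low + high) / np.sqrt(2.0)
    x[1::2] = (low - high) / np.sqrt(2.0)
    return x

x = np.array([4.0, 2.0, 5.0, 5.0, 1.0, -3.0])
low, high = analysis(x)
rec = synthesis(low, high)       # perfect reconstruction
```

A custom mother wavelet like the one introduced in the paper replaces the Haar filter pair with coefficients matched to fPCG morphology, while the perfect-reconstruction constraint demonstrated here must still hold.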