
    Multivariate Signal Denoising Based on Generic Multivariate Detrended Fluctuation Analysis

    Full text link
    We propose a generic multivariate extension of detrended fluctuation analysis (DFA) that incorporates interchannel dependencies within input multichannel data to perform its long-range correlation analysis. We next demonstrate the utility of the proposed method on the multivariate signal denoising problem. In particular, our denoising approach first obtains a data-driven multiscale signal representation via the multivariate variational mode decomposition (MVMD) method. Then, the proposed multivariate extension of DFA (MDFA) is used to reject the predominantly noisy modes based on their randomness scores. The denoised signal is reconstructed from the remaining multichannel modes, albeit after removal of residual noise traces using principal component analysis (PCA). The utility of our denoising method is demonstrated on a wide range of synthetic and real-life signals.
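
    As a rough illustration of the mode-rejection step, the sketch below computes a standard single-channel DFA scaling exponent with NumPy; the paper's generic multivariate formulation is not reproduced here, and the function name, scale grid, and the white-noise cutoff near 0.5 are illustrative assumptions.

        import numpy as np

        def dfa_exponent(x, scales=None, order=1):
            """Single-channel DFA scaling exponent (illustrative sketch only).

            alpha ~ 0.5 indicates uncorrelated, white-noise-like behaviour;
            larger alpha indicates long-range correlated structure.
            """
            x = np.asarray(x, dtype=float)
            if scales is None:
                scales = np.unique(np.logspace(np.log10(16), np.log10(len(x) // 4), 12).astype(int))
            profile = np.cumsum(x - x.mean())           # integrated, mean-removed signal
            flucts = []
            for s in scales:
                n_seg = len(profile) // s
                segs = profile[:n_seg * s].reshape(n_seg, s)
                t = np.arange(s)
                rms = []
                for seg in segs:
                    coeffs = np.polyfit(t, seg, order)  # local polynomial trend
                    rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t)) ** 2)))
                flucts.append(np.mean(rms))
            # slope of log F(s) vs log s is the scaling exponent alpha
            alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
            return alpha

    Modes whose exponent stays close to 0.5 behave like uncorrelated noise and would be candidates for rejection before the PCA clean-up and reconstruction.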

    Adaptive noise suppression for low-S/N microseismic data based on ambient-noise-assisted multivariate empirical mode decomposition

    Get PDF
    Microseismic monitoring data may be seriously contaminated by complex and nonstationary interference noise produced by mechanical vibration, which significantly degrades data quality and the subsequent data-processing procedure. One challenge in microseismic data processing is separating weak seismic signals from varying noisy data. To address this issue, we propose an ambient-noise-assisted multivariate empirical mode decomposition (ANA-MEMD) method for adaptively suppressing noise in low signal-to-noise-ratio (S/N) microseismic data. In the proposed method, a new multi-channel record is produced by combining the noisy microseismic signal with preceding ambient noise. The multi-channel record is then decomposed using multivariate empirical mode decomposition (MEMD) into multivariate intrinsic mode functions (MIMFs). The MIMFs corresponding to the main ambient noise sources can then be identified by calculating their energy percentages and sorting them in descending order. Finally, the IMFs associated with strong interference noise, as well as high-frequency and low-frequency noise, are identified by their energy percentages and frequency ranges and filtered out. We investigate the feasibility and reliability of the proposed method using both synthetic and field data. The results demonstrate that the proposed method can mitigate the mode-mixing problem and clarify the main noise contributors by adding the ambient-noise-assisted channels, hence separating the microseismic signal and ambient noise effectively and enhancing the S/N of microseismic signals.
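
    The ranking-and-rejection stage described above can be sketched as follows; the MEMD decomposition itself is assumed to come from a third-party routine (not shown), and the array layout and function names are illustrative assumptions rather than the authors' implementation.

        import numpy as np

        def rank_imfs_by_energy(mimfs):
            """mimfs: array of shape (n_imfs, n_channels, n_samples), as produced
            by an external MEMD routine (assumed, not reproduced here).
            Returns the IMF indices sorted by descending energy percentage."""
            energy = np.sum(mimfs ** 2, axis=(1, 2))
            percent = 100.0 * energy / energy.sum()
            order = np.argsort(percent)[::-1]           # descending energy percentage
            return order, percent

        def reconstruct_without(mimfs, rejected):
            """Sum the retained IMFs to form the denoised multichannel record.
            A full criterion would also inspect each IMF's dominant frequency
            range, per the description above."""
            keep = [i for i in range(mimfs.shape[0]) if i not in set(rejected)]
            return mimfs[keep].sum(axis=0)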

    Artifact Removal Methods in EEG Recordings: A Review

    Get PDF
    To obtain a correct analysis of electroencephalogram (EEG) signals, non-physiological and physiological artifacts should be removed from them. This study aims to give an overview of the existing methodology for removing physiological artifacts, e.g., ocular, cardiac, and muscle artifacts. The datasets, simulation platforms, and performance measures of artifact removal methods in previous related research are summarized. The advantages and disadvantages of each technique are discussed, including regression methods, filtering methods, blind source separation (BSS), wavelet transform (WT), empirical mode decomposition (EMD), singular spectrum analysis (SSA), and independent vector analysis (IVA). The applications of hybrid approaches are also presented, including the discrete wavelet transform - adaptive filtering method (DWT-AFM), DWT-BSS, EMD-BSS, singular spectrum analysis - adaptive noise canceler (SSA-ANC), SSA-BSS, and EMD-IVA. Finally, a comparative analysis of these existing methods is provided based on their performance and merits. The results show that hybrid methods can remove artifacts more effectively than individual methods.
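
    As a concrete, deliberately simplified example of the wavelet-transform family surveyed here, the sketch below shrinks large detail coefficients of a single EEG channel with PyWavelets; the wavelet, decomposition level, and threshold rule are illustrative assumptions and do not correspond to any specific reviewed method.

        import numpy as np
        import pywt

        def wavelet_artifact_suppression(eeg_channel, wavelet="db4", level=5, k=3.0):
            """Toy illustration of wavelet-thresholding artifact suppression:
            detail coefficients exceeding a robust threshold are shrunk, then
            the channel is reconstructed."""
            coeffs = pywt.wavedec(eeg_channel, wavelet, level=level)
            cleaned = [coeffs[0]]                        # keep the approximation band
            for d in coeffs[1:]:
                sigma = np.median(np.abs(d)) / 0.6745    # robust noise-scale estimate
                cleaned.append(pywt.threshold(d, k * sigma, mode="soft"))
            return pywt.waverec(cleaned, wavelet)[: len(eeg_channel)]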

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Full text link
    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. (232 pages)
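
    For readers who want a feel for the tensor train (TT) format discussed above, the following NumPy sketch implements a bare-bones TT-SVD that factors a dense tensor into 3-way cores by successive truncated SVDs; the truncation rule and function name are illustrative assumptions, and none of the monograph's large-scale optimizations are reflected here.

        import numpy as np

        def tt_svd(tensor, rel_eps=1e-10):
            """Basic TT-SVD sketch: factor a dense tensor into a list of 3-way
            TT cores G_k of shape (r_{k-1}, n_k, r_k)."""
            shape = tensor.shape
            d = len(shape)
            cores, rank = [], 1
            mat = tensor.reshape(rank * shape[0], -1)
            for k in range(d - 1):
                u, s, vt = np.linalg.svd(mat, full_matrices=False)
                # truncate small singular values (relative threshold)
                r_new = max(1, int(np.sum(s > rel_eps * s[0])))
                cores.append(u[:, :r_new].reshape(rank, shape[k], r_new))
                mat = (np.diag(s[:r_new]) @ vt[:r_new]).reshape(r_new * shape[k + 1], -1)
                rank = r_new
            cores.append(mat.reshape(rank, shape[-1], 1))
            return cores

    Contracting the cores in order recovers the tensor up to the truncation error, which is what allows downstream algorithms to operate on the small cores instead of the full array.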

    A systematic review on artifact removal and classification techniques for enhanced MEG-based BCI systems

    Get PDF
    Patients with neurological diseases may be completely paralyzed and unable to move, yet still able to think; their brain activity is then the only means by which they can interact with their environment. Brain-Computer Interface (BCI) research attempts to create tools that support subjects with such disabilities, and it has expanded rapidly over the past few decades as a result of the interest in creating a new kind of human-to-machine communication. Because magnetoencephalography (MEG) offers better spatial and temporal resolution than other approaches, it is used to measure brain activity non-invasively. The recorded signal includes signals related to brain activity as well as noise and artifacts from numerous sources, and MEG can have a low signal-to-noise ratio because the magnetic fields generated by cortical activity are small compared to artifacts and ambient noise. By using appropriate techniques for noise and artifact detection and removal, the signal-to-noise ratio can be increased. This article analyses various artifact removal methods as well as classification strategies, and it also studies the influence of deep learning models on BCI systems. Furthermore, the various challenges in collecting and analyzing MEG signals, as well as possible research directions in MEG-based BCI, are examined.
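
    To make the classification side of such a pipeline concrete, the sketch below extracts band-power features from epoched MEG trials and feeds them to a linear discriminant classifier; the frequency bands, feature layout, and classifier choice are generic BCI conventions assumed for illustration, not prescriptions from this review.

        import numpy as np
        from scipy.signal import welch
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def band_power_features(trials, fs, bands=((8, 12), (13, 30))):
            """trials: array (n_trials, n_channels, n_samples). Returns the mean
            spectral power per channel and band, a common MEG/EEG BCI feature."""
            feats = []
            for trial in trials:
                f, pxx = welch(trial, fs=fs, nperseg=min(256, trial.shape[-1]), axis=-1)
                feats.append([pxx[:, (f >= lo) & (f <= hi)].mean(axis=-1) for lo, hi in bands])
            return np.array(feats).reshape(len(trials), -1)

        # Hypothetical usage with epoched trials X_train and task labels y_train:
        # clf = LinearDiscriminantAnalysis().fit(band_power_features(X_train, fs=600), y_train)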

    On the Dimensionality and Utility of Convolutional Autoencoder’s Latent Space Trained with Topology-Preserving Spectral EEG Head-Maps

    Get PDF
    Electroencephalography (EEG) signals can be analyzed in the temporal, spatial, or frequency domains. Noise and artifacts introduced during the data acquisition phase contaminate these signals and complicate their analysis. Techniques such as Independent Component Analysis (ICA) require human intervention to remove noise and artifacts, whereas autoencoders have automated artifact detection and removal by representing inputs in a lower-dimensional latent space. However, little research is devoted to understanding the minimum dimension of such a latent space that still allows meaningful input reconstruction. Here, person-specific convolutional autoencoders are designed with latent spaces of varying size. An overlapping sliding-window technique is employed to segment the signals into windows of varying length, and five topographic head-maps are formed in the frequency domain for each window. The latent space of the autoencoders is assessed in terms of input reconstruction capacity and classification utility. Findings indicate that a latent space as small as 25% of the size of the topographic maps suffices to reach maximum reconstruction capacity and classification accuracy, provided the window length is at least 1 s with a shift of 125 ms at a 128 Hz sampling rate. This research contributes to the body of knowledge with an architectural pipeline for eliminating redundant EEG data while preserving relevant features with deep autoencoders.
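
    A minimal PyTorch version of such a person-specific convolutional autoencoder might look as follows; the 32x32 map size, five input maps, and layer widths are illustrative assumptions, with the latent dimension exposed as the knob the study varies.

        import torch
        import torch.nn as nn

        class ConvAutoencoder(nn.Module):
            """Illustrative convolutional autoencoder for stacked topographic
            head-maps; the latent dimension is the experimental parameter."""
            def __init__(self, latent_dim, n_maps=5, map_size=32):
                super().__init__()
                flat = 32 * (map_size // 4) ** 2
                self.encoder = nn.Sequential(
                    nn.Conv2d(n_maps, 16, kernel_size=3, stride=2, padding=1),  # 32 -> 16
                    nn.ReLU(),
                    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),      # 16 -> 8
                    nn.ReLU(),
                    nn.Flatten(),
                    nn.Linear(flat, latent_dim),
                )
                self.decoder = nn.Sequential(
                    nn.Linear(latent_dim, flat),
                    nn.Unflatten(1, (32, map_size // 4, map_size // 4)),
                    nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2, padding=1, output_padding=1),
                    nn.ReLU(),
                    nn.ConvTranspose2d(16, n_maps, kernel_size=3, stride=2, padding=1, output_padding=1),
                )

            def forward(self, x):
                z = self.encoder(x)          # bottleneck representation
                return self.decoder(z), z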

    Information-Theory-Based High-Energy Photon Imaging

    Get PDF

    The SURE-LET approach to image denoising

    Get PDF
    Denoising is an essential step prior to any higher-level image-processing task such as segmentation or object tracking, because the undesirable corruption by noise is inherent to any physical acquisition device. When the measurements are performed by photosensors, one usually distinguishes between two main regimes: in the first scenario, the measured intensities are sufficiently high and the noise is assumed to be signal-independent. In the second scenario, only a few photons are detected, which leads to a strong signal-dependent degradation. When the noise is considered signal-independent, it is often modeled as an additive independent (typically Gaussian) random variable, whereas, otherwise, the measurements are commonly assumed to follow independent Poisson laws, whose underlying intensities are the unknown noise-free measures. We first consider the reduction of additive white Gaussian noise (AWGN). Contrary to most existing denoising algorithms, our approach does not require an explicit prior statistical modeling of the unknown data. Our driving principle is the minimization of a purely data-adaptive unbiased estimate of the mean-squared error (MSE) between the processed and the noise-free data. In the AWGN case, such an MSE estimate was first proposed by Stein and is known as "Stein's unbiased risk estimate" (SURE). We further develop the original SURE theory and propose a general methodology for fast and efficient multidimensional image denoising, which we call the SURE-LET approach. While SURE allows the quantitative monitoring of the denoising quality, the flexibility and the low computational complexity of our approach are ensured by a linear parameterization of the denoising process, expressed as a linear expansion of thresholds (LET). We propose several pointwise, multivariate, and multichannel thresholding functions applied to arbitrary (in particular, redundant) linear transformations of the input data, with a special focus on multiscale signal representations. We then transpose the SURE-LET approach to the estimation of Poisson intensities degraded by AWGN. The signal-dependent specificity of the Poisson statistics leads to the derivation of a new unbiased MSE estimate, which we call "Poisson's unbiased risk estimate" (PURE) and which requires more adaptive transform-domain thresholding rules. In a general PURE-LET framework, we first devise a fast interscale thresholding method restricted to the use of the (unnormalized) Haar wavelet transform. We then lift this restriction and show how the PURE-LET strategy can be used to design and optimize a wide class of nonlinear processing applied in an arbitrary (in particular, redundant) transform domain. We finally apply some of the proposed denoising algorithms to real multidimensional fluorescence microscopy images. Such an in vivo imaging modality often operates under low-illumination conditions and short exposure times; consequently, the random fluctuations of the measured fluorophore radiations are well described by a Poisson process degraded (or not) by AWGN. We experimentally validate this statistical measurement model and assess the performance of the PURE-LET algorithms in comparison with some state-of-the-art denoising methods. Our solution turns out to be very competitive both qualitatively and computationally, allowing for fast and efficient denoising of the huge volumes of data that are nowadays routinely produced in biomedical imaging.
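
    The core mechanics of the SURE-LET idea for the AWGN case can be sketched compactly: in each wavelet subband the denoiser is a linear combination of two elementary pointwise functions, and the weights that minimize Stein's unbiased risk estimate are obtained from a small linear system. The wavelet, the two elementary functions, and the 3-sigma parameter below are illustrative assumptions; the interscale, multichannel, and PURE extensions of the thesis are not reproduced.

        import numpy as np
        import pywt

        def sure_let_denoise(noisy, sigma, wavelet="sym8", level=4):
            """Pointwise SURE-LET sketch for a 2-D image with known Gaussian
            noise level sigma: per subband, fit a1*F1 + a2*F2 by minimizing
            SURE, which reduces to a 2x2 linear system in (a1, a2)."""
            coeffs = pywt.wavedec2(noisy, wavelet, level=level)
            out = [coeffs[0]]                                   # keep the approximation
            T2 = (3.0 * sigma) ** 2
            for detail in coeffs[1:]:
                new_detail = []
                for d in detail:
                    y = d.ravel()
                    # elementary functions: identity and a smooth shrinkage term
                    F = np.stack([y, y * np.exp(-y**2 / (2 * T2))], axis=1)
                    # divergence (sum of pointwise derivatives) of each function
                    div = np.array([y.size,
                                    np.sum(np.exp(-y**2 / (2 * T2)) * (1 - y**2 / T2))])
                    a = np.linalg.solve(F.T @ F, F.T @ y - sigma**2 * div)
                    new_detail.append((F @ a).reshape(d.shape))
                out.append(tuple(new_detail))
            rec = pywt.waverec2(out, wavelet)
            return rec[: noisy.shape[0], : noisy.shape[1]]

    The quadratic dependence of SURE on the weights is what keeps the optimization fast: no iterative search is needed, only one small linear solve per subband.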