
    Turning Tangent Empirical Mode Decomposition: A Framework for Mono- and Multivariate Signals.

    A novel Empirical Mode Decomposition (EMD) algorithm, called 2T-EMD, for both mono- and multivariate signals is proposed in this paper. It differs from other approaches in its computational lightness and algorithmic simplicity. The method is essentially based on a redefinition of the signal mean envelope, computed from new characteristic points, which makes it possible to decompose multivariate signals without any projection. The scope of application of the novel algorithm is specified, and a comparison of the 2T-EMD technique with classical methods is performed on various simulated mono- and multivariate signals. The monovariate behaviour of the proposed method on noisy signals is then validated by decomposing a fractional Gaussian noise, and an application to real-life EEG data is finally presented.
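    For orientation only, below is a minimal sketch of the classical envelope-based sifting step that 2T-EMD departs from; the 2T-EMD mean envelope built from the paper's new characteristic points is not reproduced here, and all function names are illustrative.

    ```python
    # Classical EMD baseline: one IMF extracted by repeatedly subtracting the
    # mean of the upper/lower spline envelopes (2T-EMD redefines this mean).
    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.signal import argrelextrema

    def envelope_mean(x):
        """Mean of cubic-spline envelopes through local maxima and minima."""
        t = np.arange(len(x))
        maxima = argrelextrema(x, np.greater)[0]
        minima = argrelextrema(x, np.less)[0]
        if len(maxima) < 2 or len(minima) < 2:
            return None                      # too few extrema to form envelopes
        upper = CubicSpline(maxima, x[maxima])(t)
        lower = CubicSpline(minima, x[minima])(t)
        return 0.5 * (upper + lower)

    def sift_imf(x, n_sift=10):
        """Extract one intrinsic mode function with a fixed sifting count."""
        h = np.asarray(x, dtype=float).copy()
        for _ in range(n_sift):
            m = envelope_mean(h)
            if m is None:
                break
            h = h - m
        return h
    ```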

    Multivariate Signal Denoising Based on Generic Multivariate Detrended Fluctuation Analysis

    We propose a generic multivariate extension of detrended fluctuation analysis (DFA) that incorporates interchannel dependencies within input multichannel data to perform its long-range correlation analysis. We then demonstrate the utility of the proposed method on the multivariate signal denoising problem. Specifically, our denoising approach first obtains a data-driven multiscale signal representation via the multivariate variational mode decomposition (MVMD) method. The proposed multivariate extension of DFA (MDFA) is then used to reject the predominantly noisy modes based on their randomness scores. The denoised signal is reconstructed from the remaining multichannel modes, after removal of residual noise traces using principal component analysis (PCA). The utility of our denoising method is demonstrated on a wide range of synthetic and real-life signals.
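    As a reference point, here is a sketch of standard univariate, first-order DFA, whose scaling exponent plays the role of the randomness score mentioned above; the multivariate MDFA, MVMD and PCA stages of the proposed pipeline are not reproduced, and the names are illustrative.

    ```python
    # Standard univariate DFA sketch: the scaling exponent of the detrended
    # fluctuation function (alpha near 0.5 suggests a noise-dominated mode).
    import numpy as np

    def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
        y = np.cumsum(x - np.mean(x))                # integrated profile
        fluctuations = []
        for s in scales:
            n_seg = len(y) // s
            rms = []
            for i in range(n_seg):
                seg = y[i * s:(i + 1) * s]
                t = np.arange(s)
                trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear fit
                rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
            fluctuations.append(np.mean(rms))
        # slope of log F(s) versus log s estimates the scaling exponent
        return np.polyfit(np.log(scales), np.log(fluctuations), 1)[0]
    ```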

    Employing data fusion & diversity in the applications of adaptive signal processing

    The paradigm of adaptive signal processing is a simple yet powerful method for the class of system identification problems. Classical approaches consider standard one-dimensional signals, whereby the model can be formulated in a flat-view matrix/vector framework. Nevertheless, the rapidly increasing availability of large-scale multisensor/multinode measurement technology has rendered the traditional way of representing data insufficient. To this end, the author (referred to hereafter as `we', `us', and `our', in recognition of the supporting contributors: the supervisor, colleagues, and overseas academics involved in specific parts of this research) has applied the adaptive filtering framework to problems that employ the techniques of data diversity and fusion, which include quaternions, tensors and graphs. At first glance, all these structures share one common important feature: invertible isomorphism. In other words, they are algebraically one-to-one related in real vector space. Furthermore, our continuing course of research affords a natural progression through these three data types. Firstly, we propose novel quaternion-valued adaptive algorithms named the n-moment widely linear quaternion least mean squares (WL-QLMS) and the c-moment WL-LMS. Both are as fast as the recursive-least-squares method but more numerically robust, thanks to the absence of matrix inversion. Secondly, the adaptive filtering method is applied to a more complex task: online tensor dictionary learning, named online multilinear dictionary learning (OMDL). The OMDL is partly inspired by the derivation of the c-moment WL-LMS due to its parsimonious formulae. In addition, sequential higher-order compressed sensing (HO-CS) is developed to couple with the OMDL so as to maximally utilize the learned dictionary for the best possible compression. Lastly, we consider graph random processes, which are in fact multivariate random processes with a spatiotemporal (or vertex-time) relationship. As with the tensor dictionary, one of the main challenges in graph signal processing is the sparsity constraint on the graph topology, a challenging issue for online methods. We introduce a novel splitting gradient projection into this adaptive graph filtering framework to achieve a sparse topology. Extensive experiments support the analysis of all the algorithms proposed in this thesis, as well as pointing out the potential, limitations and as-yet-unaddressed issues in these research endeavours.
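    For context, a minimal real-valued LMS recursion is sketched below as the baseline that the thesis generalizes; the quaternion-valued, widely linear WL-QLMS and c-moment WL-LMS updates themselves involve quaternion algebra and augmented statistics not shown here.

    ```python
    # Baseline real-valued LMS adaptive filter (the thesis develops quaternion
    # widely linear generalizations of this stochastic-gradient recursion).
    import numpy as np

    def lms(x, d, order=8, mu=0.01):
        """x: input, d: desired response; returns output and error signals."""
        w = np.zeros(order)
        y = np.zeros(len(x))
        e = np.zeros(len(x))
        for n in range(order, len(x)):
            u = x[n - order:n][::-1]        # regressor, most recent sample first
            y[n] = w @ u
            e[n] = d[n] - y[n]
            w = w + mu * e[n] * u           # gradient-descent weight update
        return y, e
    ```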

    Data-driven time-frequency analysis of multivariate data

    Empirical Mode Decomposition (EMD) is a data-driven method for the decomposition and time-frequency analysis of real-world nonstationary signals. Its main advantages over other time-frequency methods are its locality, data-driven nature, multiresolution-based decomposition, higher time-frequency resolution, and its ability to capture oscillations of any type (nonharmonic signals). These properties have made EMD a viable tool for real-world nonstationary data analysis. Recent advances in sensor and data acquisition technologies have brought to light new classes of signals containing typically several data channels. Currently, such signals are almost invariably processed channel-wise, which is suboptimal. It is, therefore, imperative to design multivariate extensions of the existing nonlinear and nonstationary analysis algorithms, as they are expected to give more insight into the dynamics and the interdependence between multiple channels of such signals. To this end, this thesis presents multivariate extensions of the empirical mode decomposition algorithm and illustrates their advantages with regard to multivariate nonstationary data analysis. Some important properties of such extensions are also explored, including their ability to exhibit wavelet-like dyadic filter bank structures for white Gaussian noise (WGN), and their capacity to align similar oscillatory modes from multiple data channels. Owing to the generality of the proposed methods, an improved multivariate EMD-based algorithm is introduced which solves some inherent problems in the original EMD algorithm. Finally, to demonstrate the potential of the proposed methods, simulations on the fusion of multiple real-world signals (wind, images and inertial body motion data) support the analysis.
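    As an illustration of how interchannel structure can be handled without channel-wise processing, the sketch below approximates a multivariate mean envelope by projecting the multichannel signal along random directions, in the spirit of the projection-based multivariate EMD described in the literature; it is a simplified illustration, not the thesis algorithms.

    ```python
    # Simplified projection-based multivariate mean envelope: interpolate all
    # channels at the extrema of random 1-D projections and average.
    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.signal import argrelextrema

    def multivariate_mean_envelope(X, n_dirs=16, seed=0):
        """X: array of shape (samples, channels)."""
        rng = np.random.default_rng(seed)
        n, c = X.shape
        t = np.arange(n)
        acc = np.zeros_like(X, dtype=float)
        used = 0
        for _ in range(n_dirs):
            v = rng.normal(size=c)
            v /= np.linalg.norm(v)                 # direction on the unit hypersphere
            idx = argrelextrema(X @ v, np.greater)[0]
            if len(idx) < 2:
                continue
            acc += CubicSpline(idx, X[idx, :])(t)  # envelope of every channel
            used += 1
        return acc / max(used, 1)
    ```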

    Quaternion singular spectrum analysis of electroencephalogram with application in sleep analysis

    A novel quaternion-valued singular spectrum analysis (SSA) is introduced for multichannel analysis of electroencephalogram (EEG). The analysis of EEG typically requires the decomposition of data channels into meaningful components despite the notoriously noisy nature of EEG, which is precisely the aim of SSA. However, the singular value decomposition involved in SSA implies strict orthogonality of the decomposed components, which may not accurately reflect sources that exhibit similar neural activities. To allow for the modelling of such co-channel coupling, the quaternion domain is considered for the first time to formulate the SSA using augmented statistics. As an application, we demonstrate how the augmented quaternion-valued SSA (AQSSA) can be used to extract the sources, even at a signal-to-noise ratio as low as -10 dB. To illustrate the usefulness of our quaternion-valued SSA in a rehabilitation setting, we employ the proposed SSA for sleep analysis to extract statistical descriptors for five-stage classification (Awake, N1, N2, N3 and REM). The level of agreement using these descriptors was 74%, as quantified by Cohen's kappa.
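    For reference, a sketch of ordinary real-valued SSA (embedding, SVD, grouping, diagonal averaging) is given below; the augmented quaternion-valued AQSSA of the paper, which captures co-channel coupling, is not reproduced, and the parameter choices are illustrative.

    ```python
    # Basic real-valued SSA: build the trajectory matrix, keep the leading
    # singular components, and Hankelize back to a one-dimensional series.
    import numpy as np

    def ssa_reconstruct(x, window=30, rank=3):
        n = len(x)
        k = n - window + 1
        X = np.column_stack([x[i:i + window] for i in range(k)])  # trajectory matrix
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_low = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]           # truncated SVD
        rec = np.zeros(n)
        counts = np.zeros(n)
        for i in range(window):                                   # diagonal averaging
            for j in range(k):
                rec[i + j] += X_low[i, j]
                counts[i + j] += 1
        return rec / counts
    ```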

    Einstein-Podolsky-Rosen-Bohm experiments: a discrete data driven approach

    We take the point of view that building a one-way bridge from experimental data to mathematical models, instead of the other way around, avoids running into controversies resulting from attaching meaning to the symbols used in the latter. In particular, we show that adopting this view offers new perspectives for constructing mathematical models for, and interpreting the results of, Einstein-Podolsky-Rosen-Bohm experiments. We first prove new Bell-type inequalities constraining the values of the four correlations obtained by performing Einstein-Podolsky-Rosen-Bohm experiments under four different conditions. The proof is "model-free" in the sense that it does not refer to any mathematical model imagined to have produced the data. The constraints depend only on the number of quadruples obtained by reshuffling the data in the four data sets without changing the values of the correlations. These new inequalities reduce to model-free versions of the well-known Bell-type inequalities if the maximum fraction of quadruples is equal to one. Being model-free, a violation of the latter by experimental data implies that not all the data in the four data sets can be reshuffled to form quadruples; it also implies only that any mathematical model assumed to have produced this data does not apply. Starting from the data obtained by performing Einstein-Podolsky-Rosen-Bohm experiments, we construct, instead of postulate, mathematical models that describe the main features of these data. The mathematical framework of plausible reasoning is applied to reproducible and robust data, yielding, without using any concept of quantum theory, the expression of the correlation for a system of two spin-1/2 objects in the singlet state. (truncated)
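    For orientation, the familiar model-based Bell-CHSH bound on the four correlations is recalled below; the paper's model-free inequalities generalize this bound in terms of the fraction of data that can be reshuffled into quadruples, and reduce to this form only when that fraction equals one.

    ```latex
    % Standard Bell-CHSH constraint on the four correlations E(a_i, b_j),
    % shown for reference; the model-free inequalities generalize it.
    \[
      \bigl| E(a_1, b_1) - E(a_1, b_2) \bigr|
      + \bigl| E(a_2, b_1) + E(a_2, b_2) \bigr| \;\le\; 2 .
    \]
    ```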

    Multivariate multiscale complexity analysis

    Established dynamical complexity analysis measures operate at a single scale and thus fail to quantify inherent long-range correlations in real-world data, a key feature of complex systems. They are designed for scalar time series; however, multivariate observations are common in modern real-world scenarios, and their simultaneous analysis is a prerequisite for understanding the underlying signal generating model. To that end, this thesis first introduces a notion of multivariate sample entropy and thus extends current univariate complexity analysis to the multivariate case. The proposed multivariate multiscale entropy (MMSE) algorithm is shown to be capable of addressing the dynamical complexity of such data directly in the domain where they reside, and at multiple temporal scales, thus making full use of all the available information, both within and across the multiple data channels. Next, the intrinsic multivariate scales of the input data are generated adaptively via the multivariate empirical mode decomposition (MEMD) algorithm. This allows both for generating comparable scales from multiple data channels, and for temporal scales of the same length as the input signal, thus removing the critical limitation on input data length in current complexity analysis methods. The resulting MEMD-enhanced MMSE method is also shown to be suitable for non-stationary multivariate data analysis owing to the data-driven nature of the MEMD algorithm, as non-stationarity is the biggest obstacle to meaningful complexity analysis. This thesis presents a significant step forward in this area by introducing robust and physically meaningful complexity estimates of real-world systems, which are typically multivariate, finite in duration, and noisy and heterogeneous in nature. This also allows us to gain a better understanding of the complexity of the underlying multivariate model, with more degrees of freedom and rigour in the analysis. Simulations on both synthetic and real-world multivariate data sets support the analysis.
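    As a baseline for the multivariate estimators described above, the sketch below implements standard univariate multiscale entropy (coarse-graining followed by sample entropy); the multivariate sample entropy and MEMD-generated scales of the thesis are not reproduced, and the tolerance settings are conventional choices, not the thesis values.

    ```python
    # Univariate multiscale entropy baseline: coarse-grain the series at each
    # scale, then compute sample entropy of the coarse-grained sequence.
    import numpy as np

    def sample_entropy(x, m=2, r=0.2):
        def match_count(mm):
            templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
            count = 0
            for i in range(len(templates)):
                d = np.max(np.abs(templates - templates[i]), axis=1)
                count += np.sum(d <= r) - 1          # exclude the self-match
            return count
        b, a = match_count(m), match_count(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else np.inf

    def multiscale_entropy(x, max_scale=5, m=2, r_factor=0.15):
        r = r_factor * np.std(x)                     # tolerance from the original series
        values = []
        for tau in range(1, max_scale + 1):
            n = (len(x) // tau) * tau
            coarse = np.asarray(x[:n]).reshape(-1, tau).mean(axis=1)
            values.append(sample_entropy(coarse, m=m, r=r))
        return values
    ```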

    Latent variable regression and applications to planetary seismic instrumentation

    The work presented in this thesis is framed by the concept of latent variables, a modern data analytics approach. A latent variable represents an extracted component of a dataset which is not directly measured. The concept is first applied to combat the problem of ill-posed regression through the promising method of partial least squares (PLS). In this context the latent variables within a data matrix are extracted through an iterative algorithm based on cross-covariance as an optimisation criterion. This work first extends the PLS algorithm, using adaptive and recursive techniques, for online, non-stationary data applications. The standard PLS algorithm is further generalised for complex-, quaternion- and tensor-valued data. In doing so it is shown that the multidimensional algebras facilitate physically meaningful representations, demonstrated through smart-grid frequency estimation and image-classification tasks. The second part of the thesis uses this knowledge to inform a performance analysis of the MEMS microseismometer implemented for the InSight mission to Mars. This is given in terms of the sensor's intrinsic self-noise, the estimation of which is achieved from experimental data with a colocated instrument. The standard coherence and proposed delta noise estimators are analysed with respect to practical issues. The implementation of algorithms for the alignment, calibration and post-processing of the data then enabled a definitive self-noise estimate, validated with data acquired in an ultra-quiet, deep-space environment. A method for the decorrelation of the microseismometer's output from its thermal response is proposed. To do so, a novel sensor fusion approach based on the Kalman filter is developed for a full-band transfer-function correction, in contrast to the traditional ill-posed frequency-division method. This algorithm was applied to experimental data, which determined the thermal model coefficients while validating the sensor's performance at tidal frequencies (1E-5 Hz) and in extreme environments at -65 °C. This thesis therefore provides a definitive view of the latent variables perspective, achieved through the general algorithms developed for regression with multidimensional data and the bespoke application to seismic instrumentation.
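    To make the cross-covariance criterion concrete, the sketch below extracts a single PLS latent variable for a univariate response (essentially one NIPALS step); the adaptive, recursive and complex/quaternion/tensor-valued generalizations, and the Kalman-filter thermal correction, are not reproduced here.

    ```python
    # One-component PLS sketch: the weight vector is the cross-covariance
    # direction between the centred inputs and the response.
    import numpy as np

    def pls_one_component(X, y):
        """X: (samples, features), y: (samples,). Returns scores, weights,
        loadings and the regression coefficient on the latent score."""
        Xc = X - X.mean(axis=0)
        yc = y - y.mean()
        w = Xc.T @ yc                      # cross-covariance direction
        w /= np.linalg.norm(w)
        t = Xc @ w                         # latent-variable scores
        p = Xc.T @ t / (t @ t)             # X loadings
        b = (yc @ t) / (t @ t)             # response coefficient
        return t, w, p, b

    # Further latent variables follow by deflating X (and y) with t and p,
    # then repeating the same cross-covariance step.
    ```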

    Essays on estimation of non-linear state-space models

    The first chapter of my thesis (co-authored with David N. DeJong, Jean-Francois Richard and Roman Liesenfeld) develops a numerical procedure that facilitates efficient likelihood evaluation and filtering in applications involving non-linear and non-Gaussian state-space models. These tasks require the calculation of integrals over unobservable state variables. We introduce an efficient procedure for calculating such integrals: the EIS-Filter. The procedure approximates the necessary integrals using continuous approximations of target densities. Construction is achieved via efficient importance sampling, and the approximating densities are adapted to fully incorporate current information. Extensive comparisons to the standard particle filter are presented using four diverse examples. The second chapter illustrates the use of copulas to create low-dimensional multivariate importance sampling densities. Copulas enable the problem of multivariate density approximation to be split into a sequence of simpler univariate density approximation problems for the marginals, with the dependence accounted for by the copula parameter(s). This separation of the marginals from their dependence allows maximum flexibility in the selection of marginal densities. Combined with the EIS method for refining importance sampling densities, copula densities offer substantial flexibility in creating multivariate importance samplers. In a simulation exercise, we compare the accuracy of the copula-based EIS-Filter to the particle filter in evaluating the likelihood function and in obtaining filtered estimates of the latent variables. Reliability of growth forecasts depends critically on being able to anticipate or recognize shifts of the economy from recessions to expansions or vice versa. It is widely accepted that the processes governing these shifts can be highly non-linear. In the third chapter (co-authored with David N. DeJong, Jean-Francois Richard and Roman Liesenfeld), we study regime shifts using a non-linear model of GDP growth. The model characterizes growth as following non-linear trajectories that fluctuate stochastically between alternating periods of general acceleration and deceleration. We also introduce a non-stochastic, rule-based recession-dating method to forecast likely dates for the start of a recession and its length. Results indicate that the model is capable of exhibiting substantially non-linear behavior in its regime-specific latent process and is hence able to anticipate and detect regime shifts accurately, improving the quality of the growth forecasts obtained from it.
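    For comparison, the standard bootstrap particle filter that the EIS-Filter is benchmarked against can be sketched as below for a generic state-space model; the transition and measurement densities are user-supplied placeholders, not the models estimated in these essays.

    ```python
    # Bootstrap particle filter sketch for log-likelihood evaluation in a
    # nonlinear, non-Gaussian state-space model (the comparison baseline).
    import numpy as np

    def bootstrap_pf_loglik(y, transition_sample, measurement_logpdf,
                            n_particles=1000, seed=0):
        """transition_sample(x, rng) draws x_t given x_{t-1} (vectorized);
        measurement_logpdf(y_t, x) returns log p(y_t | x_t) per particle."""
        rng = np.random.default_rng(seed)
        x = rng.normal(size=n_particles)          # placeholder initial state draw
        loglik = 0.0
        for y_t in y:
            x = transition_sample(x, rng)         # propagate particles
            logw = measurement_logpdf(y_t, x)     # weight by the measurement density
            m = np.max(logw)
            w = np.exp(logw - m)
            loglik += m + np.log(np.mean(w))      # incremental likelihood term
            w /= w.sum()
            idx = rng.choice(n_particles, size=n_particles, p=w)
            x = x[idx]                            # multinomial resampling
        return loglik
    ```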