    Time-varying nonlinear causality detection using regularized orthogonal least squares and multi-wavelets with applications to EEG

    A new transient Granger causality detection method is proposed based on a time-varying parametric modelling framework, and is applied to real EEG signals to reveal the causal information flow during motor imagery (MI) tasks. The time-varying parametric modelling approach employs a nonlinear autoregressive with exogenous input (NARX) model, whose parameters are approximated by a set of multiwavelet basis functions. A regularized orthogonal least squares (ROLS) algorithm is then used to produce a parsimonious or sparse regression model and to estimate the associated model parameters. The time-varying Granger causality between nonstationary signals can be detected accurately by exploiting both the good approximation properties of multiwavelets and the good generalization performance of ROLS in the presence of high-level noise. Two simulation examples demonstrate the effectiveness of the proposed method for linear and nonlinear causality detection, respectively. The method is then applied to real EEG signals from MI tasks, where it successfully reveals the transient causal information flow between various sensorimotor-related channels over the time course of the whole reaction process. Experimental results from these case studies confirm the applicability of the proposed scheme and show its utility for understanding the associated neural mechanisms, with potential significance for developing MI-based brain-computer interface (BCI) systems.
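    For intuition, the core idea of such time-varying parametric modelling is to expand each time-varying coefficient onto a fixed set of basis functions, which turns the nonstationary estimation problem into a single time-invariant regression. The sketch below illustrates this in Python, with Gaussian bumps standing in for the multiwavelet basis and plain ridge regression standing in for ROLS; the function names and the first-order linear model structure are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def tv_coefficients(y, x, order=2, n_basis=8, lam=1e-2):
    """Expand each time-varying coefficient onto a fixed basis and solve
    one time-invariant regularized least-squares problem.

    Gaussian bumps stand in for the paper's multiwavelets, and ridge
    regression stands in for the ROLS term selection; both are
    simplifying assumptions made for illustration."""
    N = len(y)
    t = np.linspace(0.0, 1.0, N)
    centers = np.linspace(0.0, 1.0, n_basis)
    # basis matrix B[n, j] = j-th basis function evaluated at time n
    B = np.exp(-0.5 * ((t[:, None] - centers[None, :]) * n_basis / 1.5) ** 2)

    # expanded regression matrix: every lagged regressor is multiplied
    # by every basis function, so basis weights become constant unknowns
    cols, labels = [], []
    for name, s in (("y", y), ("x", x)):
        for lag in range(1, order + 1):
            lagged = np.zeros(N)
            lagged[lag:] = s[:-lag]
            for j in range(n_basis):
                cols.append(lagged * B[:, j])
                labels.append((name, lag, j))
    Phi = np.column_stack(cols)

    # ridge-regularized least squares over the expanded coefficients
    theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

    # reassemble the time course of each time-varying coefficient
    coef = {(name, lag): np.zeros(N) for name, lag, _ in labels}
    for (name, lag, j), th in zip(labels, theta):
        coef[(name, lag)] += th * B[:, j]
    return coef
```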

    A parametric time-frequency conditional Granger causality method using ultra-regularized orthogonal least squares and multiwavelets for dynamic connectivity analysis in EEGs

    Objective: This study proposes a new parametric TF-CGC (time-frequency conditional Granger causality) method for high-precision connectivity analysis over the time and frequency domains in multivariate, coupled, nonstationary systems, and applies it to source EEG signals to reveal dynamic interaction patterns in oscillatory neocortical sensorimotor networks. Methods: Geweke's spectral measure is combined with a TVARX (time-varying autoregressive with exogenous input) modelling approach, which uses a multiwavelet-based ultra-regularized orthogonal least squares (UROLS) algorithm aided by the adjustable prediction error sum of squares (APRESS) criterion, to obtain high-resolution time-varying CGC representations. The UROLS-APRESS algorithm, which adopts both the regularization technique and the ultra-least squares criterion to measure not only the signals themselves but also their weak derivatives, is a powerful new method for constructing time-varying models with good generalization performance, and can accurately track both smooth and fast-changing causalities. The generalized measurement based on CGC decomposition is able to eliminate indirect influences in multivariate systems. Results: The proposed method is validated on two simulations and then applied to source-level motor imagery (MI) EEGs, where the predicted distributions are recovered with high time-frequency precision, and the detected connectivity patterns of MI-EEGs are physiologically interpretable and yield new insights into the dynamic organization of oscillatory cortical networks. Conclusion: Experimental results confirm the effectiveness of the TF-CGC method in tracking rapidly varying causalities in EEG-based oscillatory networks. Significance: The novel TF-CGC method is expected to provide important information about the neural mechanisms of perception and cognition.
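    As background for the causality measure itself, the sketch below computes classical time-domain Granger causality from x to y as the log ratio of residual variances between a restricted and a full autoregressive model. The helper names are hypothetical, and this static bivariate form deliberately omits the time-frequency decomposition and the conditional (multivariate) corrections that the proposed TF-CGC method adds.

```python
import numpy as np

def _lagmat(s, order):
    """Rows are [s[t-1], ..., s[t-order]] for t = order .. len(s)-1."""
    N = len(s)
    return np.column_stack([s[order - k:N - k] for k in range(1, order + 1)])

def granger_xy(y, x, order=5):
    """Time-domain Granger causality x -> y: GC = ln(var_restricted / var_full).
    GC > 0 indicates that the past of x improves the prediction of y."""
    target = y[order:]
    restricted = _lagmat(y, order)                           # past of y only
    full = np.column_stack([restricted, _lagmat(x, order)])  # past of y and x
    var = lambda A: np.var(target - A @ np.linalg.lstsq(A, target, rcond=None)[0])
    return np.log(var(restricted) / var(full))
```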

    A multiple beta wavelet-based locally regularized ultraorthogonal forward regression algorithm for time-varying system identification with applications to EEG

    Time-varying (TV) nonlinear systems exist widely in engineering and science. Effective identification and modelling of TV systems is a challenging problem due to the nonstationarity and nonlinearity of the associated processes. In this paper, a novel parametric modelling algorithm is proposed to deal with this problem based on a TV nonlinear autoregressive with exogenous input (TV-NARX) model. A new class of multiple beta wavelet (MBW) basis functions is introduced to represent the TV coefficients of the TV-NARX model, enabling the tracking of both smooth trends and sharp changes in the system behaviour. To produce a parsimonious model structure, a locally regularized ultraorthogonal forward regression (LRUOFR) algorithm, aided by the adjustable prediction error sum of squares (APRESS) criterion, is investigated for sparse model term selection and parameter estimation. Simulation studies and a real application to EEG data show that the proposed MBW-LRUOFR algorithm can effectively capture the global and local features of nonstationary systems and obtain an optimal model, even for signals contaminated with severe coloured noise.
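    To make the basis construction concrete, the sketch below builds an over-complete dictionary of shifted and scaled beta density functions, a rough stand-in for the paper's multiple beta wavelet (MBW) family; the specific shape pairs and tiling scheme are illustrative assumptions. Asymmetric shape pairs (a != b) give atoms that rise and fall at different rates, which is what lets the expansion track both smooth trends and sharp one-sided changes.

```python
import numpy as np
from scipy.special import beta as beta_fn

def beta_basis(N, shapes=((2, 2), (2, 5), (5, 2), (3, 3)), scales=(1, 2, 4)):
    """Over-complete dictionary of shifted/scaled beta density atoms.

    Shape parameters must be integers >= 1 so the piecewise evaluation
    below stays finite outside each atom's support."""
    t = np.linspace(0.0, 1.0, N)
    atoms = []
    for a, b in shapes:
        for s in scales:
            for k in range(s):          # s atoms per scale, tiling [0, 1]
                u = t * s - k           # local coordinate on [0, 1]
                phi = np.where((u >= 0) & (u <= 1),
                               u**(a - 1) * (1 - u)**(b - 1) / beta_fn(a, b),
                               0.0)
                atoms.append(phi / (np.linalg.norm(phi) + 1e-12))
    return np.column_stack(atoms)       # N x n_atoms dictionary matrix
```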

    Boosting wavelet neural networks using evolutionary algorithms for short-term wind speed time series forecasting

    This paper addresses nonlinear time series modelling and prediction problems using a class of wavelet neural networks whose basic building block is a ridge-type function. The training of such a network is a nonlinear optimization problem. Evolutionary algorithms (EAs), including the genetic algorithm (GA) and particle swarm optimization (PSO), together with a new gradient-free algorithm called coordinate dictionary search optimization (CDSO), are used to train the network models. An example of real wind speed data modelling and prediction illustrates the performance of the proposed networks trained by these three optimization algorithms.
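    To illustrate the model family and the gradient-free training it calls for, here is a minimal sketch: a ridge-type wavelet network (a Mexican hat wavelet applied to affine projections of the input) fitted by a bare-bones particle swarm optimizer. The function names, the choice of mother wavelet, and all PSO hyperparameters are assumptions for illustration; CDSO, the paper's new coordinate dictionary search method, is not reproduced here.

```python
import numpy as np

def mexican_hat(u):
    return (1.0 - u**2) * np.exp(-0.5 * u**2)

def wnn_predict(params, X, n_units):
    """Ridge-type wavelet network: f(x) = sum_j w_j * psi(a_j . x + b_j)."""
    d = X.shape[1]
    p = params.reshape(n_units, d + 2)
    A, b, w = p[:, :d], p[:, d], p[:, d + 1]
    return mexican_hat(X @ A.T + b) @ w

def pso_train(X, y, n_units=5, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimization of all network weights
    (gradient-free, in the spirit of the paper's GA/PSO/CDSO trainers)."""
    rng = np.random.default_rng(seed)
    dim = n_units * (X.shape[1] + 2)
    pos = rng.normal(0.0, 1.0, (n_particles, dim))
    vel = np.zeros_like(pos)
    cost = lambda p: np.mean((wnn_predict(p, X, n_units) - y) ** 2)
    pbest, pbest_c = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[pbest_c.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos += vel
        c = np.array([cost(p) for p in pos])
        improved = c < pbest_c
        pbest[improved], pbest_c[improved] = pos[improved], c[improved]
        gbest = pbest[pbest_c.argmin()].copy()
    return gbest  # predictions: wnn_predict(gbest, X, n_units)
```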

    Neural activity inspired asymmetric basis function TV-NARX model for the identification of time-varying dynamic systems

    Inspired by unique neuronal activities, a new time-varying nonlinear autoregressive with exogenous input (TV-NARX) model is proposed for modelling nonstationary processes. The NARX nonlinear process mimics action potential initiation, and the time-varying parameters are approximated with a series of postsynaptic-current-like asymmetric basis functions that mimic the ion-channel dynamics of inter-neuron propagation. In the model, the time-varying parameters of the process terms are sparsely represented as the superposition of a series of asymmetric alpha basis functions in an over-complete frame. By combining the alpha basis functions with the model process terms, the identification of the TV-NARX model from observed inputs and outputs can be treated as the identification of an equivalent time-invariant system. The locally regularised orthogonal forward regression (LROFR) algorithm is then employed to detect the sparse model structure and estimate the associated coefficients. Excellent performance in both numerical studies and the modelling of real physiological signals shows that the TV-NARX model with asymmetric basis functions is more powerful and efficient than its symmetric counterparts in tracking smooth trends while also capturing abrupt changes in the time-varying parameters.
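    For reference, the alpha function commonly used to model postsynaptic currents rises quickly to a peak and then decays slowly, which makes it naturally asymmetric. The sketch below generates such an atom and an over-complete frame of them over onsets and time constants; the unit-peak normalization and the grid choices are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

def alpha_atom(t, t0, tau):
    """Postsynaptic-current-like alpha function: zero before onset t0,
    fast rise to a unit peak at t0 + tau, then slow exponential decay."""
    u = (t - t0) / tau
    up = np.clip(u, 0.0, None)          # evaluate safely before masking
    return np.where(u >= 0, up * np.exp(1.0 - up), 0.0)

def alpha_dictionary(N, onsets=16, taus=(0.01, 0.03, 0.1)):
    """Over-complete frame of alpha atoms with varying onsets and widths,
    used here to represent time-varying model coefficients sparsely."""
    t = np.linspace(0.0, 1.0, N)
    atoms = [alpha_atom(t, t0, tau)
             for tau in taus
             for t0 in np.linspace(0.0, 1.0, onsets)]
    return np.column_stack(atoms)       # N x (len(taus) * onsets)
```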

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions.
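    As a concrete anchor for the tensor train (TT) format discussed here, the sketch below implements the standard TT-SVD decomposition, which factorizes a dense tensor into a chain of third-order cores by repeated truncated SVDs. The truncation rule follows the usual Frobenius-error heuristic; this is a didactic simplification, not the monograph's tooling.

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    """Decompose a dense tensor into tensor-train cores via repeated
    truncated SVDs (the classic TT-SVD scheme). Each core has shape
    (r_prev, dim_k, r_next); the overall relative Frobenius error is
    bounded by eps."""
    T = np.asarray(T)
    dims, d = T.shape, T.ndim
    delta = (eps / max(np.sqrt(d - 1), 1.0)) * np.linalg.norm(T)
    cores, r_prev, C = [], 1, T
    for k in range(d - 1):
        C = C.reshape(r_prev * dims[k], -1)
        U, S, Vt = np.linalg.svd(C, full_matrices=False)
        # discard the smallest singular values whose energy fits in delta
        tail = np.sqrt(np.cumsum(S[::-1] ** 2))[::-1]
        r = max(next((i for i, tv in enumerate(tail) if tv <= delta),
                     len(S)), 1)
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = S[:r, None] * Vt[:r]        # carry the remainder forward
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores
```

    Contracting the cores left to right reproduces the original tensor up to the requested tolerance, while for low-rank data the cores hold far fewer entries than the dense array, which is exactly the super-compression the monograph exploits.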

    Bayesian Modeling and Estimation Techniques for the Analysis of Neuroimaging Data

    Brain function is hallmarked by its adaptivity and robustness, arising from underlying neural activity that admits well-structured representations in the temporal, spatial, or spectral domains. While neuroimaging techniques such as electroencephalography (EEG) and magnetoencephalography (MEG) can record rapid neural dynamics at high temporal resolution, they face several signal processing challenges that hinder their full utilization in capturing these characteristics of neural activity. The objective of this dissertation is to devise statistical modeling and estimation methodologies that account for the dynamic and structured representations of neural activity, and to demonstrate their utility on experimentally recorded data. The first part of this dissertation concerns spectral analysis of neural data. In order to capture the non-stationarities involved in neural oscillations, we integrate multitaper spectral analysis and state-space modeling in a Bayesian estimation setting. We also present a multitaper spectral analysis method tailored for spike trains that captures the non-linearities involved in neuronal spiking. We apply our proposed algorithms to both EEG and spike recordings, which reveal significant gains in spectral resolution and noise reduction. In the second part, we investigate cortical encoding of speech as manifested in MEG responses. These responses are often modeled via a linear filter, referred to as the temporal response function (TRF). While the TRFs estimated from sensor-level MEG data have been widely studied, their cortical origins are not fully understood. We define the new notion of Neuro-Current Response Functions (NCRFs) for simultaneously determining the TRFs and their cortical distribution. We develop an efficient algorithm for NCRF estimation and apply it to MEG data, providing new insights into the cortical dynamics underlying speech processing. Finally, in the third part, we consider the inference of Granger causal (GC) influences in high-dimensional time series models with sparse coupling. We consider a canonical sparse bivariate autoregressive model and define a new statistic for inferring GC influences, which we refer to as the LASSO-based Granger Causal (LGC) statistic. We establish non-asymptotic guarantees for robust identification of GC influences via the LGC statistic. Applications to simulated and real data demonstrate the utility of the LGC statistic in robust GC identification.
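    To ground the first part, the sketch below shows the classical (static) multitaper power spectral density estimate using DPSS tapers from SciPy; the dissertation's contribution is a Bayesian state-space extension of this estimator, which this simplified snippet does not attempt to reproduce.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, fs, NW=4):
    """Classical multitaper PSD: average the periodograms obtained with
    K = 2*NW - 1 orthogonal DPSS (Slepian) tapers. One-sided scaling is
    omitted for brevity; dpss returns unit-energy tapers, so each tapered
    periodogram is |FFT|^2 / fs."""
    N = len(x)
    K = int(2 * NW - 1)
    tapers = dpss(N, NW, Kmax=K)                    # shape (K, N)
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    psd = spectra.mean(axis=0) / fs                 # average over tapers
    freqs = np.fft.rfftfreq(N, d=1.0 / fs)
    return freqs, psd
```

    Averaging over orthogonal tapers trades a controlled amount of spectral resolution for a large reduction in estimator variance, which is the property the Bayesian state-space formulation then carries over to nonstationary data.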