
    Estimating Time-Varying Effective Connectivity in High-Dimensional fMRI Data Using Regime-Switching Factor Models

    Recent studies of dynamic brain connectivity rely on sliding-window analysis or time-varying coefficient models, which cannot capture both smooth and abrupt changes simultaneously. Emerging evidence suggests state-related changes in brain connectivity, where the dependence structure alternates between a finite number of latent states or regimes. Another challenge is inference of full-brain networks with a large number of nodes. We employ a Markov-switching dynamic factor model in which the state-driven, time-varying connectivity regimes of high-dimensional fMRI data are characterized by lower-dimensional common latent factors following a regime-switching process. This enables reliable, data-adaptive estimation of the change-points between connectivity regimes and of the massive dependencies associated with each regime. We use a switching vector autoregressive (VAR) model to quantify dynamic effective connectivity, and propose a three-step estimation procedure: (1) extract the factors using principal component analysis (PCA); (2) identify the dynamic connectivity states using factor-based switching VAR models in a state-space formulation, estimated with the Kalman filter and the expectation-maximization (EM) algorithm; and (3) construct the high-dimensional connectivity metrics for each state from the subspace estimates. Simulation results show that the proposed estimator outperforms K-means clustering of time-windowed coefficients, providing more accurate estimation of regime dynamics and connectivity metrics in high-dimensional settings. Applied to resting-state fMRI data, the method identifies dynamic changes in brain states during rest and reveals distinct directed connectivity patterns and modular organization in resting-state networks across states.
    Comments: 21 pages
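    The three-step procedure above can be sketched in simplified form. The sketch below implements step 1 (PCA factor extraction via SVD) and steps 2–3 in a reduced version: the per-regime VAR fits assume the regime labels are already known, whereas the paper estimates them jointly with a Kalman-filter/EM state-space algorithm. All function names and the lag-1 projection back to node space are illustrative assumptions, not the authors' code.

```python
import numpy as np

def extract_factors(X, r):
    """Step 1: PCA factor extraction. X is a (T, N) high-dimensional series;
    returns (T, r) factors and (N, r) loadings."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    loadings = Vt[:r].T          # top-r principal directions, (N, r)
    factors = Xc @ loadings      # projected low-dimensional factors, (T, r)
    return factors, loadings

def fit_var_per_regime(F, states, n_states, p=1):
    """Step 2 (simplified): fit a lag-p VAR to the factors within each regime.
    Regime labels `states` are assumed given here; the paper estimates them
    with a switching state-space model via Kalman filtering and EM."""
    A = {}
    for s in range(n_states):
        # time points t >= p assigned to regime s
        idx = np.where(states[p:] == s)[0] + p
        Y = F[idx]                                            # responses y_t
        Z = np.hstack([F[idx - k] for k in range(1, p + 1)])  # lags y_{t-1..t-p}
        A[s] = np.linalg.lstsq(Z, Y, rcond=None)[0].T         # (r, r*p) coeffs
    return A

def regime_connectivity(A_s, loadings):
    """Step 3: project the factor-level VAR coefficients (lag 1 here) back to
    the N-dimensional node space as an effective-connectivity matrix."""
    return loadings @ A_s @ loadings.T   # (N, N)
```

    A usage pattern would be `F, L = extract_factors(X, r)`, then fitting the regime VARs on `F` and mapping each regime's coefficients through `regime_connectivity` to obtain one directed connectivity matrix per state.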

    A Novel Predictive-Coding-Inspired Variational RNN Model for Online Prediction and Recognition

    This study introduces PV-RNN, a novel variational RNN inspired by predictive-coding ideas. The model learns to extract the probabilistic structure hidden in fluctuating temporal patterns by dynamically changing the stochasticity of its latent states. Its architecture addresses two major concerns of variational Bayes RNNs: how latent variables can learn meaningful representations, and how the inference model can transfer future observations to the latent variables. PV-RNN does both by introducing adaptive vectors mirroring the training data, whose values can then be adapted differently during evaluation. Moreover, prediction errors during backpropagation, rather than external inputs during the forward computation, are used to convey information about the external data to the network. For testing, we introduce error regression for predicting unseen sequences, inspired by predictive coding, which leverages these mechanisms. The model introduces a weighting parameter, the meta-prior, to balance the optimization pressure placed on the two terms of a lower bound on the marginal likelihood of the sequential data. We test the model on two datasets with probabilistic structure and show that with high values of the meta-prior the network develops deterministic chaos through which the data's randomness is imitated, whereas for low values the model behaves as a random process. The network performs best at intermediate values, capturing the latent probabilistic structure with good generalization. Analyzing the meta-prior's impact on the network allows us to study precisely the theoretical value and practical benefits of incorporating stochastic dynamics in the model. We demonstrate better prediction performance on a robot imitation task using error regression compared with a standard variational Bayes model lacking such a procedure.
    Comments: accepted in Neural Computation
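    The meta-prior weighting of the two lower-bound terms can be illustrated with a minimal sketch: a closed-form KL divergence between diagonal-Gaussian posterior and prior, combined with a reconstruction loss under a scalar weight `w`. The function names, the sign convention (minimizing the negative bound), and the exact placement of `w` on the KL term are assumptions for illustration; the paper's own formulation may weight the terms differently.

```python
import numpy as np

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL(q || p) between diagonal Gaussians, summed over latent dimensions."""
    return 0.5 * np.sum(
        logvar_p - logvar_q
        + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
        - 1.0
    )

def pvrnn_objective(recon_nll, mu_q, logvar_q, mu_p, logvar_p, w):
    """Negative lower bound with a meta-prior weight w on the KL term
    (sketch). Intuition from the abstract: a large w regularizes the latents
    strongly toward the prior (near-deterministic dynamics), while a small w
    lets them behave more like a random process; intermediate w balances
    both and generalizes best."""
    return recon_nll + w * gaussian_kl(mu_q, logvar_q, mu_p, logvar_p)
```

    In a training loop one would sweep `w` and compare how well the learned latent dynamics reproduce the data's probabilistic structure at each setting.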