2,015 research outputs found

    Multiscale Bayesian State Space Model for Granger Causality Analysis of Brain Signal

    Modelling time-varying and frequency-specific relationships between two brain signals is becoming an essential methodological tool for answering theoretical questions in experimental neuroscience. In this article, we propose to estimate a frequency-specific Granger-causality statistic that may vary in time in order to evaluate the functional connections between two brain regions during a task. For that purpose we use an adaptive Kalman-filter-type estimator of a linear Gaussian vector autoregressive model with coefficients that evolve over time. The estimation procedure is achieved through variational Bayesian approximation and is extended to multiple trials. This Bayesian State Space (BSS) model provides a dynamical Granger-causality statistic that is quite natural. We propose to extend the BSS model to include the à trous Haar decomposition. This wavelet-based forecasting method is based on a multiscale resolution decomposition of the signal using the redundant à trous wavelet transform and allows us to capture short- and long-range dependencies between signals. Equally importantly, it allows us to derive the desired dynamical and frequency-specific Granger-causality statistic. The application of these models to intracranial local field potential data recorded during a psychological experimental task shows the complex frequency-based cross-talk between the amygdala and the medial orbito-frontal cortex.
    Keywords: à trous Haar wavelets; Multiple trials; Neuroscience data; Nonstationarity; Time-frequency; Variational methods.
    The published version of this article is Cekic, S., Grandjean, D., Renaud, O. (2018). Multiscale Bayesian state-space model for Granger causality analysis of brain signal. Journal of Applied Statistics. https://doi.org/10.1080/02664763.2018.145581
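    The redundant à trous Haar transform at the heart of this model is simple to state: at each scale, the running smooth is averaged with a copy of itself shifted further into the past, and the difference becomes that scale's detail coefficients. A minimal sketch in Python follows; the function name, scale count, and edge handling are illustrative assumptions, not the authors' code.

```python
import numpy as np

def a_trous_haar(x, n_scales=4):
    """Redundant (undecimated) a trous Haar decomposition.

    Returns detail arrays w_1..w_J plus the final smooth c_J, so that
    x == sum(details) + smooth. The causal form used here averages each
    sample with the one 2**(j-1) steps in the past, which suits forecasting.
    """
    c = x.astype(float)
    details = []
    for j in range(1, n_scales + 1):
        shift = 2 ** (j - 1)
        c_prev = c
        # Causal Haar smoothing: average the current value with the one
        # 'shift' steps earlier (edges handled by repeating the first value).
        shifted = np.concatenate([np.full(shift, c[0]), c[:-shift]])
        c = 0.5 * (c_prev + shifted)
        details.append(c_prev - c)  # detail (wavelet) coefficients at scale j
    return details, c

# Example: decompose a noisy signal and verify perfect reconstruction.
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.3 * rng.standard_normal(256)
details, smooth = a_trous_haar(x, n_scales=4)
assert np.allclose(x, sum(details) + smooth)
```

    Because the sum of details plus the final smooth telescopes back to the original signal, each scale can be modelled (and forecast) separately, which is what lets the BSS model attach a Granger-causality statistic to each frequency band.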

    Probabilistic Methodology and Techniques for Artefact Conception and Development

    The purpose of this paper is to present a state of the art of probabilistic methodology and techniques for artefact conception and development. It is the 8th deliverable of the BIBA (Bayesian Inspired Brain and Artefacts) project. We first present the incompleteness problem as the central difficulty that both living creatures and artefacts have to face: how can they perceive, infer, decide and act efficiently with incomplete and uncertain knowledge? We then introduce a generic probabilistic formalism called Bayesian Programming. This formalism is then used to review the main probabilistic methodologies and techniques. The review is organized in three parts: first, the probabilistic models, from Bayesian networks to Kalman filters and from sensor fusion to CAD systems; second, the inference techniques; and finally, the learning, model acquisition and comparison methodologies. We conclude with the perspectives of the BIBA project as they arise from this state of the art.
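    As a concrete illustration of the kind of probabilistic model the review covers, Gaussian sensor fusion shows how Bayes' rule handles incomplete and uncertain knowledge: precisions add, and the fused mean is a precision-weighted average of the readings. The sketch below is not from the deliverable; the function name and example values are hypothetical.

```python
import numpy as np

def fuse_gaussian(measurements, variances, prior_mean=0.0, prior_var=1e6):
    """Fuse independent Gaussian sensor readings of one latent quantity.

    Bayes' rule for Gaussians: the posterior precision is the sum of the
    precisions, and the posterior mean is the precision-weighted average.
    A nearly flat prior (large prior_var) lets the data dominate.
    """
    precisions = np.concatenate([[1.0 / prior_var], 1.0 / np.asarray(variances)])
    means = np.concatenate([[prior_mean], np.asarray(measurements)])
    post_var = 1.0 / precisions.sum()
    post_mean = post_var * (precisions * means).sum()
    return post_mean, post_var

# Two uncertain range sensors: the fused estimate is tighter than either alone.
mean, var = fuse_gaussian(measurements=[2.1, 1.8], variances=[0.25, 0.5])
print(mean, var)  # ~2.0, with variance below the better sensor's 0.25
```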

    A Bayesian perspective on classical control

    The connections between optimal control and Bayesian inference have long been recognised, with the field of stochastic (optimal) control combining these frameworks to solve partially observable control problems. In particular, for the linear case with quadratic cost functions and Gaussian noise, stochastic control has shown remarkable results in different fields, including robotics, reinforcement learning and neuroscience, especially thanks to the established duality of estimation and control processes. Following this idea, we recently introduced a formulation of PID control, one of the most popular methods from classical control, based on active inference, a theory with roots in variational Bayesian methods and applications in the biological and neural sciences. In this work, we highlight the advantages of our previous formulation and introduce new and more general ways to tackle some existing problems in current controller design procedures. In particular, we consider 1) a gradient-based tuning rule for the parameters (or gains) of a PID controller, 2) an implementation of multiple degrees of freedom for independent responses to different types of signals (e.g., two-degree-of-freedom PID), and 3) a novel time-domain formalisation of the performance-robustness trade-off in terms of tunable constraints (i.e., priors in a Bayesian model) of a single cost functional, the variational free energy.
    Comment: 8 pages, Accepted at IJCNN 202
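    The paper derives its tuning rule as a gradient on variational free energy; as a stand-alone illustration of gradient-based gain tuning, the sketch below instead descends a plain quadratic tracking cost with finite-difference gradients on a first-order plant. The plant, cost, learning rate and step sizes are all illustrative assumptions, not the paper's scheme.

```python
import numpy as np

def simulate_pid(gains, setpoint=1.0, dt=0.01, steps=500):
    """Run a PID loop on a first-order plant x' = -x + u and return
    the integrated squared tracking error (a simple quadratic cost)."""
    kp, ki, kd = gains
    x, integral, prev_err, cost = 0.0, 0.0, setpoint, 0.0
    for _ in range(steps):
        err = setpoint - x
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        x += dt * (-x + u)      # Euler step of the plant
        cost += err ** 2 * dt
        prev_err = err
    return cost

def tune_gains(gains, lr=0.2, iters=40, eps=1e-4):
    """Gradient descent on the cost via central finite differences
    (lr and eps are illustrative, not tuned values from the paper)."""
    gains = np.asarray(gains, float)
    for _ in range(iters):
        grad = np.zeros(3)
        for i in range(3):
            e = np.zeros(3)
            e[i] = eps
            grad[i] = (simulate_pid(gains + e) - simulate_pid(gains - e)) / (2 * eps)
        gains -= lr * grad
    return gains

print(tune_gains([0.5, 0.1, 0.0]))  # gains improve tracking of the step setpoint
```

    In the paper's active-inference formulation the descended quantity is the variational free energy rather than this bare tracking cost, which is what lets priors encode the performance-robustness trade-off.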

    Sparse online variational Bayesian regression

    This work considers variational Bayesian inference as an inexpensive and scalable alternative to a fully Bayesian approach in the context of sparsity-promoting priors. In particular, the priors considered arise from scale mixtures of Normal distributions with a generalized inverse Gaussian mixing distribution. This includes the variational Bayesian LASSO as an inexpensive and scalable alternative to the Bayesian LASSO introduced in [65]. It also includes a family of priors which more strongly promote sparsity. For linear models the method requires only the iterative solution of deterministic least squares problems. Furthermore, for $p$ unknown covariates the method can be implemented exactly online with a cost of $O(p^3)$ in computation and $O(p^2)$ in memory per iteration -- in other words, the cost per iteration is independent of $n$, and in principle infinite data can be considered. For large $p$ an approximation is able to achieve promising results for a cost of $O(p)$ per iteration, in both computation and memory. Strategies for hyper-parameter tuning are also considered. The method is implemented for real and simulated data. It is shown that the performance in terms of variable selection and uncertainty quantification of the variational Bayesian LASSO can be comparable to the Bayesian LASSO for problems which are tractable with that method, and for a fraction of the cost. The present method comfortably handles $n = 65536$, $p = 131073$ on a laptop in less than 30 minutes, and $n = 10^5$, $p = 2.1 \times 10^6$ overnight.
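    The claim that for linear models "the method requires only the iterative solution of deterministic least squares problems" can be made concrete with an EM-style batch sketch of the Bayesian LASSO: writing the Laplace prior as a scale mixture of Normals turns each iteration into a weighted ridge solve. This is a simplified stand-in, not the paper's algorithm; the variational scheme in the abstract additionally propagates a posterior covariance and supports the exact online updates quoted above.

```python
import numpy as np

def lasso_em(X, y, lam=1.0, sigma2=1.0, iters=50, delta=1e-8):
    """EM-style sketch of the Bayesian LASSO as iterative least squares.

    With the Laplace prior written as a scale mixture of Normals, the
    E-step gives per-coefficient weights lam / |beta_j|, and the M-step
    is a weighted ridge solve -- a deterministic least squares problem.
    (delta guards against division by zero for coefficients at 0.)
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    XtX, Xty = X.T @ X, X.T @ y
    for _ in range(iters):
        w = lam / np.sqrt(beta ** 2 + delta)                     # E-step
        beta = np.linalg.solve(XtX + sigma2 * np.diag(w), Xty)   # M-step
    return beta

# Sparse ground truth: irrelevant coefficients are shrunk toward zero.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))
beta_true = np.zeros(50)
beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + 0.1 * rng.standard_normal(200)
print(np.round(lasso_em(X, y, lam=5.0), 2)[:6])
```

    Each iteration costs one $p \times p$ solve, matching the quoted $O(p^3)$ per-iteration computation and $O(p^2)$ memory, with no dependence on $n$ once $X^\top X$ and $X^\top y$ are accumulated.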