
    A gradient-like variational Bayesian algorithm

    In this paper we present a new algorithm for solving a variational Bayesian problem that can be cast as a functional optimization problem. The main contribution of this paper is to transpose a classical iterative optimization algorithm into the metric space of probability densities involved in the Bayesian methodology. Another important part is the application of our algorithm to a class of linear inverse problems in which the estimated quantities are assumed to be sparse. Finally, we compare the performance of our method with classical approaches on a tomographic problem. Preliminary results on a low-dimensional example show that our new algorithm is faster than the classical approaches for the same reconstruction quality.
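    The abstract does not give the update equations, but the flavour of iterative variational updates for a sparse linear inverse problem can be sketched with a generic coordinate-wise mean-field iteration for a Gaussian model. This is not the paper's algorithm; the matrix `A`, the variances `sigma2` and `tau2`, and the sweep count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 10
A = rng.normal(size=(n, m))
x_true = np.zeros(m)
x_true[:3] = [2.0, -1.5, 1.0]          # sparse ground truth
y = A @ x_true + 0.1 * rng.normal(size=n)

sigma2 = 0.01                          # noise variance (assumed known)
tau2 = 1.0                             # prior variance on each coefficient

# Mean-field VB: q(x) = prod_i q(x_i), each factor Gaussian.
# The per-coordinate posterior precisions are fixed; only the means iterate.
prec = (A ** 2).sum(axis=0) / sigma2 + 1.0 / tau2
mu = np.zeros(m)
for sweep in range(200):
    for i in range(m):
        r = y - A @ mu + A[:, i] * mu[i]     # residual excluding coordinate i
        mu[i] = (A[:, i] @ r / sigma2) / prec[i]
```

    Because this toy model is fully Gaussian, the coordinate updates coincide with Gauss-Seidel on the posterior normal equations, so the means converge to the exact posterior mean; the paper's contribution concerns the general, non-conjugate case.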

    Estimating anisotropy directly via neural timeseries

    An isotropic dynamical system is one that looks the same in every direction, i.e., if we imagine standing somewhere within an isotropic system, we would not be able to differentiate between different lines of sight. Conversely, anisotropy is a measure of the extent to which a system deviates from perfect isotropy, with larger values indicating greater discrepancies between the structure of the system along its axes. Here, we derive the form of a generalised scalable (mechanically similar) discretized field theoretic Lagrangian that allows for levels of anisotropy to be directly estimated via timeseries of arbitrary dimensionality. We generate synthetic data for both isotropic and anisotropic systems and, by using Bayesian model inversion and reduction, show that we can discriminate between the two datasets - thereby demonstrating proof of principle. We then apply this methodology to murine calcium imaging data collected in rest and task states, showing that anisotropy can be estimated directly from different brain states and cortical regions in an empirical in vivo biological setting. We hope that this theoretical foundation, together with the methodology and publicly available MATLAB code, will provide an accessible way for researchers to obtain new insight into the structural organization of neural systems in terms of how scalable neural regions grow - both ontogenetically during the development of an individual organism, as well as phylogenetically across species. Keywords: Anisotropy; DCM; Data fitting; Field theory; Lagrangian; Neuroimaging.
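    The discrimination step, choosing between isotropic and anisotropic explanations of the same data by model comparison, can be illustrated with a toy example. The sketch below uses BIC on synthetic zero-mean 2-D data in place of the paper's variational model inversion and reduction; all names, variances, and sample sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic zero-mean 2-D "timeseries": equal vs unequal axis variances
iso = rng.normal(scale=1.0, size=(500, 2))
aniso = rng.normal(scale=[1.0, 3.0], size=(500, 2))

def bic_isotropic(X):
    """One shared variance for every axis (1 free parameter)."""
    n, d = X.shape
    s2 = (X ** 2).mean()
    ll = -0.5 * n * d * (np.log(2 * np.pi * s2) + 1)
    return -2 * ll + 1 * np.log(n * d)

def bic_anisotropic(X):
    """One variance per axis (d free parameters)."""
    n, d = X.shape
    s2 = (X ** 2).mean(axis=0)
    ll = -0.5 * n * np.sum(np.log(2 * np.pi * s2) + 1)
    return -2 * ll + d * np.log(n * d)
```

    Lower BIC wins: the extra per-axis parameter is penalized away on the isotropic data but pays for itself on the anisotropic data, mirroring (crudely) what Bayesian model reduction does with free energy.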

    Variational Bayes Phase Tracking for Correlated Dual-Frequency Measurements with Slow Dynamics

    We consider the problem of estimating the absolute phase of a noisy signal when the latter consists of correlated dual-frequency measurements. This scenario arises in many application areas, such as global navigation satellite systems (GNSS). In this paper, we assume a slowly varying phase and accordingly propose a Bayesian filtering technique that exploits the frequency diversity. More specifically, the method results from a variational Bayes approximation and belongs to the class of nonlinear filters. Numerical simulations are performed to assess the performance of the tracking technique, especially in terms of mean square error and cycle-slip rate. Comparison with a more conventional approach, namely a Gaussian sum estimator, shows substantial improvements when the signal-to-noise ratio and/or the correlation of the measurements are low.
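    The abstract does not specify the variational Bayes filter, so the sketch below substitutes a standard extended Kalman filter to show the shared idea: a scalar random-walk phase tracked jointly from two frequency-related measurement pairs. The frequency ratio `r`, noise levels, and trajectory are illustrative assumptions, and the 2-pi ambiguity (cycle slips) is ignored:

```python
import numpy as np

rng = np.random.default_rng(2)
r = 0.8            # frequency ratio f2/f1 (illustrative)
q = 1e-4           # random-walk variance of the phase (slow dynamics)
R = 0.05           # measurement noise variance per component
T = 400

# Slowly varying true phase, observed through cos/sin at both frequencies
phi = 0.5 + np.cumsum(np.sqrt(q) * rng.normal(size=T))
z = np.stack([np.cos(phi), np.sin(phi),
              np.cos(r * phi), np.sin(r * phi)], axis=1)
z += np.sqrt(R) * rng.normal(size=z.shape)

est, P = 0.0, 1.0          # scalar phase estimate and its variance
track = []
for k in range(T):
    P += q                                          # predict (random walk)
    h = np.array([np.cos(est), np.sin(est),
                  np.cos(r * est), np.sin(r * est)])
    H = np.array([-np.sin(est), np.cos(est),
                  -r * np.sin(r * est), r * np.cos(r * est)])
    S = P * np.outer(H, H) + R * np.eye(4)          # innovation covariance
    K = P * H @ np.linalg.inv(S)                    # Kalman gain, shape (4,)
    est += K @ (z[k] - h)                           # measurement update
    P *= 1.0 - K @ H
    track.append(est)
track = np.array(track)
```

    Fusing both frequency components in one update is what exploits the frequency diversity; the paper replaces this first-order linearization with a variational Bayes approximation that handles the phase nonlinearity more gracefully at low SNR.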

    Tracking slow modulations in synaptic gain using dynamic causal modelling: validation in epilepsy

    In this work we propose a proof of principle that dynamic causal modelling can identify plausible mechanisms at the synaptic level underlying brain state changes over a timescale of seconds. As a benchmark example for validation we used intracranial electroencephalographic signals in a human subject. These data were used to infer the (effective connectivity) architecture of synaptic connections among neural populations assumed to generate seizure activity. Dynamic causal modelling allowed us to quantify empirical changes in spectral activity in terms of a trajectory in parameter space - identifying key synaptic parameters or connections that cause observed signals. Using recordings from three seizures in one patient, we considered a network of two sources (within and just outside the putative ictal zone). Bayesian model selection was used to identify the intrinsic (within-source) and extrinsic (between-source) connectivity. Having established the underlying architecture, we were able to track the evolution of key connectivity parameters (e.g., inhibitory connections to superficial pyramidal cells) and test specific hypotheses about the synaptic mechanisms involved in ictogenesis. Our key finding was that intrinsic synaptic changes were sufficient to explain seizure onset, where these changes showed dissociable time courses over several seconds. Crucially, these changes spoke to an increase in the sensitivity of principal cells to intrinsic inhibitory afferents and a transient loss of excitatory-inhibitory balance.

    Multi-level Gated Bayesian Recurrent Neural Network for State Estimation

    The optimality of Bayesian filtering relies on the completeness of prior models, while deep learning holds a distinct advantage in learning models from offline data. Nevertheless, the current fusion of these two methodologies remains largely ad hoc, lacking a theoretical foundation. This paper presents a novel solution, namely a multi-level gated Bayesian recurrent neural network specifically designed for state estimation under model mismatch. First, we transform the non-Markov state-space model into an equivalent first-order Markov model with memory. This generalized transformation overcomes the limitations of the first-order Markov property and enables recursive filtering. Second, by deriving a data-assisted joint state-memory-mismatch Bayesian filter, we design a Bayesian multi-level gated framework that includes a memory update gate for capturing the temporal regularities in state evolution, a state prediction gate with evolution-mismatch compensation, and a state update gate with observation-mismatch compensation. A Gaussian approximation of the filtering process within the gated framework is derived with computational efficiency in mind. Finally, the corresponding internal neural network structures and end-to-end training methods are designed. The Bayesian filtering theory enhances the interpretability of the proposed gated network, enabling the effective integration of offline data and prior models within functionally explicit gated units. In comprehensive experiments, including simulations and real-world datasets, the proposed gated network demonstrates superior estimation performance compared to benchmark filters and state-of-the-art deep-learning filtering methods.
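    As a structural illustration only, a single filtering step organized into the three gates (memory update, prediction with evolution-mismatch compensation, update with observation-mismatch compensation) might look as follows. In the paper the gates are neural networks trained end to end; here all gains, names, and compensation forms are hand-wired hypothetical placeholders:

```python
import numpy as np

class GatedBayesFilterCell:
    """One filtering step organized as three gates. The scalar gate gains
    stand in for the learned gating networks of the paper; this is a
    structural sketch, not the published architecture."""

    def __init__(self, F, H, g_mem=0.9, g_pred=0.1, g_upd=0.5):
        self.F, self.H = F, H                     # nominal evolution / observation models
        self.g_mem, self.g_pred, self.g_upd = g_mem, g_pred, g_upd

    def step(self, x, mem, y):
        # Memory update gate: running summary of past states
        # (the "memory" that restores a first-order Markov recursion)
        mem = self.g_mem * mem + (1.0 - self.g_mem) * x
        # Prediction gate: nominal evolution plus a memory-based
        # evolution-mismatch compensation term
        x_pred = self.F @ x + self.g_pred * (mem - x)
        # Update gate: gated correction from the observation residual
        # (observation-mismatch compensation)
        x = x_pred + self.g_upd * (self.H.T @ (y - self.H @ x_pred))
        return x, mem
```

    With gains in (0, 1) and stable nominal models the recursion is a contraction toward the observations, which is the qualitative behaviour the learned gates are meant to refine under model mismatch.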