Parameters estimation for spatio-temporal maximum entropy distributions: application to neural spike trains
We propose a numerical method to learn Maximum Entropy (MaxEnt) distributions
with spatio-temporal constraints from experimental spike trains. This is an
extension of two earlier papers, [10] and [4], which proposed parameter
estimation where only spatial constraints were taken into account. The
extension we propose makes it possible to properly handle memory effects in
spike statistics for large neural networks.
Comment: 34 pages, 33 figures
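As a toy illustration of the moment-matching idea behind MaxEnt fitting (not the authors' method, which handles temporal constraints through more sophisticated machinery), the following sketch fits a MaxEnt distribution over two-bin spike "words" of two neurons by gradient ascent: the gradient of the log-likelihood is the difference between empirical and model feature averages. The synthetic data, features, and step size are all invented for illustration, and the words are treated as i.i.d. samples.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Synthetic spike train for 2 neurons (rows) over T time bins (columns).
T = 5000
spikes = (rng.random((2, T)) < [[0.3], [0.2]]).astype(float)
# Inject a temporal dependency: neuron 1 tends to follow neuron 0.
spikes[1, 1:] = np.maximum(spikes[1, 1:],
                           spikes[0, :-1] * (rng.random(T - 1) < 0.5))

# Features of a spatio-temporal "word" w = (s(t), s(t+1)) of 2 neurons:
# firing indicators at time t, plus the lag-1 pair s0(t) * s1(t+1).
def features(w):
    s_t, s_t1 = w[:2], w[2:]
    return np.array([s_t[0], s_t[1], s_t[0] * s_t1[1]])

# Empirical feature averages over all consecutive two-bin words.
words = np.vstack([spikes[:, :-1], spikes[:, 1:]]).T       # (T-1, 4)
emp = np.mean([features(w) for w in words], axis=0)

# All 16 possible binary words of length 4, and their feature matrix.
states = np.array(list(itertools.product([0, 1], repeat=4)), dtype=float)
F = np.array([features(s) for s in states])                # (16, 3)

# Gradient ascent on the (concave) MaxEnt log-likelihood:
# gradient = empirical moments - model moments.
lam = np.zeros(3)
for _ in range(5000):
    p = np.exp(F @ lam)
    p /= p.sum()
    lam += 0.5 * (emp - F.T @ p)

model = F.T @ p
print(np.round(emp, 3), np.round(model, 3))
```

At convergence the gradient vanishes, so the fitted distribution reproduces the empirical spatio-temporal moments exactly; this is the defining property of the MaxEnt solution.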
A Bayesian Filtering Algorithm for Gaussian Mixture Models
A Bayesian filtering algorithm is developed for a class of state-space
systems that can be modelled via Gaussian mixtures. In general, the exact
solution to this filtering problem involves an exponential growth in the number
of mixture terms and this is handled here by utilising a Gaussian mixture
reduction step after both the time and measurement updates. In addition, a
square-root implementation of the unified algorithm is presented and this
algorithm is profiled on several simulated systems. This includes the state
estimation for two non-linear systems that are strictly outside the class
considered in this paper.
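The mixture reduction step after each update can be sketched as greedy pairwise merging: repeatedly moment-match the cheapest pair of components until the target count is reached. This is a minimal scalar sketch, assuming a Runnalls-style merge cost; the paper's square-root, multivariate implementation is not reproduced here.

```python
import numpy as np

def merge(w1, m1, P1, w2, m2, P2):
    """Moment-matched merge of two weighted scalar Gaussians."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    P = (w1 * (P1 + (m1 - m) ** 2) + w2 * (P2 + (m2 - m) ** 2)) / w
    return w, m, P

def merge_cost(w1, m1, P1, w2, m2, P2):
    """Runnalls-style upper bound on the KL cost of merging a pair."""
    w, _, P = merge(w1, m1, P1, w2, m2, P2)
    return 0.5 * (w * np.log(P) - w1 * np.log(P1) - w2 * np.log(P2))

def reduce_mixture(w, m, P, target):
    """Greedily merge the cheapest pair until `target` components remain."""
    w, m, P = list(w), list(m), list(P)
    while len(w) > target:
        best, bi, bj = np.inf, 0, 1
        for i in range(len(w)):
            for j in range(i + 1, len(w)):
                c = merge_cost(w[i], m[i], P[i], w[j], m[j], P[j])
                if c < best:
                    best, bi, bj = c, i, j
        wm, mm, Pm = merge(w[bi], m[bi], P[bi], w[bj], m[bj], P[bj])
        for k in sorted((bi, bj), reverse=True):
            del w[k]; del m[k]; del P[k]
        w.append(wm); m.append(mm); P.append(Pm)
    return np.array(w), np.array(m), np.array(P)

# Example: reduce a 6-component mixture to 3 components.
w0 = np.array([0.1, 0.2, 0.15, 0.25, 0.1, 0.2])
m0 = np.array([-2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
P0 = np.array([0.5, 0.3, 0.4, 0.2, 0.6, 0.3])
w2, m2, P2 = reduce_mixture(w0, m0, P0, 3)
```

A useful property of moment-matched merging is that the overall mean and variance of the mixture are preserved exactly, even though the shape is approximated.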
KL-optimum designs: theoretical properties and practical computation
In this paper some new properties and computational tools for finding
KL-optimum designs are provided. KL-optimality is a general criterion useful to
select the best experimental conditions to discriminate between statistical
models. A KL-optimum design is obtained from a minimax optimization problem,
which is defined on an infinite-dimensional space. In particular, continuity of
the KL-optimality criterion is proved under mild conditions; as a consequence,
the first-order algorithm converges to the set of KL-optimum designs for a
large class of models. It is also shown that KL-optimum designs are invariant
to any scale-position transformation. Some examples are given and discussed,
together with some practical implications for numerical computation purposes.
Comment: The final publication is available at Springer via
http://dx.doi.org/10.1007/s11222-014-9515-
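The first-order algorithm mentioned above can be sketched on a toy discrimination problem. Under homoscedastic Gaussian errors, KL-optimality reduces to T-optimality, and a Wynn-type vertex-direction step repeatedly moves a small amount of design mass to the point of largest discrepancy between the two rival models. The models, grid, and step schedule below are invented for illustration.

```python
import numpy as np

# Discriminate a hypothetical "true" response eta_t(x) = x / (1 + x)
# from a linear rival eta_1(x, theta) = theta * x on [0, 2].
X = np.linspace(0.0, 2.0, 201)
eta_t = X / (1.0 + X)

def inner_min(w):
    """Closed-form least-squares theta for the linear rival under weights w."""
    return np.sum(w * X * eta_t) / np.sum(w * X * X)

def criterion(w):
    """T-criterion: weighted squared distance at the best rival fit."""
    theta = inner_min(w)
    return np.sum(w * (eta_t - theta * X) ** 2)

# First-order (Wynn-type) algorithm starting from the uniform design.
w = np.full(X.size, 1.0 / X.size)
crit_uniform = criterion(w)
for k in range(3000):
    theta = inner_min(w)
    psi = (eta_t - theta * X) ** 2          # pointwise discrepancy
    alpha = 1.0 / (k + 2)                   # diminishing step sizes
    w *= 1.0 - alpha
    w[np.argmax(psi)] += alpha
crit_final = criterion(w)
```

The weights concentrate on a small number of support points, and the criterion value increases from the uniform starting design toward the optimum, consistent with the convergence result for first-order algorithms discussed in the abstract.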
Optimal projection of observations in a Bayesian setting
Optimal dimensionality reduction methods are proposed for the Bayesian
inference of a Gaussian linear model with additive noise in the presence of
overabundant data. Three different optimal projections of the observations are
proposed based on information theory: the projection that minimizes the
Kullback-Leibler divergence between the posterior distributions of the original
and the projected models, the one that minimizes the expected Kullback-Leibler
divergence between the same distributions, and the one that maximizes the
mutual information between the parameter of interest and the projected
observations. The first two optimization problems are formulated as the
determination of an optimal subspace and therefore the solution is computed
using Riemannian optimization algorithms on the Grassmann manifold. Regarding
the maximization of the mutual information, it is shown that there exists an
optimal subspace that minimizes the entropy of the posterior distribution of
the reduced model; that a basis of the subspace can be computed as the solution
to a generalized eigenvalue problem; that an a priori error estimate on the
mutual information is available for this particular solution; and that the
dimensionality of the subspace required to exactly conserve the mutual
information between the input and the output of the models is less than the
number of parameters to be inferred. Numerical applications to linear and
nonlinear models are used to assess the efficiency of the proposed approaches,
and to highlight their advantages over standard approaches based on the
principal component analysis of the observations.
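The mutual-information route can be sketched for a Gaussian linear model: the generalized eigenvectors of the signal covariance against the noise covariance give a projection whose leading directions carry all the mutual information, since the signal covariance has rank at most the number of parameters. The forward operator and covariances below are invented; this is a minimal sketch, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

n, m = 3, 20                     # 3 parameters, 20 (overabundant) observations
G = rng.standard_normal((m, n))  # forward operator of the linear model
Sx = np.eye(n)                                   # prior covariance of x
Se = 0.5 * np.eye(m) + 0.1 * np.ones((m, m))     # correlated noise covariance

A = G @ Sx @ G.T                 # "signal" covariance in observation space
B = Se

def mutual_info(A, B):
    """I(x; y) = 0.5 * log det(I + B^{-1} A) for jointly Gaussian x, y."""
    return 0.5 * (np.linalg.slogdet(B + A)[1] - np.linalg.slogdet(B)[1])

# Generalized eigenproblem A v = lambda B v, solved by whitening with a
# Cholesky factor of B (NumPy has no generalized eigensolver).
L = np.linalg.cholesky(B)
Linv = np.linalg.inv(L)
vals, U = np.linalg.eigh(Linv @ A @ Linv.T)   # ascending eigenvalues
V = Linv.T @ U[:, -n:]                        # n leading generalized eigenvectors
P = V.T                                       # projection onto an n-dim subspace

full_mi = mutual_info(A, B)
proj_mi = mutual_info(P @ A @ P.T, P @ B @ P.T)
```

Because A = G Sx G^T has rank at most n, only n generalized eigenvalues are nonzero, so the n-dimensional projection conserves the mutual information exactly, matching the dimensionality claim in the abstract.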
Sequential Monte Carlo with kernel embedded mappings: the mapping particle filter
In this work, a novel sequential Monte Carlo filter is introduced which aims at efficient sampling of the state space. Particles are pushed forward from the prediction to the posterior density using a sequence of mappings that minimizes the Kullback-Leibler divergence between the posterior and the sequence of intermediate densities. The sequence of mappings represents a gradient flow based on the principles of local optimal transport. A key ingredient of the mappings is that they are embedded in a reproducing kernel Hilbert space, which allows for a practical and efficient Monte Carlo algorithm. The kernel embedding provides a direct means to calculate the gradient of the Kullback-Leibler divergence, leading to quick convergence with well-known gradient-based stochastic optimization algorithms. The method is evaluated on the chaotic Lorenz-63 system, the Lorenz-96 system, which is a coarse prototype of atmospheric dynamics, and an epidemic model that describes cholera dynamics. No resampling is required in the mapping particle filter, even for long recursive sequences, and the number of effective particles remains close to the total number of particles throughout the sequence. Hence, the mapping particle filter does not suffer from sample impoverishment.
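A kernel-embedded gradient flow of this kind can be sketched in the style of Stein variational gradient descent: each particle moves along a kernel-weighted average of the log-density gradients plus a repulsive kernel-gradient term. Here a fixed one-dimensional Gaussian stands in for the filtering posterior, and the bandwidth, step size, and particle count are invented; this is not the mapping particle filter itself, only the flavour of its update.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in target (posterior) density: 1-D Gaussian N(mu, sigma^2).
mu, sigma = 2.0, 0.5
def grad_log_p(x):
    return -(x - mu) / sigma ** 2

# Particles drawn from a broad "prediction" density.
x = rng.normal(-3.0, 2.0, size=200)

h = 0.5  # RBF kernel bandwidth (assumed fixed here)
def rbf(a, b):
    d = a[:, None] - b[None, :]
    K = np.exp(-d ** 2 / (2 * h ** 2))
    gradK = -d / h ** 2 * K          # derivative w.r.t. the first argument
    return K, gradK

# Kernel-embedded gradient flow: the first term pulls particles toward
# high-density regions, the second repels them from each other, so the
# ensemble spreads to cover the posterior without resampling.
step = 0.05
for _ in range(500):
    K, gradK = rbf(x, x)
    phi = (K * grad_log_p(x)[:, None]).mean(axis=0) + gradK.mean(axis=0)
    x = x + step * phi
```

After the flow, the particle ensemble approximates the target: its mean sits near mu and its spread near sigma, with every particle carrying equal weight, which is why no resampling step is needed.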