Auxiliary-variable Exact Hamiltonian Monte Carlo Samplers for Binary Distributions
We present a new approach to sample from generic binary distributions, based
on an exact Hamiltonian Monte Carlo algorithm applied to a piecewise continuous
augmentation of the binary distribution of interest. An extension of this idea
to distributions over mixtures of binary and possibly-truncated Gaussian or
exponential variables allows us to sample from posteriors of linear and probit
regression models with spike-and-slab priors and truncated parameters. We
illustrate the advantages of these algorithms in several examples in which they
outperform the Metropolis or Gibbs samplers.

Comment: 11 pages, 4 figures. Proceedings of the 27th Annual Conference on Neural Information Processing Systems (NIPS), 2013
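The core augmentation idea can be checked numerically: a binary variable s ∈ {-1, +1} is replaced by a continuous y with s = sign(y) and piecewise density p(y) ∝ p(sign(y))·N(y; 0, 1), so that marginalizing over each half-line recovers the original binary probabilities. The sketch below illustrates this with a plain Metropolis random walk as a stand-in for the paper's exact HMC sampler (the target probability 0.7 is a hypothetical example value, not from the paper):

```python
import math
import random

random.seed(0)

# Hypothetical target binary distribution over s in {-1, +1}.
p_plus = 0.7
p_minus = 1.0 - p_plus

def augmented_logdensity(y):
    """Unnormalized log density of the continuous augmentation:
    p(y) proportional to p(sign(y)) * N(y; 0, 1), piecewise at y = 0."""
    prob = p_plus if y >= 0 else p_minus
    return math.log(prob) - 0.5 * y * y

def sample_y(n_steps=200_000, step=1.0):
    """Simple Metropolis walk on y (a stand-in for the exact HMC
    sampler in the paper, used only to check the augmentation)."""
    y, samples = 0.1, []
    for _ in range(n_steps):
        prop = y + random.gauss(0.0, step)
        if math.log(random.random()) < augmented_logdensity(prop) - augmented_logdensity(y):
            y = prop
        samples.append(y)
    return samples

ys = sample_y()
frac_plus = sum(1 for y in ys if y >= 0) / len(ys)
print(f"estimated p(s=+1) = {frac_plus:.3f}  (target {p_plus})")
```

The fraction of samples with y ≥ 0 should approach p(s = +1), confirming that the sign of the augmented variable carries the binary distribution.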
A Bayesian approach for inferring neuronal connectivity from calcium fluorescent imaging data
Deducing the structure of neural circuits is one of the central problems of
modern neuroscience. Recently-introduced calcium fluorescent imaging methods
permit experimentalists to observe network activity in large populations of
neurons, but these techniques provide only indirect observations of neural
spike trains, with limited time resolution and signal quality. In this work we
present a Bayesian approach for inferring neural circuitry given this type of
imaging data. We model the network activity in terms of a collection of coupled
hidden Markov chains, with each chain corresponding to a single neuron in the
network and the coupling between the chains reflecting the network's
connectivity matrix. We derive a Monte Carlo Expectation--Maximization
algorithm for fitting the model parameters; to obtain the sufficient statistics
in a computationally-efficient manner, we introduce a specialized
blockwise-Gibbs algorithm for sampling from the joint activity of all observed
neurons given the observed fluorescence data. We perform large-scale
simulations of randomly connected neuronal networks with biophysically
realistic parameters and find that the proposed methods can accurately infer
the connectivity in these networks given reasonable experimental and
computational constraints. In addition, the estimation accuracy may be improved
significantly by incorporating prior knowledge about the sparseness of
connectivity in the network, via standard L1 penalization methods.

Comment: Published at http://dx.doi.org/10.1214/09-AOAS303 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
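The generative side of the coupled-hidden-Markov-chain model can be sketched with a toy simulation: binary spike indicators per neuron per time bin, where each neuron's spiking probability is a sigmoid of a baseline plus connectivity-weighted input from the previous bin. All parameter values below are illustrative, and the calcium observation layer is omitted:

```python
import math
import random

random.seed(1)

# Toy network: N neurons, T time bins, binary spikes.
# W is the connectivity matrix coupling the chains (hypothetical
# random weights; no self-coupling in this sketch).
N, T = 5, 1000
baseline = -2.0
W = [[random.gauss(0.0, 1.0) if i != j else 0.0 for j in range(N)]
     for i in range(N)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Each neuron spikes with probability sigmoid(baseline + weighted
# input from the previous time bin's spikes).
spikes = [[0] * N]
for t in range(1, T):
    prev = spikes[-1]
    row = []
    for i in range(N):
        drive = baseline + sum(W[i][j] * prev[j] for j in range(N))
        row.append(1 if random.random() < sigmoid(drive) else 0)
    spikes.append(row)

rates = [sum(s[i] for s in spikes) / T for i in range(N)]
print("per-neuron firing rates:", [round(r, 3) for r in rates])
```

In the full model, these spike trains are hidden and only a noisy fluorescence signal is observed; inference then inverts this generative process to recover W.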
Bayesian spike inference from calcium imaging data
We present efficient Bayesian methods for extracting neuronal spiking
information from calcium imaging data. The goal of our methods is to sample
from the posterior distribution of spike trains and model parameters (baseline
concentration, spike amplitude, etc.) given noisy calcium imaging data. We
present discrete time algorithms where we sample the existence of a spike at
each time bin using Gibbs methods, as well as continuous time algorithms where
we sample over the number of spikes and their locations at an arbitrary
resolution using Metropolis-Hastings methods for point processes. We provide
Rao-Blackwellized extensions that (i) marginalize over several model parameters
and (ii) provide smooth estimates of the marginal spike posterior distribution
in continuous time. Our methods serve as complements to standard point
estimates and allow for quantification of uncertainty in estimating the
underlying spike train and model parameters.
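The discrete-time Gibbs scheme can be illustrated on a toy model: calcium follows an AR(1) decay with jumps at spikes, c_t = γ·c_{t-1} + A·s_t, observed as y_t = c_t + Gaussian noise, and each binary spike indicator s_t is resampled in turn from its conditional given the data and the rest of the train. This is a minimal sketch with illustrative parameter values, not the paper's full model (which also samples the model parameters):

```python
import math
import random

random.seed(2)

# Illustrative parameters: decay gamma, spike amplitude A,
# observation noise sigma, prior spike probability per bin.
T, gamma, A, sigma, p_spike = 60, 0.9, 1.0, 0.2, 0.1

def calcium(s):
    """Deterministic calcium trace given a binary spike train."""
    c, out = 0.0, []
    for st in s:
        c = gamma * c + A * st
        out.append(c)
    return out

def log_joint(s, y):
    """Log of (likelihood of y given spikes) x (Bernoulli spike prior)."""
    c = calcium(s)
    ll = sum(-0.5 * ((yt - ct) / sigma) ** 2 for yt, ct in zip(y, c))
    lp = sum(math.log(p_spike) if st else math.log(1 - p_spike) for st in s)
    return ll + lp

# Simulate ground-truth spikes and a noisy fluorescence trace.
true_s = [1 if random.random() < p_spike else 0 for _ in range(T)]
y = [ct + random.gauss(0.0, sigma) for ct in calcium(true_s)]

# Gibbs sampling: resample each bin from its conditional
# p(s_t = 1 | rest, y) = 1 / (1 + exp(l0 - l1)).
s = [0] * T
for sweep in range(200):
    for t in range(T):
        s[t] = 1
        l1 = log_joint(s, y)
        s[t] = 0
        l0 = log_joint(s, y)
        s[t] = 1 if random.random() < 1.0 / (1.0 + math.exp(l0 - l1)) else 0

print("true :", "".join(map(str, true_s)))
print("gibbs:", "".join(map(str, s)))
```

Recomputing the full trace per flip is O(T) and fine for this toy size; practical implementations update the likelihood incrementally.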
Reparameterizing the Birkhoff Polytope for Variational Permutation Inference
Many matching, tracking, sorting, and ranking problems require probabilistic
reasoning about possible permutations, a set that grows factorially with
dimension. Combinatorial optimization algorithms may enable efficient point
estimation, but fully Bayesian inference poses a severe challenge in this
high-dimensional, discrete space. To surmount this challenge, we start with the
usual step of relaxing a discrete set (here, of permutation matrices) to its
convex hull, which here is the Birkhoff polytope: the set of all
doubly-stochastic matrices. We then introduce two novel transformations: first,
an invertible and differentiable stick-breaking procedure that maps
unconstrained space to the Birkhoff polytope; second, a map that rounds points
toward the vertices of the polytope. Both transformations include a temperature
parameter that, in the limit, concentrates the densities on permutation
matrices. We then exploit these transformations and reparameterization
gradients to introduce variational inference over permutation matrices, and we
demonstrate its utility in a series of experiments.