Fast maximum likelihood estimation using continuous-time neural point process models
A recent report estimates that the number of simultaneously recorded neurons is growing exponentially. A commonly employed statistical paradigm using discrete-time point process models of neural activity involves the computation of a maximum-likelihood estimate. The time to compute this estimate, per neuron, is proportional to the number of bins in a finely spaced discretization of time. By using continuous-time models of neural activity and optimally efficient Gaussian quadrature, memory requirements and computation times are dramatically decreased in the commonly encountered situation where the number of parameters p is much less than the number of time bins n. In this regime, with q equal to the quadrature order, memory requirements are decreased from O(np) to O(qp), and the number of floating-point operations is decreased from O(np^2) to O(qp^2). Accuracy of the proposed estimates is assessed based upon physiological considerations, error bounds, and mathematical results describing the relation between numerical integration error and the numerical error affecting both parameter estimates and the observed Fisher information. A check is provided for adapting the order of numerical integration. The procedure is verified in simulation and on hippocampal recordings. In 95% of hippocampal recordings, a q of 60 yields numerical error that is negligible with respect to the standard error of the parameter estimates. Statistical inference using the proposed methodology is a fast and convenient alternative to inference performed with a discrete-time point process model of neural activity. It retains the statistical methodology available with discrete-time inference, but is faster, uses less memory, and avoids any error due to discretization.
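The core computation is the point-process log-likelihood, sum_i log lambda(t_i) minus the integral of lambda(t) over [0, T], whose integral term is evaluated with Gaussian quadrature instead of a fine time discretization. A minimal Python sketch, assuming a toy log-linear intensity model (the intensity function and its parameters are illustrative placeholders, not the paper's model):

```python
import numpy as np

def intensity(t, theta):
    # toy log-linear intensity: lambda(t) = exp(theta0 + theta1 * cos(t))
    return np.exp(theta[0] + theta[1] * np.cos(t))

def loglik_quadrature(spike_times, theta, T, q=60):
    # point term: sum of log-intensities at the observed spike times
    point_term = np.sum(np.log(intensity(spike_times, theta)))
    # integral term: int_0^T lambda(t) dt by q-point Gauss-Legendre,
    # mapped from [-1, 1] to [0, T]; cost is O(q) rather than O(n) time bins
    nodes, weights = np.polynomial.legendre.leggauss(q)
    t = 0.5 * T * (nodes + 1.0)
    integral = 0.5 * T * np.sum(weights * intensity(t, theta))
    return point_term - integral
```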
A unified approach to linking experimental, statistical and computational analysis of spike train data
A fundamental issue in neuroscience is how to identify the multiple biophysical mechanisms through which neurons generate observed patterns of spiking activity. In previous work, we proposed a method for linking observed patterns of spiking activity to specific biophysical mechanisms based on a state space modeling framework and a sequential Monte Carlo, or particle filter, estimation algorithm. We have shown, in simulation, that this approach is able to identify a space of simple biophysical models that were consistent with observed spiking data (and included the model that generated the data), but have yet to demonstrate the application of the method to identify realistic currents from real spike train data. Here, we apply the particle filter to spiking data recorded from rat layer V cortical neurons, and correctly identify the dynamics of a slow, intrinsic current. The underlying intrinsic current is successfully identified in four distinct neurons, even though the cells exhibit two distinct classes of spiking activity: regular spiking and bursting. This approach – linking statistical, computational, and experimental neuroscience – provides an effective technique to constrain detailed biophysical models to specific mechanisms consistent with observed spike train data.
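As a concrete illustration of the estimation machinery, here is a minimal bootstrap particle filter over a one-dimensional hidden current, assuming toy Ornstein-Uhlenbeck state dynamics and a Bernoulli spike likelihood per time bin (the dynamics, exponential link, and constants are illustrative assumptions, not the authors' biophysical model):

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(spikes, n_particles=1000, dt=0.001):
    # particle cloud over the hidden intrinsic current x
    x = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in spikes:  # y in {0, 1}: spike indicator per time bin
        # propagate particles through the assumed OU dynamics
        x = x + dt * (-x) + np.sqrt(dt) * rng.normal(0.0, 0.5, n_particles)
        # weight by the Bernoulli spike likelihood with rate exp(x) * dt
        lam = np.clip(np.exp(x) * dt, 1e-12, 1.0 - 1e-12)
        w = lam if y else 1.0 - lam
        w = w / w.sum()
        # multinomial resampling returns an unweighted particle set
        x = x[rng.choice(n_particles, size=n_particles, p=w)]
        estimates.append(x.mean())
    return np.array(estimates)
```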
{\sc CosmoNet}: fast cosmological parameter estimation in non-flat models using neural networks
We present a further development of a method for accelerating the calculation
of CMB power spectra, matter power spectra and likelihood functions for use in
cosmological Bayesian inference. The algorithm, called {\sc CosmoNet}, is based
on training a multilayer perceptron neural network. We compute CMB power
spectra and matter transfer functions over a hypercube in
parameter space encompassing the confidence region of a selection of
CMB (WMAP + high resolution experiments) and large scale structure surveys (2dF
and SDSS). We work in the framework of a generic 7 parameter non-flat
cosmology. Additionally we use {\sc CosmoNet} to compute the WMAP 3-year, 2dF
and SDSS likelihoods over the same region. We find that the average error in
the power spectra is typically well below cosmic variance, and the
experimental likelihoods are calculated to within a fraction of a log unit. We
demonstrate that marginalised posteriors generated with {\sc CosmoNet} spectra
agree to within a few percent of those generated by {\sc CAMB} parallelised
over 4 CPUs, but are obtained 2-3 times faster on just a \emph{single}
processor. Furthermore posteriors generated directly via {\sc CosmoNet}
likelihoods can be obtained in less than 30 minutes on a single processor,
corresponding to a further substantial speed-up. We also demonstrate the
capabilities of {\sc CosmoNet} by extending the CMB power spectra and matter
transfer function training to a more generic 10 parameter cosmological model,
including tensor modes, a varying equation of state of dark energy and massive
neutrinos. {\sc CosmoNet} and interfaces to both {\sc CosmoMC} and {\sc
Bayesys} are publicly available at {\tt
www.mrao.cam.ac.uk/software/cosmonet}.
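The emulation idea itself is plain regression, sketched below with synthetic placeholders standing in for a precomputed grid of CAMB outputs (the grid, targets, and layer size are all illustrative assumptions); once trained, each spectrum costs a single forward pass instead of a Boltzmann-code call:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# placeholders: 7 cosmological parameters -> 50 spectrum values
train_theta = rng.uniform(-1.0, 1.0, size=(5000, 7))
train_spectra = np.tanh(train_theta @ rng.normal(size=(7, 50)))

# a multilayer perceptron regressor; the architecture here is arbitrary
emulator = MLPRegressor(hidden_layer_sizes=(50,), max_iter=2000)
emulator.fit(train_theta, train_spectra)

spectrum = emulator.predict(train_theta[:1])  # one cheap forward pass
```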
The Neural Particle Filter
The robust estimation of dynamically changing features, such as the position
of prey, is one of the hallmarks of perception. On an abstract, algorithmic
level, nonlinear Bayesian filtering, i.e. the estimation of temporally changing
signals based on the history of observations, provides a mathematical framework
for dynamic perception in real time. Since the general, nonlinear filtering
problem is analytically intractable, particle filters are considered among the
most powerful approaches to approximating the solution numerically. Yet, these
algorithms prevalently rely on importance weights, and thus it remains an
unresolved question how the brain could implement such an inference strategy
with a neuronal population. Here, we propose the Neural Particle Filter (NPF),
a weight-less particle filter that can be interpreted as the neuronal dynamics
of a recurrently connected neural network that receives feed-forward input from
sensory neurons and represents the posterior probability distribution in terms
of samples. Specifically, this algorithm bridges the gap between the
computational task of online state estimation and an implementation that allows
networks of neurons in the brain to perform nonlinear Bayesian filtering. The
model captures not only the properties of temporal and multisensory integration
according to Bayesian statistics, but also allows online learning with a
maximum likelihood approach. With an example from multisensory integration, we
demonstrate that the numerical performance of the model is adequate to account
for both filtering and identification problems. Due to the weightless approach,
our algorithm alleviates the 'curse of dimensionality' and thus outperforms
conventional, weighted particle filters in higher dimensions for a limited
number of particles.
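A one-dimensional sketch of the weight-less dynamics, assuming Ornstein-Uhlenbeck prior dynamics, an identity observation model, and a fixed feedback gain (all illustrative simplifications; in the full model the gain itself is adapted): each particle follows the prior drift plus an observation-driven correction, and no importance weights are ever computed.

```python
import numpy as np

rng = np.random.default_rng(2)

def neural_particle_filter(y_obs, n_particles=100, dt=0.01, gain=1.0):
    x = rng.normal(0.0, 1.0, n_particles)  # particle states
    means = []
    for y in y_obs:
        drift = -x            # assumed OU prior dynamics f(x) = -x
        innovation = y - x    # feedback term; observation model g(x) = x
        x = x + dt * (drift + gain * innovation) \
            + np.sqrt(dt) * rng.normal(0.0, 1.0, n_particles)
        means.append(x.mean())  # posterior mean estimated from samples
    return np.array(means)
```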
Particle-filtering approaches for nonlinear Bayesian decoding of neuronal spike trains
The number of neurons that can be simultaneously recorded doubles every seven
years. This ever-increasing number of recorded neurons opens up the possibility
to address new questions and extract higher dimensional stimuli from the
recordings. Modeling neural spike trains as point processes, this task of
extracting dynamical signals from spike trains is commonly set in the context
of nonlinear filtering theory. Particle filter methods relying on importance
weights are generic algorithms that solve the filtering task numerically, but
exhibit a serious drawback when the problem dimensionality is high: they are
known to suffer from the 'curse of dimensionality' (COD), i.e. the number of
particles required for a certain performance scales exponentially with the
observable dimensions. Here, we first briefly review the theory on filtering
with point process observations in continuous time. Based on this theory, we
investigate both analytically and numerically the reason for the COD of
weighted particle filtering approaches: Similarly to particle filtering with
continuous-time observations, the COD with point-process observations is due to
the decay of effective number of particles, an effect that is stronger when the
number of observable dimensions increases. Given the success of unweighted
particle filtering approaches in overcoming the COD for continuous-time
observations, we introduce an unweighted particle filter for point-process
observations, the spike-based Neural Particle Filter (sNPF), and show that it
exhibits a similar favorable scaling as the number of dimensions grows.
Further, we derive rules for the parameters of the sNPF from a maximum
likelihood learning approach. We finally employ a simple decoding task to
illustrate the capabilities of the sNPF and to highlight one possible future
application of our inference and learning algorithm.
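The weight-degeneracy effect behind the COD can be reproduced in a few lines: propose particles from one Gaussian, weight them against a shifted Gaussian target, and watch the effective sample size 1/sum(w_i^2) collapse as the dimension grows (the distributions and the shift are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)

def effective_sample_size(dim, n_particles=1000):
    # particles from N(0, I), importance-weighted toward N(1, I)
    x = rng.normal(0.0, 1.0, size=(n_particles, dim))
    log_w = -0.5 * np.sum((x - 1.0) ** 2 - x ** 2, axis=1)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)  # decays rapidly as dim grows

for d in (1, 5, 10, 20):
    print(d, effective_sample_size(d))
```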
BAMBI: blind accelerated multimodal Bayesian inference
In this paper we present an algorithm for rapid Bayesian analysis that
combines the benefits of nested sampling and artificial neural networks. The
blind accelerated multimodal Bayesian inference (BAMBI) algorithm implements
the MultiNest package for nested sampling as well as the training of an
artificial neural network (NN) to learn the likelihood function. In the case of
computationally expensive likelihoods, this allows the substitution of a much
more rapid approximation in order to increase significantly the speed of the
analysis. We begin by demonstrating, with a few toy examples, the ability of a
NN to learn complicated likelihood surfaces. BAMBI's ability to decrease
running time for Bayesian inference is then demonstrated in the context of
estimating cosmological parameters from Wilkinson Microwave Anisotropy Probe
and other observations. We show that valuable speed increases are achieved in
addition to obtaining NNs trained on the likelihood functions for the different
model and data combinations. These NNs can then be used for an even faster
follow-up analysis using the same likelihood and different priors. This is a
fully general algorithm that can be applied, without any pre-processing, to
other problems with computationally expensive likelihood functions.
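The control flow can be sketched as below, with the stand-in likelihood, the retraining interval, and the error tolerance all illustrative assumptions (BAMBI itself couples the network training to MultiNest's nested-sampling loop rather than to a simple call cache):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_loglike(theta):
    # stand-in for a costly likelihood evaluation
    return -0.5 * float(np.sum(theta ** 2))

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
cache_X, cache_y, use_net = [], [], False

def loglike(theta):
    global use_net
    if use_net:                     # fast path: one forward pass
        return float(net.predict(theta.reshape(1, -1))[0])
    val = expensive_loglike(theta)
    cache_X.append(theta)
    cache_y.append(val)
    if len(cache_y) % 500 == 0:     # periodically retrain on the cache
        X, y = np.array(cache_X), np.array(cache_y)
        net.fit(X, y)
        # switch over only once the network is accurate enough
        use_net = np.abs(net.predict(X) - y).max() < 0.1
    return val
```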
Hierarchical Implicit Models and Likelihood-Free Variational Inference
Implicit probabilistic models are a flexible class of models defined by a
simulation process for data. They form the basis for theories which encompass
our understanding of the physical world. Despite this fundamental nature, the
use of implicit models remains limited due to challenges in specifying complex
latent structure in them, and in performing inferences in such models with
large data sets. In this paper, we first introduce hierarchical implicit models
(HIMs). HIMs combine the idea of implicit densities with hierarchical Bayesian
modeling, thereby defining models via simulators of data with rich hidden
structure. Next, we develop likelihood-free variational inference (LFVI), a
scalable variational inference algorithm for HIMs. Key to LFVI is specifying a
variational family that is also implicit. This matches the model's flexibility
and allows for accurate approximation of the posterior. We demonstrate diverse
applications: a large-scale physical simulator for predator-prey populations in
ecology; a Bayesian generative adversarial network for discrete data; and a
deep implicit model for text generation.
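One building block behind likelihood-free inference of this kind is density-ratio estimation with a classifier: for balanced classes, the logit of an optimal classifier between samples of p and q approximates log p(x) - log q(x), so intractable densities never need to be evaluated. A minimal sketch, with two Gaussians standing in for implicit samplers (both distributions are illustrative choices):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
a = rng.normal(0.0, 1.0, size=(2000, 1))  # samples from implicit model p
b = rng.normal(0.5, 1.0, size=(2000, 1))  # samples from implicit model q

X = np.vstack([a, b])
y = np.concatenate([np.ones(2000), np.zeros(2000)])
clf = LogisticRegression().fit(X, y)

# the classifier logit estimates log p(x) - log q(x) at any query point
log_ratio = clf.decision_function(np.array([[0.0]]))
```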