Time-Varying Optimization for Spike Inference from Multi-Trial Calcium Recordings
Optical imaging of genetically encoded calcium indicators is a powerful tool
to record the activity of a large number of neurons simultaneously over a long
period of time from freely behaving animals. However, determining the exact
time at which a neuron spikes and estimating the underlying firing rate from
calcium fluorescence data remain challenging, especially for calcium imaging
data obtained from a longitudinal study. We propose a multi-trial time-varying
penalized method to jointly detect spikes and estimate firing rates by
robustly integrating evolving neural dynamics across trials. Our simulation
study shows that the proposed method performs well in both spike detection and
firing rate estimation. We demonstrate the usefulness of our method on calcium
fluorescence trace data from two studies, with the first study showing
differential firing rate functions between two behaviors and the second study
showing an evolving firing rate function across trials due to learning.
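The abstract does not spell out the estimator's form; as a minimal sketch of the general idea, the code below performs nonnegative L1-penalized spike deconvolution under an AR(1) calcium model for a single trial. The function name, the ISTA solver, and all parameter values are illustrative assumptions, not the authors' method.

```python
import numpy as np

def deconvolve_l1(y, gamma=0.95, lam=0.2, n_iter=5000):
    """Nonnegative L1-penalized spike deconvolution via projected ISTA:
    minimize 0.5*||y - A s||^2 + lam*sum(s) subject to s >= 0,
    where column k of A holds the exponential calcium response gamma^(t-k)."""
    T = len(y)
    A = np.tril(gamma ** np.subtract.outer(np.arange(T), np.arange(T)))
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant
    s = np.zeros(T)
    for _ in range(n_iter):
        grad = A.T @ (A @ s - y)                    # gradient of the quadratic term
        s = np.maximum(s - step * (grad + lam), 0)  # prox of lam*||s||_1 over s >= 0
    return s
```

The L1 penalty produces sparse spike estimates; a multi-trial, time-varying version would additionally tie the penalized solutions across trials.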
A Semiparametric Bayesian Model for Detecting Synchrony Among Multiple Neurons
We propose a scalable semiparametric Bayesian model to capture dependencies
among multiple neurons by detecting their co-firing (possibly with some lag
time) patterns over time. After discretizing time so that there is at most one
spike in each interval, the resulting sequence of 1's (spike) and 0's (silence) for
each neuron is modeled using the logistic function of a continuous latent
variable with a Gaussian process prior. For multiple neurons, the corresponding
marginal distributions are coupled to their joint probability distribution
using a parametric copula model. The advantages of our approach are as follows:
the nonparametric component (i.e., the Gaussian process model) provides a
flexible framework for modeling the underlying firing rates; the parametric
component (i.e., the copula model) allows us to make inference regarding both
contemporaneous and lagged relationships among neurons; using the copula model,
we construct multivariate probabilistic models by separating the modeling of
univariate marginal distributions from the modeling of dependence structure
among variables; our method is easy to implement using a computationally
efficient sampling algorithm that can be easily extended to high dimensional
problems. Using simulated data, we show that our approach can correctly
capture temporal dependencies in firing rates and identify synchronous neurons.
We also apply our model to spike train data obtained from prefrontal cortical
areas of the rat brain.
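As a toy illustration of the copula idea (not the authors' sampler), the sketch below couples two Bernoulli spike trains with prescribed marginal firing probabilities through a Gaussian copula; the function name and parameter values are hypothetical, and a sinusoidal latent stands in for a Gaussian-process draw.

```python
import numpy as np
from scipy.stats import norm

def copula_spike_trains(p1, p2, rho, rng):
    """Draw two binary spike trains whose marginals are Bernoulli(p1[t]) and
    Bernoulli(p2[t]) and whose dependence follows a Gaussian copula with
    correlation rho: neuron i spikes at t iff its latent normal z_i(t)
    falls below the p_i(t)-quantile."""
    T = len(p1)
    L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
    z = rng.standard_normal((T, 2)) @ L.T            # correlated N(0,1) pairs
    s1 = (z[:, 0] < norm.ppf(p1)).astype(int)
    s2 = (z[:, 1] < norm.ppf(p2)).astype(int)
    return s1, s2
```

Because the marginals are enforced by quantile thresholding, the firing probabilities are preserved exactly while rho alone controls how often the neurons co-fire; a lagged relationship could be modeled by shifting one latent series in time.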
A statistical model for in vivo neuronal dynamics
Single neuron models have a long tradition in computational neuroscience.
Detailed biophysical models such as the Hodgkin-Huxley model as well as
simplified neuron models such as the class of integrate-and-fire models relate
the input current to the membrane potential of the neuron. Those types of
models have been extensively fitted to in vitro data where the input current is
controlled. However, those models are of little use for characterizing
intracellular in vivo recordings, since the input to the neuron is
not known. Here we propose a novel single neuron model that characterizes the
statistical properties of in vivo recordings. More specifically, we propose a
stochastic process where the subthreshold membrane potential follows a Gaussian
process and the spike emission intensity depends nonlinearly on the membrane
potential as well as the spiking history. We first show that the model has a
rich dynamical repertoire since it can capture arbitrary subthreshold
autocovariance functions, firing-rate adaptations as well as arbitrary shapes
of the action potential. We then show that this model can be efficiently fitted
to data without overfitting. Finally, we show that this model can be used to
characterize and therefore precisely compare various intracellular in vivo
recordings from different animals and experimental conditions.
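A minimal generative sketch of such a model, with an Ornstein-Uhlenbeck process standing in for the Gaussian subthreshold dynamics and an absolute refractory period as the simplest possible spike-history effect; all names and parameter values below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def simulate_neuron(T=5000, dt=0.001, tau=0.02, sigma=5.0,
                    theta=0.5, beta=5.0, t_ref=0.005, rng=None):
    """Simulate subthreshold voltage V as an OU (Gaussian) process and emit
    spikes from a conditional intensity exp(beta*(V - theta)), silenced for
    an absolute refractory period t_ref after each spike."""
    rng = np.random.default_rng(0) if rng is None else rng
    V = np.zeros(T)
    spikes = np.zeros(T, dtype=int)
    last = -np.inf
    for t in range(1, T):
        # Euler step of the OU process; stationary std is sigma*sqrt(tau/2) = 0.5 here
        V[t] = V[t-1] - dt * V[t-1] / tau + sigma * np.sqrt(dt) * rng.standard_normal()
        lam = np.exp(beta * (V[t] - theta))          # instantaneous spike intensity (1/s)
        if (t - last) * dt > t_ref and rng.random() < 1.0 - np.exp(-lam * dt):
            spikes[t] = 1
            last = t
    return V, spikes
```

Richer subthreshold autocovariances would replace the OU kernel with an arbitrary Gaussian-process kernel, and adaptation would replace the hard refractory period with a smooth history filter inside the intensity.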
Modelling multivariate discrete data with latent Gaussian processes
Multivariate count data are common in fields such as sports, neuroscience, and text mining. Models that can accurately perform factor analysis are required, especially for structured data such as time-series count matrices. We present Poisson Factor Analysis using Latent Gaussian Processes, a novel method for analyzing multivariate count data. Our approach allows for non-i.i.d. observations, which are linked in the latent space by a Gaussian process. Because of an exponential nonlinearity in the model, there is no closed-form solution, so we resort to an expectation-maximization approach with a Laplace approximation for tractable inference. We present results comparing our method with other factor analysis methods on several data sets, both synthetic and real. Our method is both qualitatively and quantitatively superior for non-i.i.d. Poisson data because its assumptions are well suited to such data.
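The inference step described above hinges on a Laplace approximation to the latent Gaussian posterior under a Poisson likelihood. A generic sketch of that computation (Newton iteration to the posterior mode; a textbook version, not the authors' exact implementation, with illustrative names):

```python
import numpy as np

def laplace_poisson(y, K, n_iter=50, jitter=1e-8):
    """Laplace approximation for f ~ N(0, K), y_i ~ Poisson(exp(f_i)):
    Newton's method locates the posterior mode, and the inverse of the
    negative Hessian there serves as the approximate posterior covariance."""
    n = len(y)
    Kinv = np.linalg.inv(K + jitter * np.eye(n))
    f = np.zeros(n)
    for _ in range(n_iter):
        grad = y - np.exp(f) - Kinv @ f              # gradient of log p(y, f)
        H = np.diag(np.exp(f)) + Kinv                # negative Hessian
        f = f + np.linalg.solve(H, grad)
    cov = np.linalg.inv(np.diag(np.exp(f)) + Kinv)
    return f, cov
```

In an EM loop, this Gaussian approximation supplies the E-step moments needed to update the factor loadings and kernel hyperparameters.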
Inferring neural firing rates from spike trains using Gaussian processes
Neural spike trains present challenges to analytical efforts due to their noisy, spiking nature. Many studies of neuroscientific and neural prosthetic importance rely on a smoothed, denoised estimate of the spike train’s underlying firing rate. Current techniques to find time-varying firing rates require ad hoc choices of parameters, offer no confidence intervals on their estimates, and can obscure potentially important single trial variability. We present a new method, based on a Gaussian Process prior, for inferring probabilistically optimal estimates of firing rate functions underlying single or multiple neural spike trains. We test the performance of the method on simulated data and experimentally gathered neural spike trains, and we demonstrate improvements over conventional estimators.
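A stripped-down version of the idea, treating binned spike counts as noisy observations of the rate and applying standard GP regression (the actual method uses a point-process likelihood; the squared-exponential kernel, function name, and hyperparameter values below are illustrative assumptions):

```python
import numpy as np

def gp_rate_estimate(t, counts, dt, ell=0.15, sig2=800.0, noise=2500.0):
    """GP regression on binned spike counts divided by bin width: returns
    the posterior mean firing rate and its pointwise posterior variance,
    which supports confidence intervals, unlike ad hoc kernel smoothers."""
    y = counts / dt                                   # empirical rate per bin
    d = np.subtract.outer(t, t)
    K = sig2 * np.exp(-0.5 * d**2 / ell**2)           # squared-exponential kernel
    Ky = K + noise * np.eye(len(y))
    alpha = np.linalg.solve(Ky, y - y.mean())
    mean = y.mean() + K @ alpha
    var = np.diag(K - K @ np.linalg.solve(Ky, K))
    return mean, var
```

The hyperparameters (ell, sig2, noise) can in turn be chosen by maximizing the marginal likelihood rather than by the ad hoc choices the abstract criticizes.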
Markov chain Monte Carlo for continuous-time discrete-state systems
A variety of phenomena are best described using dynamical models that operate on a discrete state space and in continuous time. Examples include Markov (and semi-Markov) jump processes, continuous-time Bayesian networks, renewal processes, and other point processes. These continuous-time, discrete-state models are ideal building blocks for Bayesian models in fields such as systems biology, genetics, chemistry, computer networks, and human-computer interaction. However, a challenge to their more widespread use is the computational burden of posterior inference; this typically involves approximations like time discretization and can be computationally intensive. In this thesis, we describe a new class of Markov chain Monte Carlo methods that allow efficient computation while still being exact. The core idea is an auxiliary-variable Gibbs sampler that alternately resamples a random discretization of time given the state trajectory of the system, and then samples a new trajectory given this discretization. We introduce this idea by relating it to a classical construction called uniformization, and use it to develop algorithms that outperform the state of the art for models based on the Markov jump process. We then extend the scope of these samplers to a wider class of models, such as nonstationary renewal processes and semi-Markov jump processes. By developing a more general framework beyond uniformization, we remedy various limitations of the original algorithms, allowing us to develop MCMC samplers for systems with infinite state spaces, unbounded rates, and systems indexed by more general continuous spaces than time.
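The uniformization construction itself can be sketched in a few lines: pick a rate Omega at least as large as every exit rate, turn the rate matrix Q into a discrete kernel B = I + Q/Omega, and drive state changes by a Poisson(Omega) clock, allowing self-jumps. The code below is a forward-simulation illustration of this construction under assumed names, not the thesis's Gibbs sampler.

```python
import numpy as np

def mjp_uniformization(Q, s0, T, rng):
    """Forward-simulate a Markov jump process on [0, T] via uniformization:
    candidate event times form a Poisson(Omega) process, and the state
    evolves by the discrete kernel B = I + Q/Omega, which permits self-jumps.
    Returns entry times and the state entered at each (times[0] = 0 is s0)."""
    Omega = 2.0 * np.max(-np.diag(Q))       # any Omega >= max_i |q_ii| is valid
    B = np.eye(Q.shape[0]) + Q / Omega      # rows sum to 1 since Q's rows sum to 0
    times = [0.0]
    states = [s0]
    t = rng.exponential(1.0 / Omega)
    while t < T:
        states.append(rng.choice(len(B), p=B[states[-1]]))
        times.append(t)
        t += rng.exponential(1.0 / Omega)
    return np.array(times), np.array(states)
```

Marginalizing out the self-jumps recovers the usual jump-process path; the thesis's sampler runs this correspondence in reverse, resampling the Poisson grid given a trajectory and then the trajectory given the grid.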