A common goodness-of-fit framework for neural population models using marked point process time-rescaling
A critical component of any statistical modeling procedure is the ability to assess the goodness-of-fit between a model and observed data. For spike train models of individual neurons, many goodness-of-fit measures rely on the time-rescaling theorem and assess model quality using rescaled spike times. Recently, there has been increasing interest in statistical models that describe the simultaneous spiking activity of neuron populations, either in a single brain region or across brain regions. Classically, such models have used spike-sorted data to describe relationships between the identified neurons, but more recently clusterless modeling methods have been used to describe population activity using a single model. Here we develop a generalization of the time-rescaling theorem that enables comprehensive goodness-of-fit analysis for either of these classes of population models. We use the theory of marked point processes to model population spiking activity, and show that under the correct model, each spike can be rescaled individually to generate a uniformly distributed set of events in time and the space of spike marks. After rescaling, multiple well-established goodness-of-fit procedures and statistical tests are available. We demonstrate the application of these methods both to simulated data and to real population spiking in rat hippocampus. We have made the MATLAB and Python code used for the analyses in this paper publicly available through our GitHub repository at https://github.com/Eden-Kramer-Lab/popTRT. This work was supported by grants from the NIH (MH105174, NS094288) and the Simons Foundation (542971).
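The authors' full marked point process implementation is available in the popTRT repository linked above. As a rough illustration of the classical (unmarked) time-rescaling idea that the abstract generalizes, the following Python sketch rescales inter-spike intervals by a model's integrated conditional intensity and tests the result for uniformity with a Kolmogorov-Smirnov test; the function name and the toy Poisson example are illustrative assumptions, not code from that repository.

```python
import numpy as np
from scipy import stats

def rescaled_intervals(spike_times, intensity, dt):
    """Rescale inter-spike intervals by the model's integrated conditional intensity.

    spike_times : 1-D array of spike times (seconds), assumed sorted
    intensity   : conditional intensity of the fitted model on a regular grid (Hz)
    dt          : grid spacing of `intensity` (seconds)
    """
    # Lambda(t) evaluated on the grid edges 0, dt, 2*dt, ...
    cum = np.concatenate(([0.0], np.cumsum(intensity) * dt))
    grid = np.arange(len(cum)) * dt
    Lambda_at_spikes = np.interp(spike_times, grid, cum)
    z = np.diff(Lambda_at_spikes)      # ~ Exp(1) under the correct model
    return 1.0 - np.exp(-z)            # ~ Uniform(0, 1) under the correct model

# Toy check: a homogeneous Poisson process at 10 Hz, "fitted" with the true rate.
rng = np.random.default_rng(0)
rate, T, dt = 10.0, 100.0, 1e-3
spikes = np.cumsum(rng.exponential(1.0 / rate, size=2000))
spikes = spikes[spikes < T]
u = rescaled_intervals(spikes, np.full(int(T / dt), rate), dt)
print(stats.kstest(u, "uniform"))      # large p-value -> no evidence of lack of fit
```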
Deterministic networks for probabilistic computing
Neural-network models of high-level brain functions such as memory recall and
reasoning often rely on the presence of stochasticity. The majority of these
models assume that each neuron in the functional network is equipped with its
own private source of randomness, often in the form of uncorrelated external
noise. However, both in vivo and in silico, the number of noise sources is
limited due to space and bandwidth constraints. Hence, neurons in large
networks usually need to share noise sources. Here, we show that the resulting
shared-noise correlations can significantly impair the performance of
stochastic network models. We demonstrate that this problem can be overcome by
using deterministic recurrent neural networks as sources of uncorrelated noise,
exploiting the decorrelating effect of inhibitory feedback. Consequently, even
a single recurrent network of a few hundred neurons can serve as a natural
noise source for large ensembles of functional networks, each comprising
thousands of units. We successfully apply the proposed framework to a diverse
set of binary-unit networks with different dimensionalities and entropies, as
well as to a network reproducing handwritten digits with distinct predefined
frequencies. Finally, we show that the same design transfers to functional
networks of spiking neurons.
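As a minimal sketch of the problem the abstract identifies (not of the authors' deterministic recurrent-network solution), the following Python snippet compares pairwise correlations among stochastic binary units when their input noise is fully private versus partly drawn from one shared source; the unit model and parameters are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, n_steps = 50, 20_000

def mean_pairwise_correlation(shared_fraction):
    """Mean pairwise correlation of binary units whose input noise mixes a
    private term with a single shared term (shared_fraction of the variance)."""
    shared = rng.standard_normal(n_steps)
    private = rng.standard_normal((n_units, n_steps))
    noise = np.sqrt(shared_fraction) * shared + np.sqrt(1.0 - shared_fraction) * private
    states = (noise > 0.5).astype(float)            # threshold -> binary activity
    c = np.corrcoef(states)
    return c[np.triu_indices(n_units, k=1)].mean()

print("private noise only :", mean_pairwise_correlation(0.0))
print("80% shared variance:", mean_pairwise_correlation(0.8))
```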
Stimulus-dependent maximum entropy models of neural population codes
Neural populations encode information about their stimulus in a collective
fashion, by joint activity patterns of spiking and silence. A full account of
this mapping from stimulus to neural activity is given by the conditional
probability distribution over neural codewords given the sensory input. To be
able to infer a model for this distribution from large-scale neural recordings,
we introduce a stimulus-dependent maximum entropy (SDME) model: a minimal
extension of the canonical linear-nonlinear model of a single neuron to a
pairwise-coupled neural population. The model captures the
single-cell response properties as well as the correlations in neural spiking
due to shared stimulus and due to effective neuron-to-neuron connections. Here
we show that in a population of 100 retinal ganglion cells in the salamander
retina responding to temporal white-noise stimuli, dependencies between cells
play an important encoding role. As a result, the SDME model gives a more
accurate account of single cell responses and in particular outperforms
uncoupled models in reproducing the distributions of codewords emitted in
response to a stimulus. We show how the SDME model, in conjunction with static
maximum entropy models of population vocabulary, can be used to estimate
information-theoretic quantities like surprise and information transmission in
a neural population.
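A hypothetical Python sketch of the functional form described above, with the stimulus dependence reduced to a given field vector h(s) (in the paper the fields come from a linear-nonlinear filtering of the stimulus); the sign conventions and the exhaustive computation of the partition function are our assumptions and are practical only for small populations.

```python
import numpy as np
from itertools import product

def sdme_log_prob(sigma, h_s, J):
    """Log-probability of a binary codeword sigma under a stimulus-dependent
    maximum entropy model, P(sigma | s) proportional to
    exp(h(s).sigma + 0.5 * sigma.J.sigma), with the partition function
    computed by exhaustive enumeration (small N only)."""
    N = len(h_s)
    log_weights = [h_s @ p + 0.5 * p @ J @ p
                   for p in (np.array(q, dtype=float) for q in product([0, 1], repeat=N))]
    logZ = np.log(np.sum(np.exp(log_weights)))
    sigma = np.asarray(sigma, dtype=float)
    return h_s @ sigma + 0.5 * sigma @ J @ sigma - logZ

# Toy example with N = 5 cells; here the stimulus-dependent fields are just a
# fixed random vector standing in for the output of the stimulus filter.
rng = np.random.default_rng(2)
N = 5
J = rng.normal(0.0, 0.3, (N, N)); J = (J + J.T) / 2; np.fill_diagonal(J, 0.0)
h_s = rng.normal(-1.0, 0.5, N)
print(np.exp(sdme_log_prob([1, 0, 1, 0, 0], h_s, J)))
```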
A simple mechanism for higher-order correlations in integrate-and-fire neurons
The collective dynamics of neural populations are often characterized in
terms of correlations in the spike activity of different neurons. Open
questions surround the basic nature of these correlations. In particular, what
leads to higher-order correlations -- correlations in the population activity
that extend beyond those expected from cell pairs? Here, we examine this
question for a simple, but ubiquitous, circuit feature: common fluctuating
input arriving at spiking neurons of integrate-and-fire type. We show that this
common input leads to strong higher-order correlations, as found in earlier work
with discrete threshold-crossing models. Moreover, we find that the same is true for another
widely used, doubly-stochastic model of neural spiking, the linear-nonlinear
cascade. We explain the surprisingly strong connection between the collective
dynamics produced by these models, and conclude that higher-order correlations
are both broadly expected and possible to capture with surprising accuracy by
simplified (and tractable) descriptions of neural spiking.
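As a rough sketch of the circuit feature the abstract describes, the following Python snippet simulates leaky integrate-and-fire neurons receiving a mixture of shared and private fluctuating input and compares the variance of the summed population count with the sum of single-cell variances; all parameters are illustrative, and the paper's analysis of specifically higher-order (beyond-pairwise) structure goes well beyond this simple check.

```python
import numpy as np

rng = np.random.default_rng(3)

# Leaky integrate-and-fire neurons driven by a mix of shared and private fluctuations.
n_neurons, n_steps, dt = 20, 100_000, 1e-4      # 10 s at 0.1 ms resolution
tau, v_thresh, v_reset = 0.02, 1.0, 0.0
mu, sigma, c = 0.8, 0.6, 0.5                    # c = shared fraction of the input variance

shared = rng.standard_normal(n_steps)
v = np.zeros(n_neurons)
spikes = np.zeros((n_neurons, n_steps), dtype=bool)
for t in range(n_steps):
    noise = np.sqrt(c) * shared[t] + np.sqrt(1.0 - c) * rng.standard_normal(n_neurons)
    v += (dt / tau) * (mu - v) + sigma * np.sqrt(dt / tau) * noise
    fired = v >= v_thresh
    spikes[fired, t] = True
    v[fired] = v_reset

# Spike counts in 10 ms bins: with common input, the variance of the summed
# population count exceeds the sum of single-cell variances, i.e. the population
# is correlated beyond what independent neurons would produce.
counts = spikes.reshape(n_neurons, -1, 100).any(axis=2)
pop_count = counts.sum(axis=0)
print("Var[population count]:", pop_count.var(),
      "  sum of single-cell variances:", counts.var(axis=1).sum())
```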
Mechanisms of Zero-Lag Synchronization in Cortical Motifs
Zero-lag synchronization between distant cortical areas has been observed in
a diversity of experimental data sets and between many different regions of the
brain. Several computational mechanisms have been proposed to account for such
isochronous synchronization in the presence of long conduction delays: Of
these, the phenomenon of "dynamical relaying" - a mechanism that relies on a
specific network motif - has proven to be the most robust with respect to
parameter mismatch and system noise. Surprisingly, despite a contrary belief in
the community, the common driving motif is an unreliable means of establishing
zero-lag synchrony. Although dynamical relaying has been validated in empirical
and computational studies, its deeper dynamical mechanisms remain unclear and a
comparison with the dynamics on other motifs is lacking. By systematically comparing
synchronization on a variety of small motifs, we establish that the presence of
a single reciprocally connected pair - a "resonance pair" - plays a crucial
role in disambiguating those motifs that foster zero-lag synchrony in the
presence of conduction delays (such as dynamical relaying) from those that do
not (such as the common driving triad). Remarkably, minor structural changes to
the common driving motif that incorporate a reciprocal pair recover robust
zero-lag synchrony. The findings are observed in computational models of
spiking neurons, populations of spiking neurons and neural mass models, and
arise whether the oscillatory systems are periodic, chaotic, noise-free or
driven by stochastic inputs. The influence of the resonance pair is also robust
to parameter mismatch and asymmetrical time delays amongst the elements of the
motif. We call this manner of facilitating zero-lag synchrony resonance-induced
synchronization, outline the conditions for its occurrence, and propose that it
may be a general mechanism to promote zero-lag synchrony in the brain.
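A toy illustration (our assumption, far simpler than the paper's spiking and neural-mass models): delayed Kuramoto phase oscillators on the dynamical-relaying motif 1 <-> 2 <-> 3, showing that the two outer nodes, which communicate only through the central relay, can settle into zero-lag synchrony even though every connection carries a conduction delay. Parameters and initial conditions are chosen for illustration and start the system near synchrony.

```python
import numpy as np

def relay_motif(K=20.0, f=40.0, delay=0.004, dt=1e-4, T=2.0, seed=0):
    """Euler integration of three delayed Kuramoto oscillators on the
    dynamical-relaying motif (outer nodes coupled only via the central hub)."""
    rng = np.random.default_rng(seed)
    omega = 2 * np.pi * f
    n_steps, d = int(T / dt), int(delay / dt)
    adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)   # relay motif
    theta = np.zeros((3, n_steps))
    theta[:, 0] = rng.uniform(0, 0.5, 3)          # start near (but not at) synchrony
    for t in range(1, n_steps):
        lagged = theta[:, max(t - d, 0)]          # delayed phases of the senders
        drive = (adj * np.sin(lagged[None, :] - theta[:, t - 1, None])).sum(axis=1)
        theta[:, t] = theta[:, t - 1] + dt * (omega + K * drive)
    return theta

theta = relay_motif()
outer = np.cos(theta[0, -5000:] - theta[2, -5000:]).mean()   # ~1 -> zero-lag synchrony
hub   = np.cos(theta[0, -5000:] - theta[1, -5000:]).mean()   # <1 -> outer nodes lag the hub
print(f"outer-outer zero-lag coherence: {outer:.3f}, outer-hub coherence: {hub:.3f}")
```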
Spatio-temporal spike trains analysis for large scale networks using maximum entropy principle and Monte-Carlo method
Understanding the dynamics of neural networks is a major challenge in
experimental neuroscience. For that purpose, a model of the recorded
activity that reproduces the main statistics of the data is required. In a
first part, we present a review on recent results dealing with spike train
statistics analysis using maximum entropy models (MaxEnt). Most of these
studies have focused on modelling synchronous spike patterns, leaving
aside the temporal dynamics of the neural activity. However, the maximum
entropy principle can be generalized to the temporal case, leading to Markovian
models where memory effects and time correlations in the dynamics are properly
taken into account. In a second part, we present a new method based on
Monte-Carlo sampling which is suited for the fitting of large-scale
spatio-temporal MaxEnt models. The formalism and the tools presented here will
be essential for fitting spatio-temporal MaxEnt models to large neural ensembles.
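As a minimal sketch of the Monte-Carlo ingredient, restricted to a purely spatial pairwise MaxEnt (Ising-like) model without the temporal, Markovian terms the paper develops, the following Python code draws spike-pattern samples with a single-flip Metropolis sampler; the function name and parameters are illustrative assumptions.

```python
import numpy as np

def metropolis_sample(h, J, n_samples=2000, n_sweeps=10, seed=0):
    """Metropolis sampling from a pairwise maximum entropy model over binary
    spike patterns, P(sigma) proportional to exp(h.sigma + 0.5 * sigma.J.sigma)."""
    rng = np.random.default_rng(seed)
    N = len(h)
    sigma = rng.integers(0, 2, N).astype(float)
    samples = np.empty((n_samples, N))
    for s in range(n_samples):
        for _ in range(n_sweeps * N):                 # single spin-flip proposals
            i = rng.integers(N)
            # change in log-probability if sigma[i] is flipped (J has zero diagonal)
            d_logp = (1 - 2 * sigma[i]) * (h[i] + J[i] @ sigma)
            if np.log(rng.random()) < d_logp:
                sigma[i] = 1 - sigma[i]
        samples[s] = sigma
    return samples

# Toy example: 10 "neurons" with weak random couplings and negative fields.
rng = np.random.default_rng(4)
N = 10
J = rng.normal(0.0, 0.2, (N, N)); J = (J + J.T) / 2; np.fill_diagonal(J, 0.0)
h = rng.normal(-1.5, 0.3, N)
samples = metropolis_sample(h, J)
print("estimated firing probabilities:", samples.mean(axis=0).round(2))
```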