Spatio-temporal spike trains analysis for large scale networks using maximum entropy principle and Monte-Carlo method
Understanding the dynamics of neural networks is a major challenge in
experimental neuroscience. For that purpose, a modelling of the recorded
activity that reproduces the main statistics of the data is required. In a
first part, we present a review on recent results dealing with spike train
statistics analysis using maximum entropy models (MaxEnt). Most of these
studies have been focusing on modelling synchronous spike patterns, leaving
aside the temporal dynamics of the neural activity. However, the maximum
entropy principle can be generalized to the temporal case, leading to Markovian
models where memory effects and time correlations in the dynamics are properly
taken into account. In a second part, we present a new method based on
Monte-Carlo sampling which is suited for the fitting of large-scale
spatio-temporal MaxEnt models. The formalism and the tools presented here will
be essential to fit MaxEnt spatio-temporal models to large neural ensembles. Comment: 41 pages, 10 figures
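The Monte-Carlo fitting strategy described above can be illustrated with a minimal sketch: a pairwise (Ising-type) spatial MaxEnt model fitted by moment matching, where model moments are estimated by Metropolis sampling and the parameters are nudged until sampled means and correlations match the empirical ones. This is a simplified spatial-only toy (the paper's spatio-temporal models add time-lagged interactions); all function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sample(h, J, n_steps=5000, burn=1000, thin=10):
    """Sample binary patterns s in {0,1}^n from P(s) ~ exp(h.s + s.J.s/2)
    using single-site Metropolis flips."""
    n = len(h)
    s = rng.integers(0, 2, n).astype(float)
    samples = []
    for t in range(n_steps):
        i = rng.integers(n)
        # change in log-probability if s[i] is flipped (J symmetric, zero diagonal)
        dlogp = (1 - 2 * s[i]) * (h[i] + J[i] @ s)
        if np.log(rng.random()) < dlogp:
            s[i] = 1 - s[i]
        if t >= burn and t % thin == 0:
            samples.append(s.copy())
    return np.array(samples)

def fit_maxent(data, n_iter=20, lr=0.5):
    """Gradient ascent on the log-likelihood: adjust (h, J) until the model's
    sampled means and pairwise correlations match the empirical ones."""
    T, n = data.shape
    emp_m = data.mean(axis=0)
    emp_C = data.T @ data / T
    h = np.zeros(n)
    J = np.zeros((n, n))
    for _ in range(n_iter):
        samp = metropolis_sample(h, J)
        h += lr * (emp_m - samp.mean(axis=0))
        dJ = lr * (emp_C - samp.T @ samp / len(samp))
        np.fill_diagonal(dJ, 0.0)
        J += dJ
    return h, J
```

For large networks, the expensive step is the inner sampling loop; the appeal of the Monte-Carlo approach is that it avoids the exact partition-function sums that make direct maximum-likelihood fitting intractable.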
Information entropy production of maximum entropy Markov chains from spike trains
The spiking activity of neuronal networks follows laws that are not time-reversal symmetric; pre-synaptic and post-synaptic neurons, stimulus correlations, and noise correlations all have a clear time order. Therefore, a biologically realistic statistical model for the spiking activity should be able to capture some degree of time irreversibility. We use the thermodynamic formalism to build a framework in the context of maximum entropy models to quantify the degree of time irreversibility, providing an explicit formula for the information entropy production of the inferred maximum entropy Markov chain. We provide examples to illustrate our results and discuss the importance of time irreversibility for modeling the spike train statistics.
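For a stationary Markov chain with transition matrix P and stationary distribution pi, a standard formula for the entropy production rate is the sum over transitions of pi_x P(x,y) log[pi_x P(x,y) / (pi_y P(y,x))], which vanishes exactly when detailed balance (time reversibility) holds. A small sketch of that computation (function names are illustrative; this is the textbook formula, not necessarily the paper's exact expression):

```python
import numpy as np

def stationary_distribution(P):
    """Left eigenvector of P for eigenvalue 1, normalized to a probability vector."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    return pi / pi.sum()

def entropy_production(P):
    """Entropy production rate of a stationary Markov chain:
    sum_{x,y} pi_x P[x,y] * log( pi_x P[x,y] / (pi_y P[y,x]) ).
    Zero iff the chain is reversible (detailed balance)."""
    pi = stationary_distribution(P)
    ep = 0.0
    for x in range(len(P)):
        for y in range(len(P)):
            if P[x, y] > 0 and P[y, x] > 0:
                flux = pi[x] * P[x, y]
                ep += flux * np.log(flux / (pi[y] * P[y, x]))
    return ep
```

As a sanity check, any irreducible two-state chain is reversible and gives zero entropy production, while a biased three-state cycle (probability 0.8 forward, 0.2 backward) gives a strictly positive value.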
Entropy-based parametric estimation of spike train statistics
We consider the evolution of a network of neurons, focusing on the asymptotic
behavior of spikes dynamics instead of membrane potential dynamics. The spike
response is not sought as a deterministic response in this context, but as a
conditional probability: "Reading out the code" consists of inferring such a
probability. This probability is computed from empirical raster plots, by using
the framework of thermodynamic formalism in ergodic theory. This gives us a
parametric statistical model where the probability has the form of a Gibbs
distribution. In this respect, this approach generalizes the seminal and
profound work of Schneidman and collaborators. A minimal presentation of the
formalism is reviewed here, while a general algorithmic estimation method is
proposed yielding fast convergent implementations. It is also made explicit how
several spike observables (entropy, rate, synchronizations, correlations) are
given in closed form by the parametric estimation. This paradigm not only
allows us to estimate the spike statistics, given a design choice, but also
to compare different models, thus answering comparative questions about the
neural code such as: "are correlations (or time synchrony, or a given set of
spike patterns, ...) significant with respect to rate coding only?" A numerical
validation of the method is proposed and the perspectives regarding spike-train
code analysis are also discussed. Comment: 37 pages, 8 figures, submitted
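The model-comparison question quoted above ("are correlations significant with respect to rate coding only?") can be illustrated in its simplest form: compare the empirical distribution of binary spike words against a rate-only (independent Bernoulli) model via the KL divergence. This is a toy version of the comparison, not the paper's Gibbs-distribution machinery; names are illustrative.

```python
from collections import Counter
import numpy as np

def pattern_distribution(spikes):
    """Empirical distribution over binary spike words (rows of `spikes`)."""
    counts = Counter(map(tuple, spikes))
    T = len(spikes)
    return {w: c / T for w, c in counts.items()}

def independent_model(spikes):
    """Rate-only model: neurons fire independently at their empirical rates."""
    rates = spikes.mean(axis=0)
    def prob(word):
        w = np.array(word)
        return float(np.prod(rates**w * (1 - rates)**(1 - w)))
    return prob

def kl_vs_rate_model(spikes):
    """KL divergence (bits) between the empirical word distribution and the
    rate-only model: how much structure correlations add beyond firing rates."""
    emp = pattern_distribution(spikes)
    p_ind = independent_model(spikes)
    return sum(p * np.log2(p / p_ind(w)) for w, p in emp.items() if p_ind(w) > 0)
```

A perfectly correlated pair of neurons yields roughly one bit of divergence from the rate-only model, while independently firing neurons yield a divergence near zero.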
Linear response for spiking neuronal networks with unbounded memory
We establish a general linear response relation for spiking neuronal
networks, based on chains with unbounded memory. This relation allows us to
predict the influence of weak-amplitude, time-dependent external stimuli on
spatio-temporal spike correlations, from the spontaneous statistics (without
stimulus) in a general context where the memory in spike dynamics can extend
arbitrarily far in the past. Using this approach, we show how linear response
is explicitly related to neuronal dynamics with an example, the gIF model,
introduced by M. Rudolph and A. Destexhe. This example illustrates the
collective effect of the stimuli, intrinsic neuronal dynamics, and network
connectivity on spike statistics. We illustrate our results with numerical
simulations. Comment: 60 pages, 8 figures
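Schematically (this is the generic shape of a linear response relation, not the paper's exact formula), the first-order change in the average of an observable $f$ under a weak stimulus $S$ takes a convolution form,

```latex
\delta\mu_t[f] \;=\; \mu^{(S)}_t[f] - \mu^{(\mathrm{sp})}[f]
\;\approx\; \sum_{\tau \ge 0} K_f(\tau)\, S(t-\tau),
```

where the kernel $K_f$ is built from spontaneous spatio-temporal correlation functions, in the spirit of fluctuation-dissipation relations; the unbounded memory of the chain is what allows $\tau$ to extend arbitrarily far into the past.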
On the Geometric Ergodicity of Metropolis-Hastings Algorithms for Lattice Gaussian Sampling
Sampling from the lattice Gaussian distribution is emerging as an important
problem in coding and cryptography. In this paper, the classic
Metropolis-Hastings (MH) algorithm from Markov chain Monte Carlo (MCMC) methods
is adapted for lattice Gaussian sampling. Two MH-based algorithms are proposed,
which overcome the restriction suffered by the default Klein algorithm. The
first one, referred to as the independent Metropolis-Hastings-Klein (MHK)
algorithm, tries to establish a Markov chain through an independent proposal
distribution. We show that the Markov chain arising from the independent MHK
algorithm is uniformly ergodic, namely, it converges to the stationary
distribution exponentially fast regardless of the initial state. Moreover, the
rate of convergence is explicitly calculated in terms of the theta series,
leading to a predictable mixing time. In order to further exploit the
convergence potential, a symmetric Metropolis-Klein (SMK) algorithm is
proposed. It is proven that the Markov chain induced by the SMK algorithm is
geometrically ergodic, where a suitable choice of the initial state can
enhance the convergence performance. Comment: Submitted to IEEE Transactions on Information Theory
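The independent Metropolis-Hastings idea can be illustrated on a toy one-dimensional case: sampling the discrete Gaussian on Z with a uniform independent proposal. This is a didactic sketch only (real lattice Gaussian samplers such as Klein's algorithm work on n-dimensional lattices with a much better proposal); names and the proposal choice are illustrative.

```python
import math
import random

def discrete_gaussian_weight(x, sigma, c=0.0):
    """Unnormalized lattice Gaussian weight rho_{sigma,c}(x) = exp(-(x-c)^2 / (2 sigma^2))."""
    return math.exp(-((x - c) ** 2) / (2 * sigma**2))

def independent_mh_lattice_gaussian(sigma, c=0.0, n_samples=1000, cut=30, rng=random):
    """Independent Metropolis-Hastings targeting the discrete Gaussian on Z.
    The proposal is uniform on {-cut, ..., cut}, independent of the current
    state, so the acceptance ratio reduces to a ratio of target weights."""
    support = list(range(-cut, cut + 1))
    x = 0
    out = []
    for _ in range(n_samples):
        y = rng.choice(support)  # independent proposal q(y), uniform
        # min(1, pi(y) q(x) / (pi(x) q(y))) = min(1, rho(y) / rho(x)) for uniform q
        a = discrete_gaussian_weight(y, sigma, c) / discrete_gaussian_weight(x, sigma, c)
        if rng.random() < a:
            x = y
        out.append(x)
    return out
```

Because the proposal does not depend on the current state, the chain's convergence can be bounded uniformly over starting points, which is the uniform-ergodicity phenomenon the abstract refers to (there quantified via the theta series of the lattice).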
On the existence and non-existence of finitary codings for a class of random fields
We study the existence of finitary codings (also called finitary homomorphisms or finitary factor maps) from a finite-valued i.i.d. process to certain random fields. For Markov random fields we show, using ideas of Marton and Shields, that the presence of a phase transition is an obstruction to the existence of such a coding: this yields a large class of Bernoulli shifts for which no such coding exists. Conversely, we show that for the stationary distribution of a monotone exponentially ergodic probabilistic cellular automaton such a coding does exist. The construction of the coding is partially inspired by the Propp-Wilson algorithm for exact simulation. In particular, combining our results with a theorem of Martinelli and Olivieri, we obtain that for the plus state of the ferromagnetic Ising model on Z^d, d >= 2, there is such a coding when the interaction parameter is below its critical value and no such coding when it is above its critical value.
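The Propp-Wilson (coupling-from-the-past) idea invoked above can be sketched on a toy monotone chain: a lazy walk on {0, ..., N} whose top and bottom trajectories are driven by the same randomness; once they coalesce, the common value at time 0 is an exact sample from the stationary distribution (uniform, for this particular chain). This is a generic textbook sketch, not the paper's construction.

```python
import random

def update(x, u, N):
    """Monotone single-step update on {0,...,N} driven by a shared uniform u:
    move down if u < 1/3, up if u > 2/3, stay otherwise (order-preserving)."""
    if u < 1 / 3:
        return max(x - 1, 0)
    if u > 2 / 3:
        return min(x + 1, N)
    return x

def propp_wilson(N, rng=random):
    """Coupling from the past: run the top (N) and bottom (0) chains with the
    same randomness from time -T; if they coalesce by time 0, the common value
    is an exact stationary sample. Otherwise double T, REUSING the old draws."""
    us = []   # us[k] drives the step at time -len(us)+k, ..., -1
    T = 1
    while True:
        us = [rng.random() for _ in range(T - len(us))] + us  # extend into the past
        lo, hi = 0, N
        for u in us:
            lo, hi = update(lo, u, N), update(hi, u, N)
        if lo == hi:
            return lo
        T *= 2
```

Monotonicity is what makes this feasible: sandwiching every state between the extremal top and bottom trajectories means only two chains need to be tracked, the same structural property exploited for monotone probabilistic cellular automata in the abstract.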