Learning Hybrid System Models for Supervisory Decoding of Discrete State, with applications to the Parietal Reach Region
Based on Gibbs sampling, a novel method to identify mathematical models of neural activity in response to temporal changes of behavioral or cognitive state is presented. This work is motivated by the developing field of neural prosthetics, where a supervisory controller is required to classify activity of a brain region into suitable discrete modes. Here, neural activity in each discrete mode is modeled with nonstationary point processes, and transitions between modes are modeled as hidden Markov models. The effectiveness of this framework is first demonstrated on a simulated example. The identification algorithm is then applied to extracellular neural activity recorded from multi-electrode arrays in the parietal reach region of a rhesus monkey, and the results demonstrate the ability to decode discrete changes even from small data sets.
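The decoding step described above can be illustrated with a toy hidden Markov model in which each discrete mode emits Poisson-distributed spike counts, decoded with the Viterbi algorithm. This is only a sketch of the general idea, not the paper's Gibbs-sampling identification method; the two modes, their firing rates, and the transition probabilities below are all invented for illustration.

```python
import math

def viterbi_poisson(counts, log_A, log_pi, rates):
    """Most likely hidden mode sequence given binned spike counts,
    assuming each discrete mode emits Poisson counts at its own rate."""
    def log_emit(k, lam):
        # log Poisson pmf: k*log(lam) - lam - log(k!)
        return k * math.log(lam) - lam - math.lgamma(k + 1)

    n = len(rates)
    # delta[j] = best log-probability of any mode path ending in mode j
    delta = [log_pi[j] + log_emit(counts[0], rates[j]) for j in range(n)]
    back = []
    for k in counts[1:]:
        prev, ptrs, delta = delta, [], []
        for j in range(n):
            best = max(range(n), key=lambda i: prev[i] + log_A[i][j])
            ptrs.append(best)
            delta.append(prev[best] + log_A[best][j] + log_emit(k, rates[j]))
        back.append(ptrs)
    path = [max(range(n), key=lambda j: delta[j])]
    for ptrs in reversed(back):          # backtrack through the pointers
        path.append(ptrs[path[-1]])
    return path[::-1]

# Hypothetical setup: mode 0 ("plan", ~2 spikes/bin) vs mode 1
# ("reach", ~10 spikes/bin), with sticky transitions.
log_A = [[math.log(0.9), math.log(0.1)],
         [math.log(0.1), math.log(0.9)]]
log_pi = [math.log(0.5), math.log(0.5)]
modes = viterbi_poisson([1, 2, 3, 9, 11, 10], log_A, log_pi, rates=[2.0, 10.0])
```

On this toy data the decoder switches from mode 0 to mode 1 when the counts jump, mirroring the supervisory classification of discrete modes described in the abstract.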
Dynamic Adaptive Computation: Tuning network states to task requirements
Neural circuits are able to perform computations under very diverse
conditions and requirements. The required computations impose clear constraints
on their fine-tuning: a rapid and maximally informative response to stimuli in
general requires decorrelated baseline neural activity. Such network dynamics
is known as asynchronous-irregular. In contrast, spatio-temporal integration of
information requires maintenance and transfer of stimulus information over
extended time periods. This can be realized at criticality, a phase transition
where correlations, sensitivity and integration time diverge. Being able to
flexibly switch, or even combine the above properties in a task-dependent
manner would present a clear functional advantage. We propose that cortex
operates in a "reverberating regime" because it is particularly favorable for
ready adaptation of computational properties to context and task. This
reverberating regime enables cortical networks to interpolate between the
asynchronous-irregular and the critical state by small changes in effective
synaptic strength or excitation-inhibition ratio. These changes directly adapt
computational properties, including sensitivity, amplification, integration
time and correlation length within the local network. We review recent
converging evidence that cortex in vivo operates in the reverberating regime,
and that various cortical areas have adapted their integration times to
processing requirements. In addition, we propose that neuromodulation enables a
fine-tuning of the network, so that local circuits can either decorrelate or
integrate, and quench or maintain their input depending on task. We argue that
this task-dependent tuning, which we call "dynamic adaptive computation",
represents a central organizing principle of cortical networks, and discuss
first experimental evidence.

Comment: 6 pages + references, 2 figures
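The interpolation described above can be made concrete with the standard branching-process picture, in which the integration (autocorrelation) time tau = -dt / ln(m) diverges as the branching parameter m approaches the critical value 1. A minimal sketch, with m standing in for the effective synaptic strength or excitation-inhibition ratio; the specific values are illustrative only.

```python
import math

def integration_time(m, dt=1.0):
    """Autocorrelation (integration) time of a branching process with
    branching parameter m < 1: tau = -dt / ln(m).  tau diverges as
    m -> 1 (critical) and shrinks toward 0 as m -> 0 (uncorrelated,
    asynchronous-irregular-like)."""
    return -dt / math.log(m)

# Small changes in m interpolate between short and long integration
# times, as described for the reverberating regime:
for m in (0.5, 0.9, 0.99, 0.999):
    print(f"m = {m:<5}: tau = {integration_time(m):8.1f} time steps")
```

The point of the sketch is that modest changes in m near (but below) 1 tune the integration time over orders of magnitude, which is the claimed functional advantage of the reverberating regime.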
Spike train statistics and Gibbs distributions
This paper is based on a lecture given in the LACONEU summer school,
Valparaiso, January 2012. We introduce Gibbs distribution in a general setting,
including non-stationary dynamics, and then present three examples of such
Gibbs distributions, in the context of neural networks spike train statistics:
(i) Maximum entropy model with spatio-temporal constraints; (ii) Generalized
Linear Models; (iii) Conductance-based Integrate-and-Fire model with chemical
synapses and gap junctions.

Comment: 23 pages, submitted
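Example (i) can be sketched in its simplest, purely spatial form (Ising-like pairwise constraints rather than the full spatio-temporal case): fit the fields and couplings of a Gibbs distribution by gradient ascent so that its moments match prescribed empirical averages. The target statistics below are invented, and the exhaustive state enumeration limits this toy to a handful of neurons.

```python
import itertools
import math

def maxent_pairwise(means, corrs, n, steps=5000, lr=0.1):
    """Fit h_i and J_ij of a Gibbs distribution
    P(s) ~ exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j), s_i in {0, 1},
    by gradient ascent, so that the model's moments match the
    prescribed empirical averages (maximum entropy under constraints)."""
    h = [0.0] * n
    J = {(i, j): 0.0 for i in range(n) for j in range(i + 1, n)}
    states = list(itertools.product([0, 1], repeat=n))
    for _ in range(steps):
        w = [math.exp(sum(h[i] * s[i] for i in range(n))
                      + sum(J[ij] * s[ij[0]] * s[ij[1]] for ij in J))
             for s in states]
        Z = sum(w)
        # gradient = empirical average minus current model average
        for i in range(n):
            h[i] += lr * (means[i] - sum(wk / Z * s[i]
                                         for wk, s in zip(w, states)))
        for (i, j) in J:
            J[(i, j)] += lr * (corrs[(i, j)] - sum(wk / Z * s[i] * s[j]
                                                   for wk, s in zip(w, states)))
    return h, J

# Invented target statistics for two neurons with positive correlation
h, J = maxent_pairwise(means=[0.3, 0.4], corrs={(0, 1): 0.2}, n=2)

# Sanity check: the fitted Gibbs distribution reproduces the averages
states = list(itertools.product([0, 1], repeat=2))
w = [math.exp(h[0] * s[0] + h[1] * s[1] + J[(0, 1)] * s[0] * s[1])
     for s in states]
Z = sum(w)
model_means = [sum(wk / Z * s[i] for wk, s in zip(w, states)) for i in range(2)]
```

Adding time-lagged constraints (moments involving s_i at different time steps) would turn this spatial sketch into the spatio-temporal case the abstract refers to.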
Spike trains statistics in Integrate and Fire Models: exact results
We briefly review and highlight the consequences of rigorous and exact
results obtained in \cite{cessac:10}, characterizing the statistics of spike
trains in a network of leaky Integrate-and-Fire neurons, where time is discrete
and where neurons are subject to noise, without restriction on the synaptic
weight connectivity. The main result is that spike train statistics are
characterized by a Gibbs distribution, whose potential is explicitly
computable. This establishes, on one hand, a rigorous ground for the current
investigations attempting to characterize real spike trains data with Gibbs
distributions, such as the Ising-like distribution, using the maximal entropy
principle. However, it transpires from the present analysis that the Ising
model might be a rather weak approximation. Indeed, the Gibbs potential (the
formal "Hamiltonian") is the log of the so-called "conditional intensity" (the
probability that a neuron fires given the past of the whole network). But, in
the present example, this probability has infinite memory, and the
corresponding process is non-Markovian (equivalently, the Gibbs potential has
infinite range). Moreover, causality implies that the conditional intensity
does not
depend on the state of the neurons at the \textit{same time}, ruling out the
Ising model as a candidate for an exact characterization of spike trains
statistics. However, Markovian approximations can be proposed, whose degree of
approximation can be rigorously controlled. In this setting, the Ising model
appears as the "next step" after the Bernoulli model (independent neurons)
since it introduces spatial pairwise correlations, but not time correlations.
The range of validity of this approximation is discussed, together with
possible approaches for introducing time correlations and algorithmic
extensions.

Comment: 6 pages, submitted to the conference NeuroComp2010
(http://2010.neurocomp.fr/); Bruno Cessac,
http://www-sop.inria.fr/neuromathcomp
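The central object above, the conditional intensity, can be illustrated with a GLM-style sketch: the probability that a neuron fires given the past of the whole network, with an exponentially decaying memory kernel so that the dependence on the past is in principle unbounded (non-Markovian), and no dependence on the current state (causality). The weights, decay, and bias below are hypothetical; this is not the leaky Integrate-and-Fire model analyzed in the paper.

```python
import math

def conditional_intensity(weights, past_spikes, decay=0.8, bias=-2.0):
    """P(neuron fires | past of the whole network): a GLM-style sketch
    in which every past time step contributes through an exponentially
    decaying kernel, so the memory is in principle unbounded.
    past_spikes[t][j] = 1 if neuron j spiked t+1 steps in the past;
    note there is no dependence on the *current* state (causality)."""
    drive = bias
    for t, frame in enumerate(past_spikes):          # t = 0: most recent past
        for j, spiked in enumerate(frame):
            drive += (decay ** t) * weights[j] * spiked
    return 1.0 / (1.0 + math.exp(-drive))            # sigmoid nonlinearity

# The Gibbs potential (the formal "Hamiltonian" of the abstract) is the
# log of this quantity:
p = conditional_intensity([1.5, -0.5], [[1, 0], [0, 1]])
phi = math.log(p)
```

Truncating the kernel after a fixed number of past steps yields exactly the kind of finite-range (Markovian) approximation whose accuracy the abstract says can be rigorously controlled.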
A Survey on Continuous Time Computations
We provide an overview of theories of continuous time computation. These
theories allow us to understand both the hardness of questions related to
continuous time dynamical systems and the computational power of continuous
time analog models. We survey the existing models, summarize the results, and
point to relevant references in the literature.
How Gibbs distributions may naturally arise from synaptic adaptation mechanisms. A model-based argumentation
This paper addresses two questions in the context of neuronal networks
dynamics, using methods from dynamical systems theory and statistical physics:
(i) How to characterize the statistical properties of sequences of action
potentials ("spike trains") produced by neuronal networks? And (ii) what are
the effects of synaptic plasticity on these statistics? We introduce a
framework in which spike trains are associated with a coding of membrane
potential trajectories and, in important explicit examples (the so-called gIF
models), actually constitute a symbolic coding. On this basis, we use the
thermodynamic formalism from ergodic theory to show how Gibbs distributions are
natural probability measures to describe the statistics of spike trains, given
the empirical averages of prescribed quantities. As a second result, we show
that Gibbs distributions naturally arise when considering "slow" synaptic
plasticity rules, where the characteristic time for synapse adaptation is much
longer than the characteristic time for neuron dynamics.

Comment: 39 pages, 3 figures
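The timescale separation invoked in the second result can be caricatured with a two-timescale toy model: a rate neuron that relaxes at every step while a Hebbian-like weight drifts on a timescale of order 1/eps. The learning rule, parameters, and dynamics below are invented for illustration and are not the gIF framework of the paper.

```python
import math

def simulate(steps=1000, eps=1e-3):
    """Two-timescale toy model: the neuron's rate r updates at every
    step, while the synaptic weight w drifts under a Hebbian-like rule
    with learning rate eps << 1, i.e. on a timescale ~ 1/eps slower
    than the neural dynamics."""
    w, x = 0.5, 1.0                    # initial weight, constant input
    r = 0.0
    for _ in range(steps):
        r = math.tanh(w * x)           # fast neural dynamics
        w += eps * (r * x - 0.1 * w)   # slow Hebbian drift with decay
    return w, r

w, r = simulate()
```

Because eps is small, the spike (here, rate) statistics equilibrate between successive appreciable changes of the weights, which is the separation of timescales under which the abstract argues Gibbs distributions naturally arise.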