Neural avalanches at the edge-of-chaos?
Does the brain operate at criticality, to optimize neural computation? The literature uses different fingerprints of criticality in neural networks, leaving the relationship between them mostly unclear. Here, we compare two specific signatures of criticality and ask whether they refer to observables at the same critical point, or to two differing phase transitions. Using a recurrent spiking neural network, we demonstrate that avalanche criticality does not necessarily lie at the edge of chaos.
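The "edge of chaos" invoked here is conventionally diagnosed by the largest Lyapunov exponent. As an illustrative sketch only (a one-dimensional logistic map, not the paper's recurrent spiking network), the exponent can be estimated as a trajectory average:

```python
import math

def lyapunov_logistic(r, n=100_000, x0=0.123456789):
    """Estimate the largest Lyapunov exponent of the logistic map
    x -> r * x * (1 - x) as the trajectory average of log|f'(x)|.
    lambda < 0: ordered; lambda ~ 0: edge of chaos; lambda > 0: chaotic."""
    x, acc = x0, 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n
```

For example, `lyapunov_logistic(3.2)` is negative (a stable periodic orbit), while `lyapunov_logistic(4.0)` is close to ln 2, the exact value in the fully chaotic regime.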
Dynamic Adaptive Computation: Tuning network states to task requirements
Neural circuits are able to perform computations under very diverse
conditions and requirements. The required computations impose clear constraints
on their fine-tuning: a rapid and maximally informative response to stimuli in
general requires decorrelated baseline neural activity. Such network dynamics
are known as asynchronous-irregular. In contrast, spatio-temporal integration of
information requires maintenance and transfer of stimulus information over
extended time periods. This can be realized at criticality, a phase transition
where correlations, sensitivity and integration time diverge. Being able to
flexibly switch, or even combine the above properties in a task-dependent
manner would present a clear functional advantage. We propose that cortex
operates in a "reverberating regime" because it is particularly favorable for
ready adaptation of computational properties to context and task. This
reverberating regime enables cortical networks to interpolate between the
asynchronous-irregular and the critical state by small changes in effective
synaptic strength or excitation-inhibition ratio. These changes directly adapt
computational properties, including sensitivity, amplification, integration
time and correlation length within the local network. We review recent
converging evidence that cortex in vivo operates in the reverberating regime,
and that various cortical areas have adapted their integration times to
processing requirements. In addition, we propose that neuromodulation enables a
fine-tuning of the network, so that local circuits can either decorrelate or
integrate, and quench or maintain their input depending on task. We argue that
this task-dependent tuning, which we call "dynamic adaptive computation",
presents a central organizing principle of cortical networks, and discuss
first experimental evidence.
Comment: 6 pages + references, 2 figures
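The interpolation between the asynchronous-irregular and critical states via effective synaptic strength can be sketched with a minimal linear (AR(1)) surrogate of a driven branching network; all names and parameter values below are illustrative, not taken from the paper:

```python
import math
import random

def simulate(m, h=1.0, steps=50_000, seed=1):
    """AR(1) surrogate of a driven branching network:
    a[t+1] = m * a[t] + h + noise, where m is the branching ratio
    (effective synaptic strength)."""
    rng = random.Random(seed)
    a = h / (1.0 - m)  # start near the stationary mean
    trace = []
    for _ in range(steps):
        a = m * a + h + rng.gauss(0.0, 1.0)
        trace.append(a)
    return trace

def estimated_m(trace):
    """Lag-1 regression slope of the activity recovers the branching ratio."""
    n = len(trace) - 1
    mx = sum(trace[:-1]) / n
    my = sum(trace[1:]) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(trace[:-1], trace[1:]))
    var = sum((x - mx) ** 2 for x in trace[:-1])
    return cov / var

def integration_time(m, dt=1.0):
    """Autocorrelation (integration) time tau = -dt / ln(m)."""
    return -dt / math.log(m)
```

The integration time is a few time steps when m is well below one (asynchronous-irregular) and diverges as m approaches one (critical), which is the interpolation described in the abstract.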
Transient Information Flow in a Network of Excitatory and Inhibitory Model Neurons: Role of Noise and Signal Autocorrelation
We investigate the performance of sparsely-connected networks of
integrate-and-fire neurons for ultra-short term information processing. We
exploit the fact that the population activity of networks with balanced
excitation and inhibition can switch from an oscillatory firing regime to a
state of asynchronous irregular firing or quiescence depending on the rate of
external background spikes.
We find that in terms of information buffering the network performs best for
a moderate, non-zero, amount of noise. Analogous to the phenomenon of
stochastic resonance the performance decreases for higher and lower noise
levels. The optimal amount of noise corresponds to the transition zone between
a quiescent state and a regime of stochastic dynamics. This provides a
potential explanation of the role of non-oscillatory population activity in a
simplified model of cortical micro-circuits.
Comment: 27 pages, 7 figures, to appear in J. Physiology (Paris) Vol. 9
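The non-monotonic dependence of performance on noise is the hallmark of stochastic resonance; a minimal threshold-detector sketch (all parameters are illustrative, not from the paper) shows discriminability peaking at intermediate noise:

```python
import random

def discriminability(sigma, signal=0.8, threshold=1.0, trials=20_000, seed=0):
    """Hit rate minus false-alarm rate for a threshold unit driven by a
    subthreshold signal plus Gaussian noise of standard deviation sigma."""
    rng = random.Random(seed)
    hits = sum(signal + rng.gauss(0.0, sigma) >= threshold for _ in range(trials))
    fas = sum(rng.gauss(0.0, sigma) >= threshold for _ in range(trials))
    return (hits - fas) / trials

# low noise: the signal never crosses threshold; high noise: responses
# become indiscriminate; intermediate noise is optimal.
low, mid, high = (discriminability(s) for s in (0.1, 0.55, 5.0))
```

Performance is degraded both when noise is too weak (the subthreshold signal is never detected) and too strong (hits and false alarms become equally likely), mirroring the transition-zone optimum described above.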
Neural Information Processing: between synchrony and chaos
The brain is characterized by performing many different processing tasks, ranging from elaborate processes such as pattern recognition, memory or decision-making to simpler functionalities such as linear filtering in image processing. Understanding the mechanisms by which the brain is able to produce such a diverse range of cortical operations remains a fundamental problem in neuroscience. Some recent empirical and theoretical results support the notion that the brain is naturally poised between ordered and chaotic states. As the largest number of metastable states exists at a point near the transition, the brain therefore has access to a larger repertoire of behaviours. Consequently, it is of high interest to know which type of processing can be associated with both ordered and disordered states. Here we explain which processes are related to chaotic and synchronized states, based on the study of in-silico implementations of biologically plausible neural systems. The measurements obtained reveal that synchronized cells (which can be understood as ordered states of the brain) are related to non-linear computations, while uncorrelated neural ensembles are excellent information transmission systems that are able to implement linear transformations (such as the realization of convolution products) and to parallelize neural processes. From these results we propose a plausible meaning for Hebbian and non-Hebbian learning rules, as those biophysical mechanisms by which the brain creates ordered or chaotic ensembles depending on the desired functionality. The measurements that we obtain from the hardware implementation of different neural systems support the view that the brain works with two different states, ordered and chaotic, with complementary functionalities that imply non-linear processing (synchronized states) and information transmission and convolution (chaotic states).
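The "linear transformations (as the realization of convolution products)" attributed to uncorrelated ensembles are, concretely, discrete convolutions; a minimal reference implementation of that operation:

```python
def convolve(signal, kernel):
    """Full discrete convolution of two sequences: the kind of linear
    transformation the abstract associates with uncorrelated ensembles."""
    n, k = len(signal), len(kernel)
    out = [0.0] * (n + k - 1)
    for i, s in enumerate(signal):
        for j, w in enumerate(kernel):
            out[i + j] += s * w
    return out
```

Each output sample is a fixed weighted sum of inputs, so the whole map is linear, which is what makes it implementable by superposition in a decorrelated ensemble.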
Intrinsic adaptation in autonomous recurrent neural networks
A massively recurrent neural network responds, on the one hand, to input stimuli
and is, on the other hand, autonomously active in the absence of sensory
inputs. Stimuli and information processing depend crucially on the qualia of
the autonomous-state dynamics of the ongoing neural activity. This default
neural activity may be dynamically structured in time and space, showing
regular, synchronized, bursting or chaotic activity patterns.
We study the influence of non-synaptic plasticity on the default dynamical
state of recurrent neural networks. The non-synaptic adaption considered acts
on intrinsic neural parameters, such as the threshold and the gain, and is
driven by the optimization of the information entropy. We observe, in the
presence of the intrinsic adaptation processes, three distinct and globally
attracting dynamical regimes, a regular synchronized, an overall chaotic and an
intermittent bursting regime. The intermittent bursting regime is characterized
by intervals of regular flows, which are quite insensitive to external stimuli,
interspersed with chaotic bursts which respond sensitively to input signals. We
discuss these findings in the context of self-organized information processing
and critical brain dynamics.
Comment: 24 pages, 8 figures
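Non-synaptic adaptation of intrinsic parameters can be sketched with a toy homeostatic rule; the rule and all parameters below are illustrative stand-ins for the entropy-driven adaptation studied in the paper:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def adapt_threshold(y_target=0.5, gain=2.0, eta=0.01, steps=20_000, seed=2):
    """Drive a sigmoid neuron's mean output toward y_target by adapting
    its intrinsic threshold: a simple homeostatic ingredient of
    entropy-maximizing intrinsic plasticity."""
    rng = random.Random(seed)
    theta, outputs = 3.0, []          # start with a badly mistuned threshold
    for _ in range(steps):
        x = rng.gauss(0.0, 1.0)               # stochastic input
        y = sigmoid(gain * (x - theta))       # neuron output
        theta += eta * (y - y_target)         # output too high -> raise threshold
        outputs.append(y)
    return theta, outputs
```

Starting from a threshold that nearly silences the neuron, the rule pulls the mean output to the target, keeping the unit in a responsive, high-entropy operating range.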
Power-law statistics and universal scaling in the absence of criticality
Critical states are sometimes identified experimentally through power-law
statistics or universal scaling functions. We show here that such features
naturally emerge from networks in self-sustained irregular regimes away from
criticality. In these regimes, the statistical physics of large interacting
systems predicts that the nodes have independent and identically distributed
dynamics. We thus investigated the statistics of a system in which
units are replaced by independent stochastic surrogates, and found the same
power-law statistics, indicating that these are not sufficient to establish
criticality. We rather suggest that these are universal features of large-scale
networks when considered macroscopically. These results call for caution in the
interpretation of scaling laws found in nature.
Comment: in press in Phys. Rev.