Logarithmic distributions prove that intrinsic learning is Hebbian
In this paper, we present data for the lognormal distributions of spike rates, synaptic weights, and intrinsic excitability (gain) of neurons in various brain areas, such as auditory and visual cortex, hippocampus, cerebellum, striatum, and midbrain nuclei. We find a remarkable consistency of heavy-tailed, specifically lognormal, distributions for rates, weights, and gains in all brain areas examined. Differences in connectivity (strongly recurrent cortex vs. largely feed-forward striatum and cerebellum), neurotransmitter (GABA in striatum vs. glutamate in cortex), and level of activation (low in cortex, high in Purkinje cells and midbrain nuclei) turn out to be irrelevant for this feature. A logarithmic-scale distribution of weights and gains appears to be a general, functional property in all cases analyzed. We then created a generic neural model to investigate adaptive learning rules that create and maintain lognormal distributions. We conclusively demonstrate that not only the weights but also the intrinsic gains must undergo strong Hebbian learning in order to produce and maintain the experimentally attested distributions. This provides a solution to the long-standing question about the type of plasticity exhibited by intrinsic excitability.
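A minimal sketch of the kind of mechanism this abstract points to, under the assumption that the Hebbian update is multiplicative (proportional to the current weight or gain): multiplicative updates are additive in log space, so a population of weights driven by such a rule plus homeostatic normalization drifts toward an approximately lognormal distribution. The rule and all parameters below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

n_syn = 10_000                       # number of synapses (or intrinsic gains)
w = rng.uniform(0.5, 1.5, n_syn)     # initial weights
eta, decay = 0.02, 0.02              # Hebbian rate and decay (assumed values)

for _ in range(2_000):
    # Multiplicative Hebbian step: the increment is proportional to w itself,
    # with a noisy coincidence signal standing in for pre/post correlations.
    hebb = rng.normal(1.0, 1.0, n_syn)
    w *= np.exp(eta * hebb - decay)  # additive random walk in log space
    w *= n_syn / w.sum()             # homeostatic normalization of total weight

# A multiplicative random walk is Gaussian in log space, i.e. lognormal in w:
logw = np.log(w)
skew = lambda v: ((v - v.mean()) ** 3).mean() / v.std() ** 3
print(f"skewness of w      : {skew(w):6.2f}  (heavy right tail)")
print(f"skewness of log(w) : {skew(logw):6.2f}  (approximately symmetric)")
```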
Time Resolution Dependence of Information Measures for Spiking Neurons: Atoms, Scaling, and Universality
The mutual information between stimulus and spike-train response is commonly
used to monitor neural coding efficiency, but neuronal computation broadly
conceived requires more refined and targeted information measures of
input-output joint processes. A first step towards that larger goal is to
develop information measures for individual output processes, including
information generation (entropy rate), stored information (statistical
complexity), predictable information (excess entropy), and active information
accumulation (bound information rate). We calculate these for spike trains
generated by a variety of noise-driven integrate-and-fire neurons as a function
of time resolution and for alternating renewal processes. We show that their
time-resolution dependence reveals coarse-grained structural properties of
interspike interval statistics; e.g., $\epsilon$-entropy rates that diverge less
quickly than the firing rate indicate interspike interval correlations. We also
find evidence that the excess entropy and regularized statistical complexity of
different types of integrate-and-fire neurons are universal in the
continuous-time limit in the sense that they do not depend on mechanism
details. This suggests a surprising simplicity in the spike trains generated by
these model neurons. Interestingly, neurons with gamma-distributed ISIs and
neurons whose spike trains are alternating renewal processes do not fall into
the same universality class. These results lead to two conclusions. First, the
dependence of information measures on time resolution reveals mechanistic
details about spike train generation. Second, information measures can be used
as model selection tools for analyzing spike train processes.
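As a rough illustration of the resolution dependence studied here, one can binarize a model spike train at resolution dt and watch a plug-in block-entropy-rate estimate change as dt shrinks. This sketch uses a gamma-ISI renewal neuron and a naive word-count estimator; the function names and parameters are illustrative assumptions, not the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

def gamma_spike_times(n_spikes, shape, rate):
    """Renewal spike train whose ISIs are gamma-distributed with mean 1/rate."""
    isis = rng.gamma(shape, 1.0 / (shape * rate), n_spikes)
    return np.cumsum(isis)

def entropy_rate(times, dt, word_len=8):
    """Naive plug-in estimate of the entropy rate at time resolution dt.

    The train is binarized into dt bins; words of word_len bins are counted
    and H(word)/word_len is returned in bits per bin. (This simple estimator
    is biased upward for correlated spike trains.)"""
    bins = np.zeros(int(times[-1] / dt) + 1, dtype=np.int64)
    bins[(times / dt).astype(int)] = 1
    words = np.lib.stride_tricks.sliding_window_view(bins, word_len)
    codes = words @ (1 << np.arange(word_len))      # map each word to an int
    p = np.bincount(codes).astype(float)
    p = p[p > 0] / codes.size
    return -(p * np.log2(p)).sum() / word_len

times = gamma_spike_times(50_000, shape=4.0, rate=10.0)  # 10 Hz, regular ISIs
for dt in (0.02, 0.01, 0.005, 0.0025):
    h = entropy_rate(times, dt)
    print(f"dt = {dt * 1e3:4.1f} ms : {h:.4f} bits/bin = {h / dt:7.1f} bits/s")
```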
Universal Organization of Resting Brain Activity at the Thermodynamic Critical Point
Thermodynamic criticality describes emergent phenomena in a wide variety of
complex systems. In the mammalian brain, the complex dynamics that
spontaneously emerge from neuronal interactions have been characterized as
neuronal avalanches, a form of critical branching dynamics. Here, we show that
neuronal avalanches also reflect that the brain dynamics are organized close to
a thermodynamic critical point. We recorded spontaneous cortical activity in
monkeys and humans at rest using high-density intracranial microelectrode
arrays and magnetoencephalography, respectively. By numerically changing a
control parameter equivalent to thermodynamic temperature, we observed typical
critical behavior in cortical activities near the actual physiological
condition, including the phase transition of an order parameter, as well as the
divergence of susceptibility and specific heat. Finite-size scaling of these
quantities allowed us to derive robust critical exponents, highly consistent across monkeys and humans, that uncover a distinct yet universal organization of brain dynamics.
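The finite-size-scaling protocol described here (sweep a temperature-like control parameter and watch the order parameter drop while the fluctuation-based susceptibility peaks near the critical point) can be illustrated on the textbook 2D Ising model. This is a generic sketch of the method, not the paper's analysis of cortical data; lattice size, sweep counts, and temperatures are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def sweep(s, beta):
    """One checkerboard Metropolis sweep of a 2D Ising lattice (J = 1)."""
    for parity in (0, 1):
        nb = (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
              np.roll(s, 1, 1) + np.roll(s, -1, 1))
        dE = 2 * s * nb                           # energy cost of flipping
        flip = rng.random(s.shape) < np.exp(-beta * dE)
        mask = (np.indices(s.shape).sum(0) % 2) == parity
        s[flip & mask] *= -1

L, n_eq, n_meas = 32, 1_000, 2_000
for T in (1.8, 2.0, 2.2, 2.27, 2.4, 2.8):        # Tc is about 2.269
    s = rng.choice([-1, 1], size=(L, L))
    beta = 1.0 / T
    for _ in range(n_eq):                         # equilibrate
        sweep(s, beta)
    m = np.empty(n_meas)
    for i in range(n_meas):                       # measure order parameter
        sweep(s, beta)
        m[i] = abs(s.mean())
    chi = L * L * m.var() / T                     # susceptibility from fluctuations
    print(f"T = {T:4.2f} : |m| = {m.mean():.3f}, chi = {chi:7.2f}")
```

Near T = 2.27 the magnetization falls off and chi spikes; repeating the sweep for several L and collapsing the curves is the finite-size-scaling step that yields critical exponents.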
Predictability, complexity and learning
We define the predictive information $I_{\rm pred}(T)$ as the mutual information between the past and the future of a time series. Three qualitatively different behaviors are found in the limit of large observation times $T$: $I_{\rm pred}(T)$ can remain finite, grow logarithmically, or grow as a fractional power law. If the time series allows us to learn a model with a finite number of parameters, then $I_{\rm pred}(T)$ grows logarithmically with
a coefficient that counts the dimensionality of the model space. In contrast,
power-law growth is associated, for example, with the learning of infinite-parameter (or nonparametric) models such as continuous functions with
smoothness constraints. There are connections between the predictive
information and measures of complexity that have been defined both in learning
theory and in the analysis of physical systems through statistical mechanics
and dynamical systems theory. Further, in the same way that entropy provides
the unique measure of available information consistent with some simple and
plausible conditions, we argue that the divergent part of $I_{\rm pred}(T)$ provides the unique measure for the complexity of dynamics underlying a time
series. Finally, we discuss how these ideas may be useful in different problems
in physics, statistics, and biology.
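The "remains finite" case is easy to check numerically: for a binary Markov chain, $I_{\rm pred}(T)$ saturates at the one-step mutual information (the excess entropy) as soon as the observation windows exceed one symbol. A plug-in sketch, where the chain, window sizes, and estimator are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Binary Markov chain with flip probability 0.1 (strong one-step memory).
p_flip = 0.1
n = 1_000_000
flips = rng.random(n - 1) < p_flip
x = np.concatenate(([0], np.cumsum(flips) % 2)).astype(np.int64)

def predictive_info(x, k):
    """Plug-in estimate of I(k past symbols; k future symbols) in bits.

    For a first-order Markov chain this should saturate already at k = 1
    at the excess entropy 1 - H2(p_flip), about 0.531 bits here."""
    pow2 = 1 << np.arange(k)
    win = np.lib.stride_tricks.sliding_window_view(x, 2 * k)
    past, future = win[:, :k] @ pow2, win[:, k:] @ pow2
    def H(codes):
        p = np.bincount(codes) / codes.size
        p = p[p > 0]
        return -(p * np.log2(p)).sum()
    return H(past) + H(future) - H(past * (1 << k) + future)

for k in (1, 2, 4, 8):
    print(f"k = {k}: I(past; future) ~ {predictive_info(x, k):.4f} bits")
```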
Fractals in the Nervous System: Conceptual Implications for Theoretical Neuroscience
This essay is presented with two principal objectives in mind: first, to
document the prevalence of fractals at all levels of the nervous system, giving
credence to the notion of their functional relevance; and second, to draw attention to the still unresolved issues of the detailed relationships among power-law scaling, self-similarity, and self-organized criticality. As
regards criticality, I will document that it has become a pivotal reference
point in Neurodynamics. Furthermore, I will emphasize the not yet fully
appreciated significance of allometric control processes. For dynamic fractals,
I will assemble reasons for attributing to them the capacity to adapt task
execution to contextual changes across a range of scales. The final section consists of general reflections on the implications of the reviewed data and identifies what appear to be issues of fundamental importance for future research in the rapidly evolving topic of this review.
Neutral theory and scale-free neural dynamics
Avalanches of electrochemical activity in brain networks have been empirically reported to obey scale-invariant behavior, characterized by power-law distributions up to some upper cutoff, both in vitro and in vivo. Elucidating whether such scaling laws stem from the underlying neural dynamics operating at the edge of a phase transition is a fascinating possibility, as systems poised at criticality have been argued to exhibit a number of important functional advantages. Here we employ a well-known model for neural dynamics with synaptic plasticity to elucidate an alternative scenario in which neuronal avalanches can coexist, overlapping in time, while still remaining scale-free. Remarkably, their scale invariance stems neither from underlying criticality nor from self-organization at the edge of a continuous phase transition. Instead, it emerges from the fact that perturbations to the system exhibit a neutral drift, guided by demographic fluctuations, with respect to endogenous spontaneous activity. Such neutral dynamics, similar to those in neutral theories of population genetics, imply marginal propagation of activity, characterized by power-law distributed causal avalanches. Importantly, our results underline the need to consider causal information (which neuron triggers the firing of which) to properly estimate the statistics of avalanches of neural activity. We discuss the implications of these findings both for modeling and for elucidating experimental observations, as well as their possible consequences for dynamics and information processing in actual neural networks.
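The causal bookkeeping the authors advocate can be made concrete with a critical branching process, in which each spike independently triggers a Poisson number of successors with mean m; at m = 1 the causally defined avalanche sizes follow the classic P(S >= s) ~ s^(-1/2) survival law (size-distribution exponent 3/2). A toy sketch under these assumptions; this is the standard branching benchmark, not the paper's plasticity model.

```python
import numpy as np

rng = np.random.default_rng(4)

def causal_avalanche_size(m=1.0, cap=100_000):
    """Size of one causal avalanche: the seed spike plus all of its
    descendants, where each active spike triggers Poisson(m) successors."""
    size = active = 1
    while active and size < cap:
        active = rng.poisson(m, active).sum()
        size += active
    return size

sizes = np.array([causal_avalanche_size() for _ in range(20_000)])

# At the critical point m = 1 the survival function of causally defined
# avalanche sizes decays as s^(-1/2), up to a constant prefactor.
for s in (10, 100, 1_000):
    print(f"P(S >= {s:5d}) = {(sizes >= s).mean():.4f}"
          f"   (reference s^-1/2 = {s ** -0.5:.4f})")
```

Binning the same activity in time instead of following the causal lineage merges overlapping avalanches, which is exactly the distortion the abstract warns about.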