Avalanches in self-organized critical neural networks: A minimal model for the neural SOC universality class
The brain keeps its overall dynamics in a corridor of intermediate activity,
and it has been a long-standing question which mechanism could achieve this.
Concepts from statistical physics have long suggested that this homeostasis of
brain activity could occur even without a central regulator, through
self-organization at the level of neurons and their interactions alone. Such
physical mechanisms from the class of self-organized
criticality exhibit characteristic dynamical signatures, similar to seismic
activity related to earthquakes. Measurements of resting cortical activity
showed the first dynamical signatures pointing to self-organized critical
dynamics in the brain. Indeed, more recent and more accurate measurements
allowed for a detailed comparison with scaling theory of non-equilibrium
critical phenomena, proving the existence of criticality in cortex dynamics. We
here compare this new evaluation of cortex activity data to the predictions of
the earliest physics spin model of self-organized critical neural networks. We
find that the model matches with the recent experimental data and its
interpretation in terms of dynamical signatures for criticality in the brain.
The combination of signatures for criticality, namely power-law distributions
of avalanche sizes and durations together with a specific scaling relationship
between anomalous exponents, defines a universality class characteristic of the
particular critical phenomenon observed in the neural experiments. The spin
model is a candidate for a minimal model of a self-organized critical adaptive
network for the universality class of neural criticality. As a prototype model,
it provides the background for models that include more biological details, yet
share the same universality class characteristic of the homeostasis of activity
in the brain.
Comment: 17 pages, 5 figures
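The avalanche statistics referred to in this abstract can be illustrated with a toy model: a critical branching process (branching ratio sigma = 1) produces avalanches whose size distribution follows the power law P(s) ~ s^(-3/2) characteristic of this universality class. A minimal sketch in Python; the offspring distribution and all parameter values are illustrative choices, not the spin model from the paper:

```python
import random

def avalanche_size(sigma=1.0, cap=10_000, rng=random):
    """Total number of activations in one avalanche of a branching process.

    Each active unit independently activates each of its 2 downstream
    units with probability sigma / 2, so the mean branching ratio is sigma.
    """
    active, size = 1, 0
    while active and size < cap:
        size += active
        active = sum(1 for _ in range(2 * active) if rng.random() < sigma / 2)
    return size

random.seed(0)
sizes = [avalanche_size() for _ in range(5_000)]

# At criticality (sigma = 1) the tail is heavy, P(s) ~ s^(-3/2), so a
# noticeable fraction of avalanches reaches size 100 and beyond; for
# sigma < 1 (subcritical) avalanches die out quickly instead.
tail_fraction = sum(s >= 100 for s in sizes) / len(sizes)
```

A subcritical run (`sigma=0.5`) yields almost exclusively small avalanches, which is one way the distance from criticality shows up in the data.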
A Fokker-Planck formalism for diffusion with finite increments and absorbing boundaries
Gaussian white noise is frequently used to model fluctuations in physical
systems. In Fokker-Planck theory, this leads to a vanishing probability density
near the absorbing boundary of threshold models. Here we derive the boundary
condition for the stationary density of a first-order stochastic differential
equation for additive finite-grained Poisson noise and show that the response
properties of threshold units are qualitatively altered. Applied to the
integrate-and-fire neuron model, the response turns out to be instantaneous
rather than exhibiting low-pass characteristics, highly non-linear, and
asymmetric for excitation and inhibition. The novel mechanism persists at the
network level and is a generic property of pulse-coupled systems of threshold
units.
Comment: Consists of two parts: main article (3 figures) plus supplementary
text (3 extra figures)
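The qualitative effect described in this abstract can be checked with a direct simulation (a sketch with illustrative parameters, not the paper's derivation): a leaky integrator driven by finite Poisson jumps spends a clearly nonzero fraction of time in the bin just below the absorbing threshold, because a single increment can land there from well inside the domain, whereas Fokker-Planck theory for Gaussian white noise predicts a density that vanishes at the boundary.

```python
import random

def occupancy_below_threshold(jump=0.1, rate=400.0, theta=1.0, tau=0.02,
                              dt=1e-4, steps=500_000, seed=1):
    """Fraction of time a leaky integrate-and-fire unit driven by Poisson
    input of finite amplitude `jump` spends within one jump of threshold.
    The membrane is reset to 0 on crossing (absorbing boundary with reset).
    """
    rng = random.Random(seed)
    p_spike = rate * dt          # probability of an input spike per step
    v, near = 0.0, 0
    for _ in range(steps):
        v -= v / tau * dt        # leak
        if rng.random() < p_spike:
            v += jump            # finite increment, not Gaussian noise
        if v >= theta:
            v = 0.0              # threshold crossing: fire and reset
        elif v >= theta - jump:
            near += 1            # time spent in the bin below threshold
    return near / steps

frac = occupancy_below_threshold()
# With finite increments the stationary density stays finite at the
# boundary, so `frac` is clearly nonzero; a diffusion (Gaussian-noise)
# model with matched mean and variance would suppress it near threshold.
```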
The capabilities and limitations of conductance-based compartmental neuron models with reduced branched or unbranched morphologies and active dendrites
Conductance-based neuron models are frequently employed to study the dynamics of biological neural networks. For speed and ease of use, these models are often reduced in morphological complexity. Simplified dendritic branching structures may process inputs differently than full branching structures, however, and could thereby fail to reproduce important aspects of biological neural processing. It is not yet well understood which processing capabilities require detailed branching structures. Therefore, we analyzed the processing capabilities of full or partially branched reduced models. These models were created by collapsing the dendritic tree of a full morphological model of a globus pallidus (GP) neuron while preserving its total surface area and electrotonic length, as well as its passive and active parameters. Dendritic trees were either collapsed into single cables (unbranched models) or the full complement of branch points was preserved (branched models). Both reduction strategies allowed us to compare dynamics between all models using the same channel density settings. Full model responses to somatic inputs were generally preserved by both types of reduced model while dendritic input responses could be more closely preserved by branched than unbranched reduced models. However, features strongly influenced by local dendritic input resistance, such as active dendritic sodium spike generation and propagation, could not be accurately reproduced by any reduced model. Based on our analyses, we suggest that there are intrinsic differences in processing capabilities between unbranched and branched models. We also indicate suitable applications for different levels of reduction, including fast searches of full model parameter space.
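The reduction constraints described in this abstract (preserving total membrane area and electrotonic length when collapsing dendrites into a cable) fix the cable's dimensions uniquely. A sketch of that arithmetic, with placeholder values for the specific membrane resistance Rm and axial resistivity Ri; the paper's actual parameters and collapsing procedure are not reproduced here:

```python
import math

def equivalent_cylinder(area, L, Rm=20_000.0, Ri=200.0):
    """Diameter and length (cm) of a single cable with total membrane
    area `area` (cm^2) and electrotonic length L = l / lambda.

    Rm: specific membrane resistance (ohm*cm^2); Ri: axial resistivity
    (ohm*cm). For a cylinder of diameter d, lambda = sqrt(d*Rm / (4*Ri)).
    Constraints: pi * d * l = area  and  l = L * lambda(d).
    """
    k = math.sqrt(Rm / (4.0 * Ri))              # lambda = k * sqrt(d)
    d = (area / (math.pi * L * k)) ** (2.0 / 3.0)
    l = area / (math.pi * d)
    return d, l

# Example: collapse 1.0e-4 cm^2 of dendritic membrane into a cable of
# electrotonic length 0.8; both constraints hold exactly by construction.
d, l = equivalent_cylinder(1.0e-4, 0.8)
```

Because lambda grows with the square root of the diameter, the two constraints together determine a unique (d, l) pair, which is what makes this style of reduction well defined.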
Active dendrites enhance neuronal dynamic range
Since the first experimental evidence of active conductances in dendrites,
most neurons have been shown to exhibit dendritic excitability through the
expression of a variety of voltage-gated ion channels. However, despite the
experimental and theoretical efforts of recent decades, the role of this
excitability in dendritic computation has remained
elusive. Here we show that, owing to very general properties of excitable
media, the average output of a model of active dendritic trees is a highly
non-linear function of their afferent rate, attaining extremely large dynamic
ranges (above 50 dB). Moreover, the model yields double-sigmoid response
functions as experimentally observed in retinal ganglion cells. We claim that
enhancement of dynamic range is the primary functional role of active dendritic
conductances. We predict that neurons with larger dendritic trees should have
larger dynamic range and that blocking of active conductances should lead to a
decrease of dynamic range.Comment: 20 pages, 6 figure
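The dynamic-range figure quoted in this abstract has a standard operational definition (used, e.g., in the Kinouchi-Copelli framework): Delta = 10 * log10(F_0.9 / F_0.1), where F_x is the stimulus at which the response crosses fraction x of its full range. A minimal sketch of that computation; the sample response curve below is an illustrative saturating function, not the paper's model:

```python
import math

def dynamic_range_db(stimuli, responses, lo=0.1, hi=0.9):
    """Dynamic range Delta = 10*log10(F_hi / F_lo), where F_x is the
    stimulus at which the (monotonic) response crosses fraction x of its
    full range, found by linear interpolation between sampled points."""
    r_min, r_max = min(responses), max(responses)

    def crossing(frac):
        target = r_min + frac * (r_max - r_min)
        for (f0, r0), (f1, r1) in zip(zip(stimuli, responses),
                                      zip(stimuli[1:], responses[1:])):
            if r0 <= target <= r1:
                return f0 + (f1 - f0) * (target - r0) / (r1 - r0)
        raise ValueError("response does not cross the target level")

    return 10.0 * math.log10(crossing(hi) / crossing(lo))

# Example: a saturating response F -> F / (F + 1), sampled on a log grid.
stimuli = [10 ** (-3 + 6 * i / 999) for i in range(1000)]
responses = [f / (f + 1.0) for f in stimuli]
delta = dynamic_range_db(stimuli, responses)   # about 19 dB for this curve
```

A steeper sigmoid compresses F_0.1 and F_0.9 toward each other and so yields a smaller Delta, which is why the enhancement reported in the abstract (above 50 dB) is notable.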
Time-Warp–Invariant Neuronal Processing
A biophysical mechanism acting in auditory neurons allows the brain to process the high variability of speaking rates in natural speech in a time-warp-invariant manner.