The complexity of dynamics in small neural circuits
Mean-field theory is a powerful tool for studying large neural networks.
However, when the system is composed of a few neurons, macroscopic differences
between the mean-field approximation and the real behavior of the network can
arise. Here we introduce a study of the dynamics of a small firing-rate network
with excitatory and inhibitory populations, in terms of local and global
bifurcations of the neural activity. Our approach is analytically tractable in
many respects, and sheds new light on the finite-size effects of the system. In
particular, we focus on the formation of multiple branching solutions of the
neural equations through spontaneous symmetry-breaking, since this phenomenon
considerably increases the complexity of the network's dynamical behavior.
For these reasons, branching points may reveal important mechanisms through
which neurons interact and process information, which are not accounted for by
the mean-field approximation.
Comment: 34 pages, 11 figures. Supplementary materials added, colors of figures 8 and 9 fixed, results unchanged.
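A minimal sketch of the kind of small excitatory/inhibitory firing-rate network discussed above may help fix ideas; the gain function, weights, and time constants below are illustrative assumptions, not the paper's equations.

```python
# Illustrative sketch only: a Wilson-Cowan-style excitatory/inhibitory
# firing-rate pair integrated with forward Euler. Parameter values are
# assumptions for demonstration, not those of the paper.
import numpy as np

def phi(x):
    """Sigmoidal gain function."""
    return 1.0 / (1.0 + np.exp(-x))

def simulate(w_ee=12.0, w_ei=10.0, w_ie=10.0, w_ii=2.0,
             h_e=-2.0, h_i=-4.0, tau=10.0, dt=0.1, T=500.0):
    """Integrate the E/I rate equations and return the trajectory."""
    steps = int(T / dt)
    E, I = 0.1, 0.1
    trace = np.empty((steps, 2))
    for t in range(steps):
        dE = (-E + phi(w_ee * E - w_ei * I + h_e)) / tau
        dI = (-I + phi(w_ie * E - w_ii * I + h_i)) / tau
        E, I = E + dt * dE, I + dt * dI
        trace[t] = (E, I)
    return trace

if __name__ == "__main__":
    # sweeping w_ee (or w_ei) and recording the final state is the usual
    # way to expose local bifurcations of the fixed points
    for w_ee in (8.0, 12.0, 16.0):
        E_final, I_final = simulate(w_ee=w_ee)[-1]
        print(f"w_ee = {w_ee:4.1f}  ->  E* = {E_final:.3f}, I* = {I_final:.3f}")
```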
Asymptotic behavior of memristive circuits
Interest in memristors has risen due to their possible application both as memory units and as computational devices in combination with CMOS. This is due in part to their nonlinear dynamics and their strong dependence on the circuit topology. We provide evidence that purely memristive circuits can also be employed for computational purposes. In the present paper we show that a
polynomial Lyapunov function in the memory parameters exists for the case of DC-controlled memristors. Such a Lyapunov function can be asymptotically
approximated with binary variables, and mapped to quadratic combinatorial
optimization problems. This also shows a direct parallel between memristive
circuits and the Hopfield-Little model. In the case of Erdos-Renyi random
circuits, we show numerically that the distribution of the matrix elements of
the projectors can be roughly approximated with a Gaussian distribution, and
that it scales with the inverse square root of the number of elements. This
provides an approximate but direct connection with the physics of disordered systems and, in particular, of mean-field spin glasses. Using this, together with the fact that the interaction is controlled by a projector operator on the loop space of the circuit, we estimate the number of stationary points of the approximate Lyapunov function and provide a scaling formula as an upper bound in terms of the circuit topology only.
Comment: 20 pages, 8 figures; proofs corrected, figures changed; results substantially unchanged; to appear in Entropy.
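As a rough illustration of the objects involved (an assumed construction for demonstration, not the paper's code), the projector onto the loop space of an Erdos-Renyi circuit can be built from the graph's oriented incidence matrix, and the spread of its matrix elements inspected as the circuit grows:

```python
# Rough numerical sketch (construction and parameters are assumptions,
# not the paper's code): build the projector onto the cycle (loop) space
# of an Erdos-Renyi graph from its oriented incidence matrix and inspect
# how the spread of its matrix elements shrinks as the circuit grows.
import numpy as np
import networkx as nx

def cycle_space_projector(G):
    # The cycle space is the null space of the oriented incidence matrix B
    # (nodes x edges), so the orthogonal projector onto it is
    # I - B^T (B B^T)^+ B.
    B = nx.incidence_matrix(G, oriented=True).toarray()
    return np.eye(B.shape[1]) - B.T @ np.linalg.pinv(B @ B.T) @ B

for n in (50, 100, 200):
    G = nx.erdos_renyi_graph(n, p=0.1, seed=0)
    Omega = cycle_space_projector(G)
    off_diag = Omega[~np.eye(Omega.shape[0], dtype=bool)]
    m = Omega.shape[0]  # number of edges
    print(f"edges = {m:4d}   std(off-diag) = {off_diag.std():.4f}   "
          f"std * sqrt(edges) = {off_diag.std() * np.sqrt(m):.3f}")
```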
Neuronal avalanches of a self-organized neural network with active-neuron-dominant structure
A neuronal avalanche is a spontaneous burst of neuronal activity whose population event sizes obey a power-law distribution with an exponent of -3/2. It has been
observed in the superficial layers of cortex both \emph{in vivo} and \emph{in
vitro}. In this paper we analyze the information transmission of a novel
self-organized neural network with active-neuron-dominant structure. Neuronal
avalanches can be observed in this network with appropriate input intensity. We
find that the process of network learning via spike-timing-dependent plasticity dramatically increases the complexity of the network structure, which finally self-organizes into active-neuron-dominant connectivity. Both the entropy of
activity patterns and the complexity of their resulting post-synaptic inputs
are maximized when the network dynamics are propagated as neuronal avalanches.
This emergent topology is beneficial for information transmission with high
efficiency and also could be responsible for the large information capacity of
this network compared with alternative archetypal networks with different
neural connectivity.
Comment: Non-final version submitted to Chaos.
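For reference, a minimal sketch of a standard pair-based spike-timing-dependent plasticity window; the exact rule and constants used in the paper may differ.

```python
# Minimal sketch of a standard pair-based STDP window (an assumption about
# the learning rule; the paper's exact rule and constants may differ).
import numpy as np

def stdp_dw(dt_post_minus_pre, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair; times in milliseconds."""
    dt = np.asarray(dt_post_minus_pre, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),    # pre before post: potentiation
                    -a_minus * np.exp(dt / tau_minus))  # post before pre: depression

if __name__ == "__main__":
    for dt in (-40, -10, 0, 10, 40):
        print(f"dt = {dt:+4d} ms  ->  dw = {float(stdp_dw(dt)):+.5f}")
```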
Seven properties of self-organization in the human brain
The principle of self-organization has acquired a fundamental significance in the newly emerging field of computational philosophy. Self-organizing systems have been described in various domains in science and philosophy, including physics, neuroscience, biology and medicine, ecology, and sociology. While system architectures and their general purpose may depend on domain-specific concepts and definitions, there are (at least) seven key properties of self-organization clearly identified in brain systems: 1) modular connectivity, 2) unsupervised learning, 3) adaptive ability, 4) functional resiliency, 5) functional plasticity, 6) from-local-to-global functional organization, and 7) dynamic system growth. These are defined here in the light of insights from neurobiology, cognitive neuroscience, Adaptive Resonance Theory (ART), and physics to show that self-organization achieves stability and functional plasticity while minimizing structural system complexity. A specific example informed by empirical research is discussed to illustrate how modularity, adaptive learning, and dynamic network growth enable stable yet plastic somatosensory representation for human grip force control. Implications for the design of “strong” artificial intelligence in robotics are brought forward.
Emergent complex neural dynamics
A large repertoire of spatiotemporal activity patterns in the brain is the
basis for adaptive behaviour. Understanding the mechanism by which the brain's
hundred billion neurons and hundred trillion synapses manage to produce such a
range of cortical configurations in a flexible manner remains a fundamental
problem in neuroscience. One plausible solution is the involvement of universal
mechanisms of emergent complex phenomena evident in dynamical systems poised
near a critical point of a second-order phase transition. We review recent
theoretical and empirical results supporting the notion that the brain is
naturally poised near criticality, as well as its implications for a better understanding of the brain.
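A toy illustration of this notion of criticality, not drawn from the reviewed work: a branching process tuned to its critical point already reproduces the characteristic power-law avalanche statistics.

```python
# Illustrative sketch, not taken from the reviewed work: a Galton-Watson
# branching process with mean offspring sigma = 1 (the critical point)
# produces avalanche sizes that approach the s^(-3/2) power law commonly
# associated with criticality.
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(sigma=1.0, max_size=10_000):
    """Total number of events triggered by a single seed event."""
    active, size = 1, 1
    while active and size < max_size:
        # each active unit independently triggers Poisson(sigma) descendants
        active = int(rng.poisson(sigma, size=active).sum())
        size += active
    return size

sizes = np.array([avalanche_size() for _ in range(20_000)])

# crude exponent estimate from a log-log histogram (theory: -3/2)
bins = np.logspace(0, 3, 20)
hist, edges = np.histogram(sizes, bins=bins, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])
ok = hist > 0
slope = np.polyfit(np.log(centers[ok]), np.log(hist[ok]), 1)[0]
print(f"estimated avalanche-size exponent ~ {slope:.2f}")
```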
Synchronous Behavior of Two Coupled Electronic Neurons
We report on experimental studies of synchronization phenomena in a pair of
analog electronic neurons (ENs). The ENs were designed to reproduce the
observed membrane voltage oscillations of isolated biological neurons from the
stomatogastric ganglion of the California spiny lobster Panulirus interruptus.
The ENs are simple analog circuits which integrate four dimensional
differential equations representing fast and slow subcellular mechanisms that
produce the characteristic regular/chaotic spiking-bursting behavior of these
cells. In this paper we study their dynamical behavior as we couple them in the
same configurations as we have done for their counterpart biological neurons.
The interconnections we use for these neural oscillators are both direct
electrical connections and excitatory and inhibitory chemical connections, each
realized by analog circuitry and suggested by biological examples. We provide
here quantitative evidence that the ENs and the biological neurons behave
similarly when coupled in the same manner. They each display well defined
bifurcations in their mutual synchronization and regularization. We report
briefly on an experiment on coupled biological neurons and four dimensional ENs
which provides further ground for testing the validity of our numerical and
electronic models of individual neural behavior. Our experiments as a whole
present interesting new examples of regularization and synchronization in
coupled nonlinear oscillators.
Comment: 26 pages, 10 figures.
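As a software counterpart to such experiments (a sketch under stated assumptions, since the electronic neurons implement a four-dimensional model), a pair of three-dimensional Hindmarsh-Rose neurons with diffusive electrical coupling shows the same qualitative transition toward synchrony as the coupling strength grows:

```python
# Sketch under assumptions: the paper's electronic neurons implement a
# four-dimensional model; here a standard three-dimensional Hindmarsh-Rose
# pair with diffusive (electrical) coupling stands in, to illustrate how
# increasing the coupling strength pulls the two membrane voltages toward
# synchrony. Coupling values are illustrative only.
import numpy as np

def hindmarsh_rose(state, I_ext=3.2):
    """Right-hand side of the Hindmarsh-Rose equations (chaotic bursting)."""
    x, y, z = state
    dx = y - x**3 + 3.0 * x**2 - z + I_ext
    dy = 1.0 - 5.0 * x**2 - y
    dz = 0.006 * (4.0 * (x + 1.6) - z)
    return np.array([dx, dy, dz])

def run_pair(g, dt=0.01, T=1000.0):
    """Integrate two coupled neurons; return the late-time voltage mismatch."""
    s1 = np.array([-1.0, 0.0, 2.0])
    s2 = np.array([-1.2, 0.1, 2.1])
    diffs = []
    for _ in range(int(T / dt)):
        c = g * (s2[0] - s1[0])  # electrical (diffusive) coupling current
        s1 = s1 + dt * (hindmarsh_rose(s1) + np.array([c, 0.0, 0.0]))
        s2 = s2 + dt * (hindmarsh_rose(s2) + np.array([-c, 0.0, 0.0]))
        diffs.append(abs(s1[0] - s2[0]))
    return float(np.mean(diffs[len(diffs) // 2:]))

for g in (0.0, 0.2, 0.5, 1.0):
    print(f"coupling g = {g:.1f}  ->  mean |x1 - x2| = {run_pair(g):.3f}")
```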