Revealing networks from dynamics: an introduction
What can we learn from the collective dynamics of a complex network about its
interaction topology? Taking the perspective of nonlinear dynamics, we
briefly review recent progress on inferring structural connectivity (direct
interactions) from observations of the dynamics of the units. Potential
applications range from interaction networks in physics to chemical and
metabolic reactions, protein and gene regulatory networks, and neural
circuits in biology, as well as electric power grids and wireless sensor
networks in engineering.
Moreover, we briefly mention some standard ways of inferring effective or
functional connectivity.
Comment: Topical review, 48 pages, 7 figures
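As a concrete toy illustration of the inference problem (a sketch under simplifying assumptions, not an example taken from the review): for a linear networked system x' = Ax, the coupling matrix can be recovered by regressing finite-difference derivative estimates on the observed states. The network size, sparsity, and sampling step below are all illustrative.

```python
import numpy as np

# Minimal sketch: reconstruct the coupling matrix A of a linear networked
# system x' = A x from a sampled trajectory. All parameters (N, T, dt,
# sparsity) are illustrative assumptions.
rng = np.random.default_rng(0)
N, T, dt = 10, 500, 0.01

# Ground-truth sparse coupling matrix with a stabilizing diagonal.
A = rng.normal(0.0, 1.0, (N, N)) * (rng.random((N, N)) < 0.2)
np.fill_diagonal(A, -2.0)

# Integrate with simple Euler steps from a random initial state.
X = np.empty((T, N))
X[0] = rng.normal(0.0, 1.0, N)
for t in range(T - 1):
    X[t + 1] = X[t] + dt * (A @ X[t])

# Finite-difference derivative estimates, then least squares:
# dX/dt ≈ X B  with  B = A^T, solved column by column.
dXdt = (X[1:] - X[:-1]) / dt
A_hat = np.linalg.lstsq(X[:-1], dXdt, rcond=None)[0].T

print("max reconstruction error:", np.max(np.abs(A_hat - A)))
```

Real units are of course nonlinear; the review's point is precisely that such naive linear regression must be replaced by methods that exploit, rather than ignore, the nonlinear collective dynamics.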
A mathematical analysis of the effects of Hebbian learning rules on the dynamics and structure of discrete-time random recurrent neural networks
We present a mathematical analysis of the effects of Hebbian learning in
random recurrent neural networks, with a generic Hebbian learning rule
including passive forgetting and different time scales for neuronal activity
and learning dynamics. Previous numerical works have reported that Hebbian
learning drives the system from chaos to a steady state through a sequence of
bifurcations. Here, we interpret these results mathematically and show that
these effects, involving a complex coupling between neuronal dynamics and
synaptic graph structure, can be analyzed using Jacobian matrices, which
provide both a structural and a dynamical point of view on the evolution of
the neural network. Furthermore, we show that the sensitivity to a learned
pattern is maximal when the largest Lyapunov exponent is close to 0. We
discuss how neural networks may take advantage of this regime of high
functional interest.
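A minimal numerical sketch of this scenario (all parameter values are illustrative assumptions, not taken from the paper): a discrete-time random recurrent network is iterated together with a Hebbian rule that includes passive forgetting, while the largest Lyapunov exponent is estimated from the Jacobians along the trajectory.

```python
import numpy as np

# Sketch: x_{t+1} = tanh(g * W x_t) with a slow Hebbian update including
# passive forgetting; the largest Lyapunov exponent is tracked by
# propagating a tangent vector through the Jacobians
# J_t = diag(1 - tanh(u_t)^2) * g * W.
rng = np.random.default_rng(1)
N = 200
g = 3.0        # gain high enough that the untrained network is chaotic
alpha = 0.01   # Hebbian learning rate (illustrative)
lam = 0.01     # passive-forgetting rate (illustrative)

W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
x = rng.uniform(-1.0, 1.0, N)
v = rng.normal(0.0, 1.0, N)
v /= np.linalg.norm(v)

logs = []
for t in range(2000):
    u = g * (W @ x)
    x_new = np.tanh(u)
    # Tangent dynamics under the Jacobian of the map at time t.
    v = (1.0 - np.tanh(u) ** 2) * (g * (W @ v))
    norm = np.linalg.norm(v)
    logs.append(np.log(norm))
    v /= norm
    # Hebbian step with passive forgetting, slow relative to the activity.
    W = (1.0 - lam) * W + (alpha / N) * np.outer(x_new, x)
    x = x_new

# Learning typically drags the exponent downward: from chaos (positive)
# toward a steady state (negative), passing through 0 on the way.
print("largest Lyapunov exponent ≈", np.mean(logs[-500:]))
```

Monitoring when the running estimate crosses 0 is then a natural probe of the regime of maximal sensitivity described above.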
Adaptive self-organization in a realistic neural network model
Information processing in complex systems is often found to be maximally
efficient close to critical states associated with phase transitions. It is
therefore conceivable that also neural information processing operates close to
criticality. This is further supported by the observation of power-law
distributions, which are a hallmark of phase transitions. An important open
question is how neural networks could remain close to a critical point while
undergoing a continual change in the course of development, adaptation,
learning, and more. An influential contribution was made by Bornholdt and
Rohlf, introducing a generic mechanism of robust self-organized criticality in
adaptive networks. Here, we address the question of whether this mechanism is
relevant for real neural networks. We show in a realistic model that
spike-time-dependent synaptic plasticity can self-organize neural networks
robustly toward criticality. Our model reproduces several empirical
observations and makes testable predictions about the distribution of
synaptic strengths, relating them to the critical state of the network.
These results
suggest that the interplay between dynamics and topology may be essential for
neural information processing.
Comment: 6 pages, 4 figures
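The toy sketch below is a drastically simplified stand-in for the model, replacing spike-time-dependent plasticity with a local depress-on-use, slowly recovering synapse rule (an assumed mechanism in the same spirit of activity-dependent self-organization): a branching network of probabilistic synapses tunes itself toward the point where each spike triggers roughly one further spike.

```python
import numpy as np

# Toy self-organization toward criticality. Every quantity here (network
# size, out-degree, depression and recovery rates) is an illustrative
# assumption; the plasticity rule is NOT the paper's STDP.
rng = np.random.default_rng(2)
N, K = 500, 10                    # neurons, outgoing synapses per neuron
targets = rng.integers(0, N, (N, K))
J = np.full((N, K), 0.2)          # strengths = transmission probabilities
J_max, u, r = 0.3, 0.005, 1e-4    # ceiling, depression, recovery rate

sizes = []
for trial in range(10000):
    # Slow recovery of all synapses toward the ceiling between avalanches.
    J += r * (J_max - J)
    # Seed an avalanche with one externally driven neuron.
    seed = int(rng.integers(N))
    fired = np.zeros(N, dtype=bool)
    fired[seed] = True
    active, size = [seed], 1
    while active:
        nxt = []
        for i in active:
            hits = rng.random(K) < J[i]   # probabilistic transmission
            J[i] -= u * J[i]              # depress synapses on use
            for j in targets[i][hits]:
                if not fired[j]:
                    fired[j] = True
                    nxt.append(j)
        size += len(nxt)
        active = nxt
    sizes.append(size)

# Crude probes of criticality: mean avalanche size and the branching
# ratio implied by <size> = 1/(1 - sigma) for a subcritical branching
# process. In this toy the ratio settles near (typically just below) 1.
mean_size = np.mean(sizes[-5000:])
print("mean avalanche size:", mean_size)
print("implied branching ratio ≈", 1.0 - 1.0 / mean_size)
```

The depression-recovery balance plays the role that STDP plays in the paper: too much activity weakens synapses, quiescence lets them recover, and the network hovers near the transition.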