A mathematical analysis of the effects of Hebbian learning rules on the dynamics and structure of discrete-time random recurrent neural networks
We present a mathematical analysis of the effects of Hebbian learning in
random recurrent neural networks, with a generic Hebbian learning rule
including passive forgetting and different time scales for neuronal activity
and learning dynamics. Previous numerical studies have reported that Hebbian
learning drives the system from chaos to a steady state through a sequence of
bifurcations. Here, we interpret these results mathematically and show that
these effects, involving a complex coupling between neuronal dynamics and
synaptic graph structure, can be analyzed using Jacobian matrices, which
introduce both a structural and a dynamical point of view on the neural network
evolution. Furthermore, we show that the sensitivity to a learned pattern is
maximal when the largest Lyapunov exponent is close to 0. We discuss how neural
networks may take advantage of this regime of high functional interest.
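As a minimal illustration of this setup (the tanh rate dynamics, the constants, and the exact Hebbian form below are assumptions for illustration, not the paper's model), the following Python sketch couples fast neural dynamics to a slow Hebbian update with passive forgetting and estimates the largest Lyapunov exponent from products of Jacobian matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                        # network size
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # random synaptic matrix
x = rng.uniform(-1.0, 1.0, N)                  # neural state
eps, lam = 1e-3, 1e-2                          # Hebbian rate, passive forgetting (assumed)

v = rng.normal(size=N)                         # tangent vector for Lyapunov estimate
v /= np.linalg.norm(v)
lyap = 0.0
T_slow, T_fast = 200, 50                       # slow learning steps, fast neural steps each
for _ in range(T_slow):
    for _ in range(T_fast):
        # Jacobian of x -> tanh(W x) at the current state: diag(1 - tanh(Wx)^2) W
        J = np.diag(1.0 - np.tanh(W @ x) ** 2) @ W
        x = np.tanh(W @ x)                     # fast neural dynamics
        v = J @ v                              # propagate the tangent vector
        nv = np.linalg.norm(v)
        lyap += np.log(nv)
        v /= nv
    # generic Hebbian rule with passive forgetting, acting on the slow time scale
    W = (1.0 - lam) * W + eps * np.outer(x, x)

print("largest Lyapunov exponent estimate:", lyap / (T_slow * T_fast))
```

In this caricature, the forgetting rate `lam` plays the role of passive forgetting and the ratio `T_fast`/1 sets the separation of time scales; tracking the Lyapunov estimate across learning steps reproduces the qualitative chaos-to-fixed-point route the abstract describes.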
Spiking Neural Networks for Inference and Learning: A Memristor-based Design Perspective
On metrics of density and power efficiency, neuromorphic technologies have
the potential to surpass mainstream computing technologies in tasks where
real-time functionality, adaptability, and autonomy are essential. While
algorithmic advances in neuromorphic computing are proceeding successfully, the
potential of memristors to improve neuromorphic computing has not yet borne
fruit, primarily because they are often used as a drop-in replacement for
conventional memory. However, interdisciplinary approaches anchored in machine
learning theory suggest that multifactor plasticity rules matching neural and
synaptic dynamics to the device capabilities can take better advantage of
memristor dynamics and their stochasticity. Furthermore, such plasticity rules
generally show much higher performance than classical Spike-Timing-Dependent
Plasticity (STDP) rules. This chapter reviews recent developments in learning
with spiking neural network models and their possible implementation in
memristor-based hardware.
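To illustrate the contrast drawn here, the sketch below compares a classical pair-based STDP update with a three-factor ("multifactor") variant in which the same Hebbian term feeds an eligibility trace gated by a modulatory signal (reward, neuromodulator, or error). All names and constants are illustrative, not taken from the chapter.

```python
import numpy as np

tau_pre, tau_post = 20.0, 20.0   # spike-trace time constants in ms (illustrative)
A_plus, A_minus = 0.01, 0.012    # potentiation / depression amplitudes

def stdp_step(w, pre_spike, post_spike, x_pre, x_post, dt=1.0):
    """Classical pair-based STDP with exponentially decaying spike traces."""
    x_pre += dt * (-x_pre / tau_pre) + pre_spike
    x_post += dt * (-x_post / tau_post) + post_spike
    dw = A_plus * x_pre * post_spike - A_minus * x_post * pre_spike
    return w + dw, x_pre, x_post

def three_factor_step(w, elig, pre_spike, post_spike, x_pre, x_post,
                      modulator, tau_e=1000.0, dt=1.0):
    """Multifactor rule: the STDP-like term feeds a slow eligibility trace,
    and weights only change when a third factor (modulator) arrives."""
    x_pre += dt * (-x_pre / tau_pre) + pre_spike
    x_post += dt * (-x_post / tau_post) + post_spike
    hebb = A_plus * x_pre * post_spike - A_minus * x_post * pre_spike
    elig += dt * (-elig / tau_e) + hebb
    return w + modulator * elig, elig, x_pre, x_post
```

The slow eligibility trace is the feature that maps naturally onto volatile memristor dynamics: the device's own decay can implement `tau_e`, and its stochasticity perturbs `hebb` rather than the stored weight.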
How Gibbs distributions may naturally arise from synaptic adaptation mechanisms. A model-based argumentation
This paper addresses two questions in the context of neuronal network
dynamics, using methods from dynamical systems theory and statistical physics:
(i) how can one characterize the statistical properties of sequences of action
potentials ("spike trains") produced by neuronal networks? and (ii) what are
the effects of synaptic plasticity on these statistics? We introduce a
framework in which spike trains are associated with a coding of membrane
potential trajectories and, in important explicit examples (the so-called gIF
models), actually constitute a symbolic coding. On this basis, we use the
thermodynamic formalism from ergodic theory to show how Gibbs distributions are
natural probability measures to describe the statistics of spike trains, given
the empirical averages of prescribed quantities. As a second result, we show
that Gibbs distributions naturally arise when considering "slow" synaptic
plasticity rules, where the characteristic time for synapse adaptation is much
longer than the characteristic time for neuron dynamics.
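For orientation, such Gibbs distributions take the familiar maximum-entropy form; the notation below is assumed for illustration rather than taken from the paper. Among all stationary measures $\mu$ reproducing the empirical averages $C_k$ of prescribed observables $\phi_k$ on spike blocks $\omega$, the entropy-maximizing one is

```latex
\mu[\omega] \;\propto\; \exp\!\Big(\sum_{k}\beta_k\,\phi_k(\omega)\Big),
\qquad \text{subject to } \mathbb{E}_\mu[\phi_k] = C_k ,
```

where the $\beta_k$ are Lagrange multipliers conjugate to the constrained averages.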
On Dynamics of Integrate-and-Fire Neural Networks with Conductance Based Synapses
We present a mathematical analysis of networks with Integrate-and-Fire
neurons and adaptive conductances. Taking into account the realistic fact that
the spike time is only known within some \textit{finite} precision, we propose
a model where spikes are effective at times that are multiples of a
characteristic time scale $\delta$, where $\delta$ can be \textit{arbitrarily}
small (in particular, well beyond the numerical precision). We give a complete
mathematical characterization of the model dynamics and obtain the following
results. The
asymptotic dynamics is composed of finitely many stable periodic orbits, whose
number and period can be arbitrarily large and can diverge in a region of the
synaptic weights space, traditionally called the "edge of chaos", a notion
mathematically well defined in the present paper. Furthermore, except at the
edge of chaos, there is a one-to-one correspondence between the membrane
potential trajectories and the raster plot. This shows that the neural code is
entirely "in the spikes" in this case. As a key tool, we introduce an order
parameter, easy to compute numerically, and closely related to a natural notion
of entropy, providing a relevant characterization of the computational
capabilities of the network. This allows us to compare the computational
capabilities of leaky Integrate-and-Fire models and conductance-based
models. The present study considers networks with constant input and without
time-dependent plasticity, but the framework has been designed for both
extensions.
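As a loose caricature of the discretization described (a leaky rather than conductance-based model, with all weights and constants assumed for illustration), the following Python sketch advances membrane potentials on a time grid of step $\delta$, so that spikes are only registered at multiples of $\delta$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, delta, theta = 50, 0.1, 1.0       # network size, time-grid step, firing threshold
W = rng.normal(0.0, 0.2, (N, N))     # synaptic weights (illustrative)
g_leak, I_ext = 0.1, 0.3             # leak conductance and constant external input
V = rng.uniform(0.0, theta, N)       # membrane potentials

raster = []
for step in range(1000):
    spikes = (V >= theta).astype(float)   # spikes are effective only on the delta grid
    V = np.where(spikes > 0, 0.0, V)      # reset neurons that fired
    # leaky integration with recurrent spike input over one grid step
    V += delta * (-g_leak * V + W @ spikes + I_ext)
    raster.append(spikes)

raster = np.array(raster)                 # rows: time (units of delta); cols: neurons
print("mean firing rate per grid step:", raster.mean())
```

In the regime the paper calls generic (away from the edge of chaos), the `raster` array alone would suffice to reconstruct the membrane-potential trajectories, which is the sense in which the neural code is "in the spikes".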
Statistics of spike trains, synaptic plasticity and Gibbs distributions.
We introduce a mathematical framework where the statistics of spike trains, produced by neural networks evolving under synaptic plasticity, can be analysed.
Intrinsic adaptation in autonomous recurrent neural networks
A massively recurrent neural network responds, on one side, to input stimuli
and, on the other side, is autonomously active in the absence of sensory
inputs. Stimuli and information processing depend crucially on the qualia of
the autonomous-state dynamics of the ongoing neural activity. This default
neural activity may be dynamically structured in time and space, showing
regular, synchronized, bursting or chaotic activity patterns.
We study the influence of non-synaptic plasticity on the default dynamical
state of recurrent neural networks. The non-synaptic adaptation considered acts
on intrinsic neural parameters, such as the threshold and the gain, and is
driven by the optimization of the information entropy. We observe, in the
presence of the intrinsic adaptation processes, three distinct and globally
attracting dynamical regimes: a regular synchronized, an overall chaotic, and an
intermittent bursting regime. The intermittent bursting regime is characterized
by intervals of regular flow, which are quite insensitive to external stimuli,
interspersed with chaotic bursts, which respond sensitively to input signals. We
discuss these findings in the context of self-organized information processing
and critical brain dynamics.
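One widely used entropy-driven rule of this kind is Triesch's intrinsic-plasticity gradient, which adapts the gain and threshold of a sigmoidal neuron so that its output distribution approaches a fixed-mean exponential, the maximum-entropy distribution on the unit interval under a mean constraint. The paper's rule may differ in detail; the following Python sketch, with illustrative constants, shows the general mechanism:

```python
import numpy as np

def ip_update(x, a, b, eta=1e-3, mu=0.1):
    """One step of Triesch-style intrinsic plasticity for a sigmoid neuron.

    Adapts gain `a` and threshold `b` so that the output distribution of
    y = sigmoid(a*x + b) approaches an exponential with mean `mu`, i.e. the
    maximum-entropy distribution on [0, 1] under a fixed-mean constraint.
    """
    y = 1.0 / (1.0 + np.exp(-(a * x + b)))            # neuron output
    db = eta * (1.0 - (2.0 + 1.0 / mu) * y + (y ** 2) / mu)
    da = eta * (1.0 / a) + x * db                     # gain update reuses the threshold term
    return a + da, b + db, y
```

With inputs drawn from a stationary distribution, iterating `ip_update` drives the output statistics toward the target exponential, matching the entropy optimization of intrinsic parameters described above.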
Reward-modulated Hebbian plasticity as leverage for partially embodied control in compliant robotics
In embodied computation (or morphological computation), part of the complexity of motor control is offloaded to the body dynamics. We demonstrate that a simple Hebbian-like learning rule can be used to train systems with (partial) embodiment, and can be extended beyond the scope of traditional neural networks. To this end, we apply the learning rule to optimize the connection weights of recurrent neural networks with different topologies and for various tasks. We then apply this learning rule to a simulated compliant tensegrity robot by optimizing static feedback controllers that directly exploit the dynamics of the robot body. This leads to partially embodied controllers, i.e., hybrid controllers that naturally integrate the computations performed by the robot body into a neural network architecture. Our results demonstrate the universal applicability of reward-modulated Hebbian learning, as well as the robustness of systems trained with it. This study strengthens our belief that compliant robots can, and should, be seen as computational units rather than as dumb hardware that needs a complex controller. This link between compliant robotics and neural networks is also the main reason for our search for simple, universal learning rules for both neural networks and robotics.
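As a hedged illustration of such a rule (the state vector, reward function, and constants below are placeholders, not the paper's task setup), the following Python sketch correlates exploration noise on a static feedback controller's output with above-baseline reward:

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out = 20, 3
W = np.zeros((n_out, n_in))          # static feedback controller weights
eta, sigma = 0.05, 0.1               # learning rate, exploration-noise scale
r_baseline = 0.0                     # running estimate of expected reward

def reward(u):
    """Placeholder task reward; a real setup would score the robot's
    body trajectory (e.g. locomotion distance) instead."""
    return -np.sum(u ** 2)

for episode in range(500):
    x = rng.uniform(-1.0, 1.0, n_in)     # sensor / body state (stand-in for body dynamics)
    noise = sigma * rng.normal(size=n_out)
    u = W @ x + noise                    # noisy control output = exploration
    r = reward(u)
    # reward-modulated Hebbian rule: correlate exploration with reward advantage
    W += eta * (r - r_baseline) * np.outer(noise, x)
    r_baseline += 0.1 * (r - r_baseline) # update the reward baseline
```

The same update applies unchanged whether `x` comes from a recurrent network state or from physical sensors on a compliant body, which is what makes the rule attractive for partially embodied control.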