Equivalence of Equilibrium Propagation and Recurrent Backpropagation
Recurrent Backpropagation and Equilibrium Propagation are supervised learning
algorithms for fixed-point recurrent neural networks that differ in their
second phase. In the first phase, both algorithms converge to a fixed point
which corresponds to the configuration where the prediction is made. In the
second phase, Equilibrium Propagation relaxes to another nearby fixed point
corresponding to smaller prediction error, whereas Recurrent Backpropagation
uses a side network to compute error derivatives iteratively. In this work we
establish a close connection between these two algorithms. We show that, at
every moment in the second phase, the temporal derivatives of the neural
activities in Equilibrium Propagation are equal to the error derivatives
computed iteratively by Recurrent Backpropagation in the side network. This
work shows that a side network is not required for the computation of error
derivatives, and supports the hypothesis that, in biological neural networks,
temporal derivatives of neural activities may code for error signals.
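
The step-by-step correspondence can be checked numerically on a toy
energy-based network. Below is a minimal NumPy sketch, not code from either
paper: the Hopfield-style energy, the quadratic cost on all units, and every
constant are illustrative assumptions. It runs Equilibrium Propagation's
nudged phase and Recurrent Backpropagation's side-network iteration in
lockstep from the same free fixed point, comparing at each step the rescaled
temporal difference of the EP states with the increment of the RBP
error-derivative iterate.

import numpy as np

rng = np.random.default_rng(0)
n, beta, eta = 6, 1e-6, 0.1
A = rng.normal(scale=0.1, size=(n, n))
W = (A + A.T) / 2                  # symmetric weights: the network is energy-based
b = rng.normal(scale=0.1, size=n)
y = rng.normal(size=n)             # illustrative target (all units are outputs here)
rho = np.tanh

def grad_E(s):
    # gradient of the Hopfield-style energy E(s) = s.s/2 - rho(s).W rho(s)/2 - b.rho(s)
    return s - (1 - rho(s) ** 2) * (W @ rho(s) + b)

def hess_E(s):
    # Hessian of E at s; symmetric because E is a scalar energy function
    d1 = 1 - rho(s) ** 2           # rho'(s)
    d2 = -2 * rho(s) * d1          # rho''(s)
    return np.eye(n) - np.diag(d2 * (W @ rho(s) + b)) - d1[:, None] * W * d1[None, :]

# First phase: relax to the free fixed point s*, where the prediction is made.
s = np.zeros(n)
for _ in range(3000):
    s -= eta * grad_E(s)
s_star = s.copy()

err = s_star - y                   # dC/ds for the cost C(s) = ||s - y||^2 / 2
H = hess_E(s_star)

# Second phase, run in lockstep:
#   EP:  nudged relaxation        s <- s - eta * (grad_E(s) + beta * (s - y))
#   RBP: side-network iteration   z <- z - eta * (H z - err)
s, z = s_star.copy(), np.zeros(n)
deviation = 0.0
for t in range(50):
    s_new = s - eta * (grad_E(s) + beta * (s - y))
    z_new = z - eta * (H @ z - err)
    # EP's temporal difference, rescaled by -1/beta, matches RBP's increment:
    deviation = max(deviation, np.max(np.abs(-(s_new - s) / beta - (z_new - z))))
    s, z = s_new, z_new
print("max step-wise deviation:", deviation)   # shrinks as beta -> 0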
Equilibrium Propagation: Bridging the Gap Between Energy-Based Models and Backpropagation
We introduce Equilibrium Propagation, a learning framework for energy-based
models. It involves only one kind of neural computation, performed in both the
first phase (when the prediction is made) and the second phase of training
(after the target or prediction error is revealed). Although this algorithm
computes the gradient of an objective function just like Backpropagation, it
does not need a special computation or circuit for the second phase, where
errors are implicitly propagated. Equilibrium Propagation shares similarities
with Contrastive Hebbian Learning and Contrastive Divergence while solving the
theoretical issues of both algorithms: our algorithm computes the gradient of a
well-defined objective function. Because the objective function is defined in
terms of local perturbations, the second phase of Equilibrium Propagation
corresponds to only nudging the prediction (fixed point, or stationary
distribution) towards a configuration that reduces prediction error. In the
case of a recurrent multi-layer supervised network, the output units are
slightly nudged towards their target in the second phase, and the perturbation
introduced at the output layer propagates backward in the hidden layers. We
show that the signal 'back-propagated' during this second phase corresponds to
the propagation of error derivatives and encodes the gradient of the objective
function, when the synaptic update corresponds to a standard form of
spike-timing-dependent plasticity. This work makes it more plausible that a
mechanism similar to Backpropagation could be implemented by brains, since
leaky integrator neural computation performs both inference and error
back-propagation in our model. The only local difference between the two phases
is whether synaptic changes are allowed or not.
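
The two-phase scheme is easy to state concretely. Below is a minimal NumPy
sketch under simplifying assumptions (a small all-to-all network with
symmetric weights, a quadratic cost on all units, illustrative constants; this
is not the paper's code). It runs a free phase and a weakly nudged phase,
forms the contrastive Hebbian-style weight update described above, and checks
one component against a finite-difference derivative of the cost at the free
fixed point.

import numpy as np

rng = np.random.default_rng(1)
n, beta, eps, eta = 6, 1e-4, 1e-5, 0.1
A = rng.normal(scale=0.1, size=(n, n))
W = (A + A.T) / 2                  # symmetric weights define the energy
b = rng.normal(scale=0.1, size=n)
y = rng.normal(size=n)             # illustrative target (all units are outputs here)
rho = np.tanh

def relax(W, nudge=0.0, s=None, steps=3000):
    # gradient descent on the total energy E(s) + nudge * C(s)
    s = np.zeros(n) if s is None else s.copy()
    for _ in range(steps):
        dE = s - (1 - rho(s) ** 2) * (W @ rho(s) + b)   # dE/ds, Hopfield-style E
        s -= eta * (dE + nudge * (s - y))               # (s - y) is dC/ds
    return s

# Phase 1 (prediction), then Phase 2 (output weakly nudged toward the target).
s_free = relax(W)
s_nudged = relax(W, nudge=beta, s=s_free)

# EP estimate of dC/dW for the symmetric pair (i, j): a contrastive Hebbian term.
i, j = 0, 1
ep_grad = -(rho(s_nudged)[i] * rho(s_nudged)[j] - rho(s_free)[i] * rho(s_free)[j]) / beta

# Reference: finite difference of the cost at the free fixed point, perturbing
# W_ij and W_ji together so the weight matrix stays symmetric.
def cost(W):
    s = relax(W)
    return 0.5 * np.sum((s - y) ** 2)

P = np.zeros((n, n)); P[i, j] = P[j, i] = 1.0
fd_grad = (cost(W + eps * P) - cost(W - eps * P)) / (2 * eps)
print(ep_grad, fd_grad)            # the two estimates should agree closely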
Spiking Neural Networks for Inference and Learning: A Memristor-based Design Perspective
On metrics of density and power efficiency, neuromorphic technologies have
the potential to surpass mainstream computing technologies in tasks where
real-time functionality, adaptability, and autonomy are essential. While
algorithmic advances in neuromorphic computing are proceeding successfully, the
potential of memristors to improve neuromorphic computing has not yet borne
fruit, primarily because they are often used as drop-in replacements for
conventional memory. However, interdisciplinary approaches anchored in machine
learning theory suggest that multifactor plasticity rules matching neural and
synaptic dynamics to the device capabilities can take better advantage of
memristor dynamics and their stochasticity. Furthermore, such plasticity rules
generally perform far better than classical Spike-Timing-Dependent Plasticity
(STDP) rules. This chapter reviews recent developments in learning with spiking
neural network models and their possible implementation with memristor-based
hardware.
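
To make the contrast with classical STDP concrete, the sketch below shows one
schematic multifactor (three-factor) plasticity step of the kind alluded to
above: a pair-based Hebbian eligibility term built from pre- and postsynaptic
spike traces is gated by a global modulatory factor, and the resulting weight
is snapped to a small set of discrete conductance levels as a crude stand-in
for memristor write resolution. The rule, constants, and device model are
illustrative assumptions, not a particular scheme from the chapter.

import numpy as np

rng = np.random.default_rng(2)
n_pre, n_post = 8, 4
levels = np.linspace(0.0, 1.0, 32)     # assumed 5-bit grid of conductance levels
w = rng.choice(levels, size=(n_post, n_pre))

tau_pre, tau_post, lr, dt = 20.0, 20.0, 0.05, 1.0   # illustrative constants
x_pre = np.zeros(n_pre)                 # presynaptic spike trace
x_post = np.zeros(n_post)               # postsynaptic spike trace

for t in range(200):
    spk_pre = (rng.random(n_pre) < 0.05).astype(float)    # toy random input spikes
    spk_post = (rng.random(n_post) < 0.05).astype(float)
    modulator = 1.0                     # third factor, e.g. a reward or error signal

    # Factors 1 and 2: exponentially decaying traces of recent spikes.
    x_pre += dt * (-x_pre / tau_pre) + spk_pre
    x_post += dt * (-x_post / tau_post) + spk_post

    # Pair-based STDP-shaped eligibility: potentiate pre-before-post pairings,
    # depress post-before-pre pairings.
    elig = np.outer(spk_post, x_pre) - np.outer(x_post, spk_pre)

    # Factor 3 gates the Hebbian term; then model the device write by snapping
    # the analog update to the nearest allowed conductance level.
    w = w + lr * modulator * elig
    w = levels[np.argmin(np.abs(w[..., None] - levels), axis=-1)]

print(w)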
Bidirectional Learning in Recurrent Neural Networks Using Equilibrium Propagation
Neurobiologically plausible learning algorithms for recurrent neural networks that can perform supervised learning are a neglected area of study. Equilibrium propagation is a recent synthesis of several ideas in biological and artificial neural network research that uses a continuous-time, energy-based neural model with a local learning rule. However, despite dealing with recurrent networks, equilibrium propagation has only been applied to discriminative categorization tasks. This thesis generalizes equilibrium propagation to bidirectional learning with asymmetric weights. By simultaneously learning the discriminative and generative transformations for a set of data points and their corresponding category labels, bidirectional equilibrium propagation exploits recurrence and weight asymmetry to share related but non-identical representations within the network. Experiments on an artificial dataset demonstrate the ability to learn both transformations, as well as the ability of asymmetric-weight networks to generalize their discriminative training to the untrained generative task.
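
The bidirectional idea can be sketched compactly, with the caveat that the
dynamics, the clamping scheme, and the contrastive update below are plausible
instantiations assumed for illustration, not the thesis' actual model. A
single asymmetric weight matrix connects input, hidden, and output units; the
discriminative direction clamps the inputs and nudges the outputs toward their
target, the generative direction does the reverse, and both directions reuse
the same two-phase, local update.

import numpy as np

rng = np.random.default_rng(3)
nx, nh, ny = 4, 8, 3
N = nx + nh + ny
M = rng.normal(scale=0.1, size=(N, N))       # full asymmetric weight matrix
np.fill_diagonal(M, 0.0)
rho = np.tanh
eta, beta, lr = 0.1, 0.1, 0.01
IX, IY = slice(0, nx), slice(nx + nh, N)     # input and output unit slices

def relax(M, s, clamp, val, nudge=None, steps=300):
    # leaky rate dynamics ds/dt = -s + rho(M s); clamped units held fixed,
    # nudged units weakly pulled toward their target with strength beta
    s = s.copy(); s[clamp] = val
    for _ in range(steps):
        ds = -s + rho(M @ s)
        if nudge is not None:
            out, tgt = nudge
            ds[out] += beta * (tgt - s[out])
        s = s + eta * ds
        s[clamp] = val
    return s

def ep_step(M, clamp, val, out, tgt):
    # two-phase, equilibrium-propagation-style step in one direction
    s_free = relax(M, np.zeros(N), clamp, val)
    s_nudge = relax(M, s_free, clamp, val, nudge=(out, tgt))
    # contrastive Hebbian-style update applied to the asymmetric weights
    M = M + (lr / beta) * (np.outer(rho(s_nudge), rho(s_nudge))
                           - np.outer(rho(s_free), rho(s_free)))
    np.fill_diagonal(M, 0.0)
    return M, s_free[out]

x, y = rng.normal(size=nx), rng.normal(size=ny)
M, y_pred = ep_step(M, IX, x, IY, y)   # discriminative: clamp x, nudge toward y
M, x_pred = ep_step(M, IY, y, IX, x)   # generative: clamp y, nudge toward x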