A Biologically Plausible Learning Rule for Deep Learning in the Brain
Researchers have proposed that deep learning, which is providing important
progress in a wide range of high complexity tasks, might inspire new insights
into learning in the brain. However, the methods used for deep learning by
artificial neural networks are biologically unrealistic and would need to be
replaced by biologically realistic counterparts. Previous biologically
plausible reinforcement learning rules, like AGREL and AuGMEnT, showed
promising results but focused on shallow networks with three layers. Will these
learning rules also generalize to networks with more layers and can they handle
tasks of higher complexity? We demonstrate the learning scheme on classical and
hard image-classification benchmarks, namely MNIST, CIFAR10 and CIFAR100, cast
as direct reward tasks, for fully connected, convolutional, and locally
connected architectures. We show that our learning rule, Q-AGREL, performs
comparably to supervised learning via error-backpropagation, with this type of
trial-and-error reinforcement learning requiring only 1.5-2.5 times more
epochs, even when classifying 100 different classes as in CIFAR100. Our results
provide new insights into how deep learning may be implemented in the brain.
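The trial-and-error setup described above can be illustrated with a minimal single-layer sketch: the network samples a class as an action, receives a scalar reward (1 if correct, 0 otherwise), and only the synapses of the selected output unit are updated by the reward-prediction error. This is a generic reward-modulated scheme in the spirit of AGREL, not the actual Q-AGREL rule; the toy data, dimensions, and hyperparameters are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy 3-class problem; labels come from a hidden linear rule (hypothetical data).
n_in, n_cls, n = 10, 3, 300
X = rng.normal(size=(n, n_in))
y = np.argmax(X @ rng.normal(size=(n_in, n_cls)), axis=1)

W = np.zeros((n_in, n_cls))
lr = 0.1
for epoch in range(100):
    for i in range(n):
        p = softmax(X[i] @ W)
        a = rng.choice(n_cls, p=p)     # trial: sample an action (class guess)
        r = 1.0 if a == y[i] else 0.0  # scalar reward, no explicit target vector
        # Teaching signal: reward minus the expected reward of the chosen
        # action; only the selected unit's incoming weights are updated.
        grad = np.zeros(n_cls)
        grad[a] = r - p[a]
        W += lr * np.outer(X[i], grad)

# Greedy accuracy after training (chance is 1/3 on this task).
acc = (np.argmax(X @ W, axis=1) == y).mean()
```

The key contrast with supervised learning is visible in the update: the network never sees the correct label directly, only a scalar reward for its own guess, which is why such schemes typically need more epochs than error-backpropagation.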
Deep supervised learning using local errors
Error backpropagation is a highly effective mechanism for learning
high-quality hierarchical features in deep networks. Updating the features or
weights in one layer, however, requires waiting for the propagation of error
signals from higher layers. Learning using delayed and non-local errors makes
it hard to reconcile backpropagation with the learning mechanisms observed in
biological neural networks as it requires the neurons to maintain a memory of
the input long enough until the higher-layer errors arrive. In this paper, we
propose an alternative learning mechanism where errors are generated locally in
each layer using fixed, random auxiliary classifiers. Lower layers could thus
be trained independently of higher layers and training could either proceed
layer by layer, or simultaneously in all layers using local error information.
We address biological plausibility concerns such as weight symmetry
requirements and show that the proposed learning mechanism based on fixed,
broad, and random tuning of each neuron to the classification categories
outperforms the biologically-motivated feedback alignment learning technique on
the MNIST, CIFAR10, and SVHN datasets, approaching the performance of standard
backpropagation. Our approach highlights a potential biological mechanism for
the supervised, or task-dependent, learning of feature hierarchies. In
addition, we show that it is well suited for learning deep networks in custom
hardware where it can drastically reduce memory traffic and data communication
overheads.
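The mechanism described in this abstract can be sketched in a few lines of NumPy: each hidden layer gets its own fixed, random auxiliary classifier that is never trained, and each layer's weights are updated from the error of its own classifier only, so no error signal ever travels back from higher layers. This is a toy sketch under assumed dimensions, data, and hyperparameters, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy data: 10 classes, labels from a hidden linear map (hypothetical setup).
n_in, n_h, n_cls, n = 20, 64, 10, 512
X = rng.normal(size=(n, n_in))
y = np.argmax(X @ rng.normal(size=(n_in, n_cls)), axis=1)
Y = np.eye(n_cls)[y]

# Trainable forward weights.
W1 = rng.normal(0, 0.1, (n_in, n_h))
W2 = rng.normal(0, 0.1, (n_h, n_h))
# Fixed, random auxiliary classifiers: one per hidden layer, never updated.
A1 = rng.normal(0, 0.1, (n_h, n_cls))
A2 = rng.normal(0, 0.1, (n_h, n_cls))

lr = 0.1
for step in range(300):
    # Layer 1: forward pass, then a purely local error from its own classifier.
    h1 = np.maximum(0.0, X @ W1)
    d1 = ((softmax(h1 @ A1) - Y) @ A1.T) * (h1 > 0)
    # Layer 2: uses h1 as input, but its error never propagates back to W1.
    h2 = np.maximum(0.0, h1 @ W2)
    d2 = ((softmax(h2 @ A2) - Y) @ A2.T) * (h2 > 0)
    W1 -= lr * X.T @ d1 / n
    W2 -= lr * h1.T @ d2 / n

# Evaluate through the top layer's fixed classifier (chance is 0.1).
h1 = np.maximum(0.0, X @ W1)
h2 = np.maximum(0.0, h1 @ W2)
acc = (np.argmax(h2 @ A2, axis=1) == y).mean()
```

Because `d1` depends only on `h1` and the fixed `A1`, layer 1 can be updated as soon as its own activations are available; this is the property that removes the need to buffer activations while waiting for higher-layer errors, which is what reduces memory traffic in hardware implementations.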