Unsupervised Learning by Competing Hidden Units
It is widely believed that the backpropagation algorithm is essential for
learning good feature detectors in early layers of artificial neural networks,
so that these detectors are useful for the task performed by the higher layers
of that neural network. At the same time, the traditional form of
backpropagation is biologically implausible. In the present paper we propose an
unusual learning rule, which has a degree of biological plausibility, and which
is motivated by Hebb's idea that the change in synapse strength should be
local, i.e., depend only on the activities of the pre- and postsynaptic
neurons. We design a learning algorithm that utilizes global inhibition in the
hidden layer, and is capable of learning early feature detectors in a
completely unsupervised way. These learned lower layer feature detectors can be
used to train higher layer weights in a usual supervised way so that the
performance of the full network is comparable to the performance of standard
feedforward networks trained end-to-end with the backpropagation algorithm.
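The abstract does not spell out the exact update rule, but the flavor of a local, competitive Hebbian rule with global inhibition can be sketched as a winner-take-all update. This is a simplified illustration, not the paper's actual algorithm; `competitive_hebbian_step` and all its parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def competitive_hebbian_step(W, x, lr=0.01):
    """One local update. Global inhibition is approximated by
    winner-take-all: only the most active hidden unit updates.
    The update depends only on presynaptic activity (x) and the
    winning unit's own weights, so the rule is local."""
    h = W @ x                          # hidden activations
    winner = np.argmax(h)              # global inhibition (winner-take-all)
    W[winner] += lr * (x - W[winner])  # Hebbian growth with Oja-like decay
    return W

# toy usage: 10 hidden units learning from 5-dimensional inputs
W = rng.normal(size=(10, 5))
for _ in range(1000):
    x = rng.normal(size=5)
    W = competitive_hebbian_step(W, x)
```

The decay term `-lr * W[winner]` keeps the weight vectors bounded without any global normalization step, which is one common way to make a pure Hebbian rule stable.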
Biologically plausible deep learning -- but how far can we go with shallow networks?
Training deep neural networks with the error backpropagation algorithm is
considered implausible from a biological perspective. Numerous recent
publications suggest elaborate models for biologically plausible variants of
deep learning, typically defining success as reaching around 98% test accuracy
on the MNIST data set. Here, we investigate how far we can go on digit (MNIST)
and object (CIFAR10) classification with biologically plausible, local learning
rules in a network with one hidden layer and a single readout layer. The hidden
layer weights are either fixed (random or random Gabor filters) or trained with
unsupervised methods (PCA, ICA or Sparse Coding) that can be implemented by
local learning rules. The readout layer is trained with a supervised, local
learning rule. We first implement these models with rate neurons. This
comparison reveals, first, that unsupervised learning does not lead to better
performance than fixed random projections or Gabor filters for large hidden
layers. Second, networks with localized receptive fields perform significantly
better than networks with all-to-all connectivity and can reach backpropagation
performance on MNIST. We then implement two of the networks - fixed, localized,
random & random Gabor filters in the hidden layer - with spiking leaky
integrate-and-fire neurons and spike timing dependent plasticity to train the
readout layer. These spiking models achieve > 98.2% test accuracy on MNIST,
which is close to the performance of rate networks with one hidden layer
trained with backpropagation. The performance of our shallow network models is
comparable to most current biologically plausible models of deep learning.
Furthermore, our results with a shallow spiking network provide an important
reference and suggest the use of datasets other than MNIST for testing the
performance of future models of biologically plausible deep learning.
Comment: 14 pages, 4 figures
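One of the rate-neuron baselines above (fixed random hidden weights, readout trained with a supervised local rule) can be sketched roughly as follows. `shallow_net` and its parameters are illustrative assumptions, not the paper's implementation; the delta rule stands in for "a supervised, local learning rule" at the readout:

```python
import numpy as np

rng = np.random.default_rng(0)

def shallow_net(X_train, y_train, X_test, n_hidden=500, lr=0.01, epochs=5):
    """One hidden layer with fixed random weights (never trained),
    plus a linear readout trained with the local delta rule."""
    d = X_train.shape[1]
    n_classes = int(y_train.max()) + 1
    W_h = rng.normal(scale=1 / np.sqrt(d), size=(d, n_hidden))  # fixed projection
    W_o = np.zeros((n_hidden, n_classes))
    Y = np.eye(n_classes)[y_train]                 # one-hot targets
    H = np.maximum(X_train @ W_h, 0)               # fixed random ReLU features
    for _ in range(epochs):
        # delta rule: update uses only presynaptic activity H and readout error
        W_o += lr * H.T @ (Y - H @ W_o) / len(X_train)
    H_test = np.maximum(X_test @ W_h, 0)
    return (H_test @ W_o).argmax(axis=1)

# toy usage: two 5-dimensional Gaussian clusters
Xtr = np.vstack([rng.normal(2, 1, (50, 5)), rng.normal(-2, 1, (50, 5))])
ytr = np.array([0] * 50 + [1] * 50)
Xte = np.vstack([rng.normal(2, 1, (10, 5)), rng.normal(-2, 1, (10, 5))])
preds = shallow_net(Xtr, ytr, Xte)
```

Replacing the dense `W_h` with localized receptive fields (each hidden unit connected only to a patch of the input) is the variant the abstract reports as reaching backpropagation-level performance on MNIST.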
Adversarial Discriminative Domain Adaptation
Adversarial learning methods are a promising approach to training robust deep
networks, and can generate complex samples across diverse domains. They can
also improve recognition despite the presence of domain shift or dataset bias:
several adversarial approaches to unsupervised domain adaptation have recently
been introduced, which reduce the difference between the training and test
domain distributions and thus improve generalization performance. Prior
generative approaches show compelling visualizations, but are not optimal on
discriminative tasks and can be limited to smaller shifts. Prior discriminative
approaches could handle larger domain shifts, but imposed tied weights on the
model and did not exploit a GAN-based loss. We first outline a novel
generalized framework for adversarial adaptation, which subsumes recent
state-of-the-art approaches as special cases, and we use this generalized view
to better relate the prior approaches. We propose a previously unexplored
instance of our general framework which combines discriminative modeling,
untied weight sharing, and a GAN loss, which we call Adversarial Discriminative
Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably
simpler than competing domain-adversarial methods, and demonstrate the promise
of our approach by exceeding state-of-the-art unsupervised adaptation results
on standard cross-domain digit classification tasks and a new, more difficult
cross-modality object classification task.
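The GAN-based loss that distinguishes ADDA from prior discriminative approaches can be illustrated with the standard inverted-label objectives: the discriminator learns to tell source features from target features, while the (untied) target encoder is trained to fool it. `adda_losses` is a hypothetical helper for exposition, not the authors' code:

```python
import numpy as np

def adda_losses(d_src, d_tgt, eps=1e-8):
    """GAN-style adversarial adaptation losses.
    d_src, d_tgt: discriminator's probability that a feature vector
    came from the source domain, for source and target batches.
    Discriminator: push d_src toward 1 and d_tgt toward 0.
    Target encoder: inverted-label loss, push d_tgt toward 1."""
    loss_disc = -(np.log(d_src + eps).mean() + np.log(1 - d_tgt + eps).mean())
    loss_map = -np.log(d_tgt + eps).mean()  # minimized by fooling the discriminator
    return loss_disc, loss_map
```

In training, the two losses are minimized in alternation: the discriminator step updates only the discriminator, and the mapping step updates only the target encoder, with the source encoder kept fixed after supervised pretraining.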