Unsupervised Domain Adaptation by Backpropagation
Top-performing deep architectures are trained on massive amounts of labeled
data. In the absence of labeled data for a certain task, domain adaptation
often provides an attractive option given that labeled data of similar nature
but from a different domain (e.g. synthetic images) are available. Here, we
propose a new approach to domain adaptation in deep architectures that can be
trained on large amounts of labeled data from the source domain and large
amounts of unlabeled data from the target domain (no labeled target-domain
data is necessary).
As the training progresses, the approach promotes the emergence of "deep"
features that are (i) discriminative for the main learning task on the source
domain and (ii) invariant with respect to the shift between the domains. We
show that this adaptation behaviour can be achieved in almost any feed-forward
model by augmenting it with a few standard layers and a simple new gradient
reversal layer. The resulting augmented architecture can be trained using
standard backpropagation.
Overall, the approach can be implemented with little effort using any of the
deep-learning packages. The method performs very well in a series of image
classification experiments, achieving an adaptation effect in the presence of
large domain shifts and outperforming the previous state of the art on the
Office datasets.
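The gradient reversal layer at the heart of this approach behaves as the identity during the forward pass and multiplies the gradient by a negative constant during the backward pass, so the layers below it are driven away from features that help a domain classifier. A minimal sketch in PyTorch (the abstract names no framework; the class name and the lambda_ scaling argument are assumptions):

```python
# Minimal gradient-reversal sketch in PyTorch; names and the lambda_
# scaling hook are assumptions, not the paper's notation.
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lambda_
    in the backward pass, so the feature extractor beneath it is
    trained to confuse the domain classifier sitting on top."""

    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing into the features;
        # lambda_ itself receives no gradient, hence the None.
        return -ctx.lambda_ * grad_output, None

def grad_reverse(x, lambda_=1.0):
    return GradReverse.apply(x, lambda_)
```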
Domain-Adversarial Training of Neural Networks
We introduce a new representation learning approach for domain adaptation, in
which data at training and test time come from similar but different
distributions. Our approach is directly inspired by the theory on domain
adaptation suggesting that, for effective domain transfer to be achieved,
predictions must be made based on features that cannot discriminate between the
training (source) and test (target) domains. The approach implements this idea
in the context of neural network architectures that are trained on labeled data
from the source domain and unlabeled data from the target domain (no labeled
target-domain data is necessary). As the training progresses, the approach
promotes the emergence of features that are (i) discriminative for the main
learning task on the source domain and (ii) indiscriminate with respect to the
shift between the domains. We show that this adaptation behaviour can be
achieved in almost any feed-forward model by augmenting it with a few standard
layers and a new gradient reversal layer. The resulting augmented architecture
can be trained using standard backpropagation and stochastic gradient descent,
and can thus be implemented with little effort using any of the deep learning
packages. We demonstrate the success of our approach for two distinct
classification problems (document sentiment analysis and image classification),
where state-of-the-art domain adaptation performance on standard benchmarks is
achieved. We also validate the approach for a descriptor learning task in the
context of a person re-identification application.
Comment: Published in JMLR: http://jmlr.org/papers/v17/15-239.htm
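The three-part architecture this abstract describes, a shared feature extractor, a label predictor trained on labeled source data, and a domain classifier fed through the gradient reversal layer, can be sketched as follows. It reuses grad_reverse from the snippet above; the module names and layer sizes are illustrative assumptions, not the paper's:

```python
# Compact DANN-style architecture sketch; layer sizes and module names
# are illustrative assumptions. Reuses grad_reverse from above.
import torch.nn as nn

class DANN(nn.Module):
    def __init__(self, n_features=256, n_classes=10):
        super().__init__()
        # Shared feature extractor
        self.features = nn.Sequential(
            nn.Linear(784, n_features), nn.ReLU(),
        )
        # Label predictor: trained on labeled source data only
        self.label_head = nn.Linear(n_features, n_classes)
        # Domain classifier: sees both source and target features
        self.domain_head = nn.Linear(n_features, 2)

    def forward(self, x, lambda_=1.0):
        f = self.features(x)
        class_logits = self.label_head(f)
        # Gradient reversal: the domain loss trains the domain head to
        # separate the domains while pushing the features to make them
        # indistinguishable.
        domain_logits = self.domain_head(grad_reverse(f, lambda_))
        return class_logits, domain_logits
```

Training then sums a classification loss on source batches with a domain-discrimination loss on mixed source/target batches; because the reversal is folded into backpropagation, plain SGD suffices, as the abstract notes.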
Training Multi-layer Spiking Neural Networks using NormAD based Spatio-Temporal Error Backpropagation
Spiking neural networks (SNNs) have garnered a great amount of interest for
supervised and unsupervised learning applications. This paper deals with the
problem of training multi-layer feedforward SNNs. The non-linear
integrate-and-fire dynamics employed by spiking neurons make it difficult to
train SNNs to generate desired spike trains in response to a given input. To
tackle this, the problem of training a multi-layer SNN is first formulated as
an optimization problem whose objective function is based on the deviation in
membrane potential rather than on spike arrival instants. Then,
an optimization method named Normalized Approximate Descent (NormAD),
hand-crafted for such non-convex optimization problems, is employed to derive
the iterative synaptic weight update rule. Next, it is reformulated to
efficiently train multi-layer SNNs, and is shown to effectively perform
spatio-temporal error backpropagation. The learning rule is validated by
training multi-layer SNNs to solve a spike-based formulation of the XOR problem
as well as on generic spike-based training problems. Thus, the new algorithm is
a key step towards building deep spiking neural networks capable of efficient
event-triggered learning.
Comment: 19 pages, 10 figures
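The abstract's central move, optimizing a membrane-potential-based objective with a Normalized Approximate Descent update wherever observed and desired spike trains disagree, can be illustrated with a toy single-neuron sketch. Everything below (the LIF discretization, kernel shapes, constants, and the normalization step) is a hedged reconstruction of the flavor of NormAD, not the paper's exact multi-layer rule:

```python
# Toy discrete-time illustration of a NormAD-flavored update for one LIF
# neuron: weights move along normalized input traces wherever the observed
# and desired spike trains disagree. All constants are assumptions.
import numpy as np

rng = np.random.default_rng(0)
T, n_in, dt = 500, 20, 1e-3          # steps, input synapses, step (s)
tau_m, tau_s = 10e-3, 5e-3           # membrane / synaptic time constants

def lif_run(inp_current, w, v_th=1.0):
    """Leaky integrate-and-fire forward pass; returns a binary spike train."""
    v, spikes = 0.0, np.zeros(T)
    for t in range(T):
        v += dt / tau_m * (-v + w @ inp_current[:, t])
        if v >= v_th:
            spikes[t], v = 1.0, 0.0  # spike and reset
    return spikes

# Random input spikes filtered by an exponential synaptic kernel.
in_spikes = (rng.random((n_in, T)) < 0.05).astype(float)
kernel = np.exp(-np.arange(T) * dt / tau_s)
d = np.apply_along_axis(lambda s: np.convolve(s, kernel)[:T], 1, in_spikes)

w = rng.normal(0.0, 0.1, n_in)
desired = np.zeros(T)
desired[100::100] = 1.0              # target spike train

for _ in range(200):                 # NormAD-style iterations
    err = desired - lif_run(d, w)    # spike-train error e(t)
    # Normalized approximate descent: at each error instant, step along
    # the normalized input trace driving the membrane potential.
    norms = np.linalg.norm(d, axis=0) + 1e-12
    w += 0.1 * (d / norms) @ err / T
```

The paper itself derives the sensitivity term from the LIF impulse response and propagates the same normalized update through hidden layers, which is what makes it a spatio-temporal form of error backpropagation.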