Unsupervised Domain Adaptation with Similarity Learning
The objective of unsupervised domain adaptation is to leverage features from
a labeled source domain and learn a classifier for an unlabeled target domain,
with a similar but different data distribution. Most deep learning approaches
to domain adaptation consist of two steps: (i) learn features that preserve a
low risk on labeled samples (source domain) and (ii) make the features from
both domains to be as indistinguishable as possible, so that a classifier
trained on the source can also be applied on the target domain. In general, the
classifiers in step (i) consist of fully-connected layers applied directly on
the indistinguishable features learned in (ii). In this paper, we propose a
different way to do the classification, using similarity learning. The proposed
method learns a pairwise similarity function in which classification can be
performed by computing similarity between prototype representations of each
category. The domain-invariant features and the categorical prototype
representations are learned jointly and in an end-to-end fashion. At inference
time, images from the target domain are compared to the prototypes, and the
label of the best-matching prototype is output. The
approach is simple, scalable, and effective. We show that our model achieves
state-of-the-art performance in different unsupervised domain adaptation
scenarios.
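The inference step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the domain-invariant features and per-category prototypes have already been learned, and uses cosine similarity as the pairwise similarity function.

```python
import numpy as np

def prototype_classify(target_features, prototypes):
    """Assign each target sample the label of its most similar
    category prototype (cosine similarity).

    target_features: (n, d) array of domain-invariant features.
    prototypes:      (k, d) array, one learned prototype per category.
    Returns an (n,) array of predicted class indices.
    """
    # L2-normalise so the dot product equals cosine similarity
    f = target_features / np.linalg.norm(target_features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = f @ p.T              # (n, k) pairwise similarities
    return sims.argmax(axis=1)  # index of best-matching prototype per image
```

In the method itself, the similarity function, features, and prototypes are trained jointly end-to-end; only the nearest-prototype decision rule is shown here.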
Stochastic Adversarial Gradient Embedding for Active Domain Adaptation
Unsupervised Domain Adaptation (UDA) aims to bridge the gap between a source
domain, where labelled data are available, and a target domain only represented
with unlabelled data. While domain-invariant representations have dramatically
improved the adaptability of models, guaranteeing their transferability
remains a challenging problem. This paper addresses this problem by using
active learning to annotate a small budget of target data. Although this setup,
called Active Domain Adaptation (ADA), deviates from UDA's standard setup, a
wide range of practical applications are faced with this situation. To this
end, we introduce \textit{Stochastic Adversarial Gradient Embedding}
(SAGE), a framework that makes a triple contribution to ADA. First, we select
for annotation target samples that are likely to improve the representations'
transferability by measuring the variation, before and after annotation, of the
transferability loss gradient. Second, we increase sampling diversity by
promoting different gradient directions. Third, we introduce a novel training
procedure for actively incorporating target samples when learning invariant
representations. SAGE is based on solid theoretical ground and validated on
various UDA benchmarks against several baselines. Our empirical investigation
demonstrates that SAGE combines the strengths of uncertainty and diversity
sampling and substantially improves representation transferability.
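The second and third contributions, selecting samples with large gradients while promoting diverse gradient directions, can be illustrated with a simplified greedy selection over per-sample gradient embeddings. This is a hedged sketch, not SAGE itself: it assumes the gradient embeddings have already been computed and uses farthest-point selection as one simple way to trade off magnitude against diversity.

```python
import numpy as np

def select_for_annotation(grad_embeddings, budget):
    """Greedy farthest-point selection over per-sample gradient
    embeddings: prefer samples whose gradients are both large
    (informative) and mutually dissimilar (diverse).

    grad_embeddings: (n, d) array, one gradient embedding per
                     unlabelled target sample.
    budget:          number of samples to send for annotation.
    Returns a list of selected sample indices.
    """
    # Seed with the sample whose gradient has the largest norm
    norms = np.linalg.norm(grad_embeddings, axis=1)
    selected = [int(norms.argmax())]
    # Distance from every sample to its nearest selected sample
    dists = np.linalg.norm(grad_embeddings - grad_embeddings[selected[0]], axis=1)
    while len(selected) < budget:
        idx = int(dists.argmax())  # most dissimilar gradient direction so far
        selected.append(idx)
        new_d = np.linalg.norm(grad_embeddings - grad_embeddings[idx], axis=1)
        dists = np.minimum(dists, new_d)
    return selected
```

The actual framework additionally measures the variation of the transferability loss gradient before and after annotation and incorporates the annotated samples into adversarial training; those steps are beyond this sketch.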