A review of domain adaptation without target labels
Domain adaptation has become a prominent problem setting in machine learning
and related fields. This review asks: how can a classifier learn from a source
domain and generalize to a target domain? We present a categorization of
approaches into what we refer to as sample-based, feature-based, and
inference-based methods. Sample-based methods focus on weighting individual
observations during training based on their importance to the target domain.
Feature-based methods revolve around mapping, projecting, and representing
features such that a source classifier performs well on the target domain.
Inference-based methods incorporate adaptation into the parameter estimation
procedure, for instance through constraints on the optimization. Additionally,
we review a number of conditions that allow bounds on the cross-domain
generalization error to be formulated. Our categorization highlights recurring
ideas and raises questions important to further research.
Comment: 20 pages, 5 figures
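The sample-based idea can be illustrated with a small sketch (not taken from the review itself): estimate importance weights p_target(x)/p_source(x) via the odds of a logistic "domain classifier", then train a weighted source classifier. All data, names, and hyperparameters below are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, sample_weight=None, lr=0.5, steps=3000):
    """Plain-numpy logistic regression (bias folded in), optional sample weights."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    sw = np.ones(len(y)) if sample_weight is None else sample_weight
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = sigmoid(Xb @ w)
        w -= lr * Xb.T @ (sw * (p - y)) / sw.sum()
    return w

rng = np.random.default_rng(0)
# Covariate shift (illustrative): same labelling rule, shifted input density.
Xs = rng.normal(0.0, 1.0, size=(1000, 1))            # labelled source inputs
ys = (Xs[:, 0] > 0.5).astype(float)                  # shared labelling rule
Xt = rng.normal(1.5, 1.0, size=(1000, 1))            # unlabelled target inputs

# Domain classifier P(target | x); its odds estimate p_t(x) / p_s(x).
Xd = np.vstack([Xs, Xt])
d = np.concatenate([np.zeros(len(Xs)), np.ones(len(Xt))])
wd = fit_logistic(Xd, d)
p_t = sigmoid(np.hstack([Xs, np.ones((len(Xs), 1))]) @ wd)
weights = p_t / (1.0 - p_t)                          # importance weights

# Weighted source classifier: emphasises source points that look "target-like".
wc = fit_logistic(Xs, ys, sample_weight=weights)
```

Source points lying in the high-density region of the target receive large weights, so the classifier is fit as if trained on target-distributed data.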
Optimal Transport for Domain Adaptation
Domain adaptation from one data space (or domain) to another is one of the
most challenging tasks of modern data analytics. If the adaptation is done
correctly, models built on a specific data space become more robust when
confronted with data depicting the same semantic concepts (the classes) but
observed by another observation system with its own specificities. Among the
many strategies proposed for adapting one domain to another, finding a common
representation has shown excellent properties: with a common representation
for both domains, a single classifier can be effective in both and can use
labeled samples from the source domain to predict the unlabeled samples of the
target domain. In this paper, we propose a regularized unsupervised optimal
transportation model to align the representations of the source and target
domains. We learn a transportation plan matching both probability density
functions, with a regularization that constrains labeled samples of the same
class in the source domain to remain close during transport. In this way, we
exploit both the limited label information in the source domain and the
unlabeled distributions observed in both domains. Experiments on toy and
challenging real visual adaptation examples show the interest of the method,
which consistently outperforms state-of-the-art approaches.
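The core alignment step can be sketched in a few lines of numpy: an entropy-regularized (Sinkhorn) transport plan between empirical distributions, followed by a barycentric mapping of source samples into the target domain. This is a bare-bones illustration with toy data; the paper's class-based regularization term is omitted.

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.05, iters=1000):
    """Entropy-regularised OT: plan with row marginals a and column marginals b."""
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 0.5, size=(200, 2))                         # source samples
Xt = rng.normal(0.0, 0.5, size=(250, 2)) + np.array([2.0, 1.0])  # shifted target

a = np.full(len(Xs), 1.0 / len(Xs))                  # uniform empirical marginals
b = np.full(len(Xt), 1.0 / len(Xt))
C = ((Xs[:, None, :] - Xt[None, :, :]) ** 2).sum(-1)  # squared Euclidean cost
C = C / C.max()                                       # rescale for stability
G = sinkhorn(a, b, C)

# Barycentric mapping: transport each source sample into the target domain;
# a classifier trained on the mapped source can then be applied to Xt directly.
Xs_mapped = (G @ Xt) / G.sum(axis=1, keepdims=True)
```

After mapping, the transported source cloud sits on top of the target cloud, which is exactly what lets a single classifier serve both domains.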
Adversarial Discriminative Domain Adaptation
Adversarial learning methods are a promising approach to training robust deep
networks, and can generate complex samples across diverse domains. They also
can improve recognition despite the presence of domain shift or dataset bias:
several adversarial approaches to unsupervised domain adaptation have recently
been introduced, which reduce the difference between the training and test
domain distributions and thus improve generalization performance. Prior
generative approaches show compelling visualizations, but are not optimal on
discriminative tasks and can be limited to smaller shifts. Prior discriminative
approaches could handle larger domain shifts, but imposed tied weights on the
model and did not exploit a GAN-based loss. We first outline a novel
generalized framework for adversarial adaptation, which subsumes recent
state-of-the-art approaches as special cases, and we use this generalized view
to better relate the prior approaches. We propose a previously unexplored
instance of our general framework which combines discriminative modeling,
untied weight sharing, and a GAN loss, which we call Adversarial Discriminative
Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably
simpler than competing domain-adversarial methods, and demonstrate the promise
of our approach by exceeding state-of-the-art unsupervised adaptation results
on standard cross-domain digit classification tasks and a new, more difficult
cross-modality object classification task.
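The adversarial alternation the abstract describes can be caricatured on scalar features: a fixed source encoder, an untied target encoder, and a domain discriminator trained against it with the inverted-label GAN loss. This is a toy numpy sketch, not ADDA's deep-network implementation; every number below is illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
xs = rng.normal(0.0, 1.0, 500)   # source features; source encoder fixed (identity)
xt = rng.normal(4.0, 1.0, 500)   # target features under domain shift

a, b = 1.0, 0.0                  # untied target encoder: z = a * x + b
u, c = 0.0, 0.0                  # domain discriminator: D(z) = sigmoid(u * z + c)
lr_d, lr_e = 0.2, 0.02

for _ in range(400):
    # Discriminator steps: encoded source labelled 1, encoded target labelled 0.
    for _ in range(5):
        z = np.concatenate([xs, a * xt + b])
        y = np.concatenate([np.ones(500), np.zeros(500)])
        err = sigmoid(u * z + c) - y
        u -= lr_d * np.mean(err * z)
        c -= lr_d * np.mean(err)
    # Target-encoder step with inverted labels (standard GAN loss): move
    # encoded target features toward where the discriminator says "source".
    zt = a * xt + b
    err = sigmoid(u * zt + c) - 1.0
    a -= lr_e * np.mean(err * u * xt)
    b -= lr_e * np.mean(err * u)

zt = a * xt + b                  # adapted target features
```

At convergence the discriminator can no longer tell the encoded target from the source, so a classifier trained on source features transfers to the adapted target features.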
Joint Distribution Optimal Transportation for Domain Adaptation
This paper deals with the unsupervised domain adaptation problem, where one
wants to estimate a prediction function $f$ in a given target domain without
any labeled sample, by exploiting the knowledge available from a source domain
where labels are known. Our work makes the following assumption: there exists
a non-linear transformation between the joint feature/label space
distributions of the two domains, $\mathcal{P}_s(X,Y)$ and
$\mathcal{P}_t(X,Y)$. We propose a solution to this problem with optimal
transport that allows us to recover an estimated target joint distribution
$\mathcal{P}^f_t = (X, f(X))$ by simultaneously optimizing the optimal
coupling and $f$. We show that our method corresponds to the minimization of a
bound on the target error, and we provide an efficient algorithmic solution
for which convergence is proved. The versatility of our approach, in terms of
both the class of hypotheses and the loss functions, is demonstrated on
real-world classification and regression problems, for which we reach or
surpass state-of-the-art results.
Comment: Accepted for publication at NIPS 2017
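A minimal 1-D regression sketch of the alternation described above: couple the joint (feature, label) distributions under the current predictor, then refit the predictor on transported labels. The entropic solver, toy data, and hyperparameters are illustrative stand-ins, not the paper's exact algorithm.

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.01, iters=1000):
    """Entropy-regularised OT plan with marginals a (rows) and b (columns)."""
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
xs = np.sort(rng.uniform(0.0, 1.0, 150))         # source inputs
ys = 2.0 * xs                                     # known source labels
xt = np.sort(rng.uniform(0.0, 1.0, 150)) + 1.0    # shifted target inputs (no labels)
# Hidden ground truth, used only to check the result: y_t = 2 * (x_t - 1).

a = np.full(len(xs), 1.0 / len(xs))
b = np.full(len(xt), 1.0 / len(xt))
alpha = 1.0                                       # weight on the feature cost
coef = np.polyfit(xs, ys, 1)                      # initial f fitted on source only

for _ in range(3):
    f_t = np.polyval(coef, xt)
    # Joint cost: feature distance plus label loss under the current f.
    C = alpha * (xs[:, None] - xt[None, :]) ** 2 + (ys[:, None] - f_t[None, :]) ** 2
    G = sinkhorn(a, b, C / C.max())
    # Refit f on target inputs against transported (barycentric) source labels.
    y_hat = (G.T @ ys) / G.sum(axis=0)
    coef = np.polyfit(xt, y_hat, 1)
```

The source-only fit predicts f(2) = 4, while the true target value is 2; after the alternation the refitted linear model recovers the shifted relationship.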
Joint cross-domain classification and subspace learning for unsupervised adaptation
Domain adaptation aims at adapting the knowledge acquired on a source domain
to a new, different but related target domain. Several approaches have been
proposed for classification tasks in the unsupervised scenario, where no
labeled target data are available. Most of the attention has been dedicated to
searching for a new domain-invariant representation, leaving the definition of
the prediction function to a second stage. Here we propose to learn both
jointly. Specifically, we learn the source subspace that best matches the
target subspace while at the same time minimizing a regularized
misclassification loss. We provide an alternating optimization technique based
on stochastic sub-gradient descent to solve the learning problem, and we
demonstrate its performance on several domain adaptation tasks.
Comment: Paper is under consideration at Pattern Recognition Letters
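The joint idea can be caricatured in numpy on 2-D data: alternate gradient steps on a shared 1-D projection (pushed to match the projected domain means) and on a logistic classifier over the projected source. This is an illustrative toy, not the paper's subspace algorithm; all data and parameters are made up.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n = 400
# Dim 0 carries the class signal in both domains; dim 1 is a domain nuisance.
ys = (rng.random(n) < 0.5).astype(float)
Xs = np.stack([np.where(ys > 0, 1.0, -1.0) + 0.3 * rng.normal(size=n),
               0.1 * rng.normal(size=n)], axis=1)
yt = (rng.random(n) < 0.5).astype(float)  # target labels held out, checking only
Xt = np.stack([np.where(yt > 0, 1.0, -1.0) + 0.3 * rng.normal(size=n),
               3.0 + 0.1 * rng.normal(size=n)], axis=1)

v = np.array([1.0, 1.0]) / np.sqrt(2.0)   # shared 1-D projection (the "subspace")
theta = 0.0                               # classifier on the projected feature
lam, lr = 1.0, 0.05

for _ in range(300):
    # Classifier step: logistic loss on projected source data.
    p = sigmoid(theta * (Xs @ v))
    theta -= lr * np.mean((p - ys) * (Xs @ v))
    # Projection step: match projected domain means AND keep source separable.
    diff = Xs.mean(axis=0) - Xt.mean(axis=0)
    grad = 2.0 * lam * (diff @ v) * diff
    grad += np.mean(((sigmoid(theta * (Xs @ v)) - ys) * theta)[:, None] * Xs, axis=0)
    v -= lr * grad
    v /= np.linalg.norm(v)
```

The mean-matching term drives the projection away from the nuisance direction while the loss term preserves the discriminative one, so the classifier learned on the projected source transfers to the projected target.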