A review of domain adaptation without target labels
Domain adaptation has become a prominent problem setting in machine learning
and related fields. This review asks the question: how can a classifier learn
from a source domain and generalize to a target domain? We present a
categorization of approaches into what we refer to as sample-based,
feature-based, and inference-based methods. Sample-based methods weight
individual observations during training according to their importance to the
target domain. Feature-based methods map, project, or otherwise represent
features such that a source classifier performs well on the target domain.
Inference-based methods incorporate adaptation into the parameter estimation
procedure, for instance through constraints on the
optimization. Additionally, we review a number of conditions that
allow for formulating bounds on the cross-domain generalization error. Our
categorization highlights recurring ideas and raises questions important to
further research.
Comment: 20 pages, 5 figures
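The sample-based idea above can be made concrete with a standard density-ratio trick: train a domain discriminator and convert its probabilities into importance weights for the source samples. A minimal sketch, assuming Gaussian toy data and a logistic discriminator (the function name and data are illustrative, not from the review):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def importance_weights(X_source, X_target):
    """Estimate w(x) = p_target(x) / p_source(x) for each source sample
    via a logistic domain discriminator that models P(domain=target | x)."""
    X = np.vstack([X_source, X_target])
    d = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])
    disc = LogisticRegression(max_iter=1000).fit(X, d)
    p = disc.predict_proba(X_source)[:, 1]          # P(target | x)
    # Bayes' rule: p_t(x)/p_s(x) = (P(t|x)/P(s|x)) * (n_s/n_t)
    return (p / (1.0 - p)) * (len(X_source) / len(X_target))

rng = np.random.default_rng(0)
X_s = rng.normal(0.0, 1.0, size=(500, 2))   # source: mean (0, 0)
X_t = rng.normal(1.0, 1.0, size=(500, 2))   # target: shifted mean (1, 1)
w = importance_weights(X_s, X_t)
# Source points lying closer to the target distribution receive larger
# weights, so a weighted source loss approximates the target risk.
```

The resulting weights can be passed to the `sample_weight` argument that most scikit-learn estimators accept, which is exactly the weighted-training step that sample-based methods describe.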
From source to target and back: symmetric bi-directional adaptive GAN
The effectiveness of generative adversarial approaches in producing images
according to a specific style or visual domain has recently opened new
directions to solve the unsupervised domain adaptation problem. It has been
shown that labeled source images can be modified to mimic target samples,
making it possible to train a classifier directly in the target domain despite the
original lack of annotated data. Inverse mappings from the target to the source
domain have also been evaluated but only passing through adapted feature
spaces, thus without new image generation. In this paper we propose to better
exploit the potential of generative adversarial networks for adaptation by
introducing a novel symmetric mapping among domains. We jointly optimize
bi-directional image transformations combining them with target self-labeling.
Moreover, we define a new class consistency loss that aligns the generators in
the two directions by requiring that the class identity of an image be
preserved as it passes through both domain mappings. A detailed qualitative
and quantitative analysis of the reconstructed images confirms the strength of
our approach. By integrating the two domain-specific classifiers obtained with
our bi-directional network, we exceed previous state-of-the-art unsupervised
adaptation results on four different benchmark datasets.
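The class consistency idea can be sketched as a plain cross-entropy term: classify the round-trip image G_t→s(G_s→t(x)) and penalize disagreement with the original source label. A minimal NumPy sketch under that assumption (the function name and shapes are illustrative; the paper's actual loss sits inside full adversarial training):

```python
import numpy as np

def class_consistency_loss(logits_roundtrip, labels):
    """Cross-entropy between a classifier's predictions on round-trip
    images (source -> target -> source) and the original source labels.
    logits_roundtrip: (batch, num_classes); labels: (batch,) int."""
    z = logits_roundtrip - logits_roundtrip.max(axis=1, keepdims=True)  # stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Confident, class-preserving round trips give a near-zero loss ...
good = class_consistency_loss(np.array([[8.0, 0.0], [0.0, 8.0]]), np.array([0, 1]))
# ... while round trips that flip the class are penalized heavily.
bad = class_consistency_loss(np.array([[8.0, 0.0], [0.0, 8.0]]), np.array([1, 0]))
```

Minimizing this term pushes both generators to keep images on the same side of the classifier's decision boundary, which is what "conserving class identity" amounts to.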
Incremental Unsupervised Domain-Adversarial Training of Neural Networks
In the context of supervised statistical learning, it is typically assumed
that the training and test sets are drawn from the same distribution. When
this is not the case, the behavior of the learned model is
unpredictable and becomes dependent upon the degree of similarity between the
distribution of the training set and the distribution of the test set. One of
the research topics that investigates this scenario is referred to as domain
adaptation. Deep neural networks have brought dramatic advances in pattern
recognition, which is why there have been many attempts to develop effective
domain adaptation algorithms for these models. Here we take a different avenue
and approach the problem from an incremental point of view, where the model is
adapted to the new domain iteratively. We make use of an existing unsupervised
domain-adaptation algorithm and identify the target samples for which the
model is most confident about the predicted label. The output of the model is analyzed
in different ways to determine the candidate samples. The selected set is then
added to the source training set by considering the labels provided by the
network as ground truth, and the process is repeated until all target samples
are labelled. Our results show a clear improvement over the non-incremental
case on several datasets, also outperforming other state-of-the-art domain
adaptation algorithms.
Comment: 26 pages, 7 figures
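The incremental loop described above (predict on the target set, promote the most confident predictions into the training set with their pseudo-labels, retrain, repeat) can be sketched with any probabilistic classifier. A toy version assuming logistic regression and synthetic shifted data (batch size, data, and function names are illustrative, not the paper's setup):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def incremental_self_labeling(X_s, y_s, X_t, batch=50):
    """Repeatedly retrain, then move the most confidently predicted
    target samples into the training set, using the network's own
    predictions as ground-truth labels."""
    X_train, y_train = X_s.copy(), y_s.copy()
    remaining = X_t.copy()
    clf = LogisticRegression(max_iter=1000)
    while len(remaining):
        clf.fit(X_train, y_train)
        proba = clf.predict_proba(remaining)
        idx = np.argsort(proba.max(axis=1))[::-1][:batch]  # most confident first
        X_train = np.vstack([X_train, remaining[idx]])
        y_train = np.concatenate([y_train, proba[idx].argmax(axis=1)])
        remaining = np.delete(remaining, idx, axis=0)
    return clf.fit(X_train, y_train)

rng = np.random.default_rng(0)
# Source: two well-separated classes; target: same classes under covariate shift.
X_s = np.vstack([rng.normal(0, 0.5, (200, 2)), rng.normal((4, 0), 0.5, (200, 2))])
y_s = np.repeat([0, 1], 200)
X_t = X_s + np.array([0.0, 2.0])            # uniform shift keeps labels unchanged
clf = incremental_self_labeling(X_s, y_s, X_t)
acc = (clf.predict(X_t) == y_s).mean()
```

Taking only a small, high-confidence batch per round is the key design choice: it limits the damage a wrong pseudo-label can do before the next retraining step.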