Maximum Classifier Discrepancy for Unsupervised Domain Adaptation
In this work, we present a method for unsupervised domain adaptation. Many
adversarial learning methods train a domain classifier network to distinguish
features as coming from either the source or the target domain, and train a
feature generator network to fool the discriminator. Two problems exist with
these methods. First, the
domain classifier only tries to distinguish the features as a source or target
and thus does not consider task-specific decision boundaries between classes.
Therefore, a trained generator can generate ambiguous features near class
boundaries. Second, these methods aim to completely match the feature
distributions between different domains, which is difficult because each
domain has its own characteristics.
To solve these problems, we introduce a new approach that attempts to align
distributions of source and target by utilizing the task-specific decision
boundaries. We propose to maximize the discrepancy between two classifiers'
outputs to detect target samples that are far from the support of the source. A
feature generator learns to generate target features near the support to
minimize the discrepancy. Our method outperforms other methods on several
datasets of image classification and semantic segmentation. The code is
available at \url{https://github.com/mil-tokyo/MCD_DA}. (Accepted to CVPR
2018, oral presentation.)
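The discrepancy objective the abstract describes can be illustrated with a small sketch. This is not the authors' implementation (which trains networks adversarially in PyTorch); it only shows the quantity being maximized/minimized: the L1 distance between the class-probability outputs of two classifiers, which is large for samples the two classifiers disagree on (i.e., samples far from the source support).

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def discrepancy(logits1, logits2):
    """L1 discrepancy between two classifiers' class-probability outputs.

    In the method described above, the two classifiers are trained to
    MAXIMIZE this on target samples (detecting samples outside the
    source support), while the feature generator is trained to
    MINIMIZE it, pulling target features toward the source support.
    """
    p1, p2 = softmax(logits1), softmax(logits2)
    return np.abs(p1 - p2).mean()

# Toy example: the discrepancy is zero when the classifiers agree
# and large when they disagree on the same sample.
agree = discrepancy(np.array([[5.0, 0.0]]), np.array([[5.0, 0.0]]))
disagree = discrepancy(np.array([[5.0, 0.0]]), np.array([[0.0, 5.0]]))
assert agree < disagree
```

In the full method, the maximization and minimization steps alternate; here only the loss itself is shown.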
Open Set Domain Adaptation by Backpropagation
Numerous algorithms have been proposed for transferring knowledge from a
label-rich domain (source) to a label-scarce domain (target). Almost all of
them are proposed for a closed-set scenario, where the source and the target
domain completely share the class of their samples. We call the shared class
the "known class." However, in practice, when samples in the target
domain are not labeled, we cannot know whether the domains share the class. A
target domain can contain samples of classes that are not shared by the source
domain. We call such classes the "unknown class," and algorithms
that work well in the open set situation are very practical. However, most
existing distribution matching methods for domain adaptation do not work well
in this setting because unknown target samples should not be aligned with the
source.
In this paper, we propose a method for an open set domain adaptation scenario
which utilizes adversarial training. A classifier is trained to make a boundary
between the source and the target samples whereas a generator is trained to
make target samples far from the boundary. Thus, the feature generator has two
options for each target sample: aligning it with the known source classes or
rejecting it as unknown. This approach allows extracting features that separate
unknown target samples from known target samples. Our method was extensively
evaluated in domain adaptation settings and outperformed other methods by a
large margin in most settings. (Accepted by ECCV 2018.)
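The adversarial boundary training described above can be sketched as follows. This is an illustrative simplification, not the paper's implementation: it assumes a scalar P(unknown) output and a fixed boundary value t = 0.5 (the threshold name and value here are assumptions of this sketch), and shows only the classifier-side loss that places target samples on the boundary.

```python
import numpy as np

T = 0.5  # assumed boundary on P(unknown); illustrative value

def classifier_loss(p_unknown):
    """Cross-entropy pulling P(unknown) for a target sample toward T.

    The classifier minimizes this, drawing a boundary between source
    and target. The generator, trained adversarially (e.g., via a
    gradient-reversal layer), instead pushes P(unknown) AWAY from T:
    toward 0 (align the sample with known source classes) or toward 1
    (reject it as an unknown target sample).
    """
    eps = 1e-12  # numerical safety for log(0)
    return -(T * np.log(p_unknown + eps)
             + (1 - T) * np.log(1 - p_unknown + eps))

# The loss is smallest exactly on the boundary P(unknown) == T,
# so moving away from T in either direction increases it.
assert classifier_loss(0.5) < classifier_loss(0.9)
assert classifier_loss(0.5) < classifier_loss(0.1)
```

Because the loss has its minimum at the boundary, reversing its gradient for the generator gives the generator exactly the two options described in the abstract.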