Maximum Classifier Discrepancy for Unsupervised Domain Adaptation
In this work, we present a method for unsupervised domain adaptation. Many
adversarial learning methods train a domain classifier network to distinguish
features as coming from either the source or the target, and train a feature
generator network to fool that classifier. Two problems exist with these
methods. First, the domain classifier only tries to distinguish the features
as source or target and thus does not consider task-specific decision
boundaries between classes. Therefore, a trained generator can produce
ambiguous features near class boundaries. Second, these methods aim to
completely match the feature distributions between domains, which is
difficult because each domain has its own characteristics.
To solve these problems, we introduce a new approach that attempts to align
distributions of source and target by utilizing the task-specific decision
boundaries. We propose to maximize the discrepancy between two classifiers'
outputs to detect target samples that are far from the support of the source. A
feature generator learns to generate target features near the support to
minimize the discrepancy. Our method outperforms existing methods on several
image classification and semantic segmentation datasets. Code is available at
\url{https://github.com/mil-tokyo/MCD_DA}
Comment: Accepted to CVPR 2018 (Oral)
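For concreteness, the adversarial interplay the abstract describes can be
written as a three-step update: train on labelled source data, maximize the
classifier discrepancy with the generator fixed, then minimize it with the
classifiers fixed. Below is a minimal PyTorch sketch of that loop under our
own assumptions; the names (G, F1, F2, opt_g, opt_f, xs, ys, xt) and the
number of generator updates num_k are hypothetical placeholders, not the
authors' released code. The discrepancy follows the paper's definition: the
L1 distance between the two classifiers' softmax outputs.

import torch
import torch.nn.functional as F

def discrepancy(logits1, logits2):
    # L1 distance between the two classifiers' probabilistic outputs.
    return (logits1.softmax(dim=1) - logits2.softmax(dim=1)).abs().mean()

def mcd_step(G, F1, F2, opt_g, opt_f, xs, ys, xt, num_k=4):
    # Step A: train the generator and both classifiers on labelled source data.
    opt_g.zero_grad(); opt_f.zero_grad()
    feat_s = G(xs)
    loss_a = F.cross_entropy(F1(feat_s), ys) + F.cross_entropy(F2(feat_s), ys)
    loss_a.backward()
    opt_g.step(); opt_f.step()

    # Step B: fix G; train the classifiers to maximize their discrepancy on
    # target features while staying accurate on the source.
    opt_f.zero_grad()
    feat_s = G(xs).detach()
    with torch.no_grad():
        feat_t = G(xt)
    loss_b = (F.cross_entropy(F1(feat_s), ys) + F.cross_entropy(F2(feat_s), ys)
              - discrepancy(F1(feat_t), F2(feat_t)))
    loss_b.backward()
    opt_f.step()

    # Step C: fix the classifiers; train G to minimize the discrepancy,
    # pulling target features inside the source support. num_k (the number of
    # generator updates per step) is a tunable hyperparameter.
    for _ in range(num_k):
        opt_g.zero_grad()
        feat_t = G(xt)
        loss_c = discrepancy(F1(feat_t), F2(feat_t))
        loss_c.backward()
        opt_g.step()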
Domain Conditioned Adaptation Network
Tremendous research efforts have been made to advance deep domain adaptation
(DA) by seeking domain-invariant features. Most existing deep DA models only
focus on aligning feature representations of task-specific layers across
domains while sharing the entire convolutional architecture between source
and target. However, we argue that such strongly-shared convolutional layers
may be harmful for domain-specific feature learning when the source and
target data distributions differ to a large extent. In this paper, we relax
the shared-convnets assumption made by previous DA methods and propose a
Domain Conditioned Adaptation Network (DCAN), which aims to excite distinct
convolutional channels with a domain conditioned channel attention mechanism.
As a result, the critical low-level domain-dependent knowledge can be
explored appropriately. To the best of our knowledge, this is the first work
to explore domain-wise convolutional channel activation for deep DA networks.
Moreover, to effectively align high-level feature distributions across two
domains, we further deploy domain conditioned feature correction blocks after
task-specific layers, which will explicitly correct the domain discrepancy.
Extensive experiments on three cross-domain benchmarks demonstrate that the
proposed approach outperforms existing methods by a large margin, especially
on very tough cross-domain learning tasks.
Comment: Accepted by AAAI 2020
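The domain conditioned channel attention mechanism can be pictured as a
squeeze-and-excitation block whose excitation branch depends on which domain
the input comes from. Below is a minimal PyTorch sketch of one such module,
written from the abstract's description alone; the class name, the two-branch
layout, and the reduction ratio are our own illustrative assumptions, not the
authors' released implementation.

import torch.nn as nn

class DomainConditionedChannelAttention(nn.Module):
    # Reweights convolutional channels with a domain-dependent gate.
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Separate excitation branches let source and target inputs
        # activate different convolutional channels.
        def excitation():
            return nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid())
        self.source_gate = excitation()
        self.target_gate = excitation()

    def forward(self, x, from_source):
        b, c, _, _ = x.shape
        context = self.pool(x).view(b, c)          # squeeze: global context
        gate = self.source_gate if from_source else self.target_gate
        weights = gate(context).view(b, c, 1, 1)   # excite: channel weights
        return x * weights                         # domain-conditioned reweighting

One plausible reading of the feature correction blocks mentioned above is a
small residual branch after each task-specific layer that learns an additive
correction applied only to target features.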
Domain Generalization by Solving Jigsaw Puzzles
Human adaptability relies crucially on the ability to learn and merge
knowledge from both supervised and unsupervised learning: parents point out a
few important concepts, but then children fill in the gaps on their own. This
is particularly effective because supervised learning can never be
exhaustive, so learning autonomously allows one to discover invariances and
regularities that help to generalize. In this paper we propose to apply a
similar approach to the task of object recognition across domains: our model
learns the semantic labels in a supervised fashion, and broadens its
understanding of the data by learning from self-supervised signals how to solve
a jigsaw puzzle on the same images. This secondary task helps the network to
learn the concepts of spatial correlation while acting as a regularizer for the
classification task. Multiple experiments on the PACS, VLCS, Office-Home and
digits datasets confirm our intuition and show that this simple method
outperforms previous domain generalization and adaptation solutions. An
ablation study further illustrates the inner workings of our approach.
Comment: Accepted at CVPR 2019 (Oral)
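The auxiliary objective described above boils down to a weighted sum of two
cross-entropy losses: one for the semantic labels, and one for recognizing
which tile permutation was applied to a shuffled copy of the same image. The
sketch below illustrates this in PyTorch under our own assumptions;
backbone, cls_head, jig_head, the 3x3 grid, the fixed permutation set, and
the weight alpha are hypothetical placeholders, not the authors' released
code.

import torch
import torch.nn.functional as F

def shuffle_tiles(images, permutations):
    # Split each image into a 3x3 grid and reorder the tiles according to a
    # randomly chosen permutation; `permutations` is assumed to be a
    # LongTensor of shape (num_perms, 9). Returns the shuffled images and
    # the index of the permutation applied to each one.
    b, c, h, w = images.shape
    th, tw = h // 3, w // 3
    tiles = (images.unfold(2, th, th).unfold(3, tw, tw)  # (b, c, 3, 3, th, tw)
                   .reshape(b, c, 9, th, tw).permute(0, 2, 1, 3, 4))
    idx = torch.randint(len(permutations), (b,))
    shuffled = tiles[torch.arange(b).unsqueeze(1), permutations[idx]]
    shuffled = (shuffled.view(b, 3, 3, c, th, tw)
                        .permute(0, 3, 1, 4, 2, 5).reshape(b, c, h, w))
    return shuffled, idx

def joint_loss(backbone, cls_head, jig_head, images, labels, permutations,
               alpha=0.7):
    # Supervised object classification on the ordered images ...
    cls_loss = F.cross_entropy(cls_head(backbone(images)), labels)
    # ... plus self-supervised recognition of the applied permutation.
    shuffled, perm_idx = shuffle_tiles(images, permutations)
    jig_loss = F.cross_entropy(jig_head(backbone(shuffled)), perm_idx)
    return cls_loss + alpha * jig_loss

Because the jigsaw head only ever sees shuffled inputs, the shared backbone
is pushed to encode spatial part relationships, which is what regularizes the
semantic classifier across domains.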