3 research outputs found
Class-imbalanced Domain Adaptation: An Empirical Odyssey
Unsupervised domain adaptation is a promising way to generalize deep models
to novel domains. However, the current literature assumes that the label
distribution is domain-invariant and only aligns the feature distributions or
vice versa. In this work, we explore the more realistic task of
Class-imbalanced Domain Adaptation: How to align feature distributions across
domains while the label distributions of the two domains are also different?
Taking a practical step towards this problem, we constructed the first
benchmark with 22 cross-domain tasks from 6 real-image datasets. We conducted
comprehensive experiments on 10 recent domain adaptation methods and found that
most of them are very fragile in the face of coexisting feature and label
distribution shift. Towards a better solution, we further propose a feature
and label distribution CO-ALignment (COAL) model with a novel combination of
existing ideas. COAL is empirically shown to outperform the most recent domain
adaptation methods on our benchmarks. We believe the provided benchmarks,
empirical analysis results, and the COAL baseline could stimulate and
facilitate future research towards this important problem.
Comment: ECCV 2020 Workshops - TASK-CV 2020
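As a concrete illustration of the co-alignment idea, the sketch below pairs standard feature alignment with a label-shift correction: the target label distribution is estimated from pseudo-labels and used to reweight the source classification loss by class. This is a minimal sketch of the general recipe under assumed PyTorch framing, not the authors' released implementation; the helper names (`estimate_target_label_dist`, `label_shift_weighted_loss`) are illustrative.

```python
# Minimal sketch (assumed PyTorch) of aligning the label distribution
# alongside the features: the source classification loss is reweighted so
# that its effective label distribution matches the one estimated on the
# target domain via pseudo-labels.
import torch
import torch.nn.functional as F

def estimate_target_label_dist(pseudo_labels: torch.Tensor,
                               num_classes: int) -> torch.Tensor:
    """Empirical class frequencies of the current target pseudo-labels."""
    counts = torch.bincount(pseudo_labels, minlength=num_classes).float()
    return counts / counts.sum().clamp(min=1.0)

def label_shift_weighted_loss(source_logits: torch.Tensor,
                              source_labels: torch.Tensor,
                              source_dist: torch.Tensor,
                              target_dist_est: torch.Tensor,
                              eps: float = 1e-8) -> torch.Tensor:
    """Cross-entropy on source data, with each class weighted by the
    estimated target/source frequency ratio (label-shift correction)."""
    class_weights = target_dist_est / (source_dist + eps)
    per_sample = F.cross_entropy(source_logits, source_labels, reduction="none")
    return (class_weights[source_labels] * per_sample).mean()
```

In a full training loop, this term would be combined with a feature-alignment loss and with self-training on the pseudo-labeled target data, refreshing the pseudo-labels (and hence the estimated target distribution) periodically.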
Implicit Class-Conditioned Domain Alignment for Unsupervised Domain Adaptation
We present an approach for unsupervised domain adaptation from a
class-conditioned domain alignment perspective, with a strong focus on the
practical considerations of within-domain class imbalance and between-domain
class distribution shift. Current methods for class-conditioned domain alignment
aim to explicitly minimize a loss function based on pseudo-label estimations of
the target domain. However, these methods suffer from pseudo-label bias in the
form of error accumulation. We propose a method that removes the need for
explicitly optimizing model parameters from pseudo-labels. Instead,
we present a sampling-based implicit alignment approach, where the sample
selection procedure is implicitly guided by the pseudo-labels. Theoretical
analysis reveals the existence of a domain-discriminator shortcut in misaligned
classes, which is addressed by the proposed implicit alignment approach to
facilitate domain-adversarial learning. Empirical results and ablation studies
confirm the effectiveness of the proposed approach, especially in the presence
of within-domain class imbalance and between-domain class distribution shift.
Comment: Accepted at ICML 2020. For code, see
https://github.com/xiangdal/implicit_alignment
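To make the sampling-based idea concrete, here is a hedged sketch of one way to realize implicit alignment: instead of adding a pseudo-label loss term, target examples are drawn through a class-balanced sampler built from the current pseudo-labels, so the domain adversary sees approximately class-aligned batches. The function name `pseudo_balanced_sampler` is illustrative, not taken from the linked repository.

```python
# Hedged sketch: implicit class-conditioned alignment via sampling.
# Pseudo-labels steer WHICH target examples are drawn, rather than
# entering the loss directly, so pseudo-label errors are not
# back-propagated as an explicit training signal.
import torch
from torch.utils.data import WeightedRandomSampler

def pseudo_balanced_sampler(pseudo_labels: torch.Tensor,
                            num_classes: int) -> WeightedRandomSampler:
    """Build a sampler that draws each pseudo-class roughly equally often."""
    counts = torch.bincount(pseudo_labels, minlength=num_classes).float()
    counts = counts.clamp(min=1.0)                      # avoid divide-by-zero
    per_example_weight = 1.0 / counts[pseudo_labels]    # rare classes upweighted
    return WeightedRandomSampler(weights=per_example_weight.tolist(),
                                 num_samples=len(pseudo_labels),
                                 replacement=True)
```

A matching class-balanced sampler on the labeled source side would then give the domain discriminator batches whose class composition agrees across domains, which is the condition the paper's shortcut analysis motivates.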
Beyond $\mathcal{H}$-Divergence: Domain Adaptation Theory With Jensen-Shannon Divergence
We reveal the incoherence between the widely-adopted empirical domain
adversarial training and its generally-assumed theoretical counterpart based on
$\mathcal{H}$-divergence. Concretely, we find that $\mathcal{H}$-divergence is
not equivalent to Jensen-Shannon divergence, the optimization objective in
domain adversarial training. To this end, we establish a new theoretical
framework by directly proving the upper and lower target risk bounds based on
joint distributional Jensen-Shannon divergence. We further derive
bi-directional upper bounds for marginal and conditional shifts. Our framework
exhibits inherent flexibility for different transfer learning problems and is
usable in various scenarios where $\mathcal{H}$-divergence-based theory
fails to adapt. From an algorithmic perspective, our theory enables a generic
guideline unifying principles of semantic conditional matching, feature
marginal matching, and label marginal shift correction. We employ algorithms
for each principle and empirically validate the benefits of our framework on
real datasets.
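For intuition on why domain-adversarial training connects to this theory: with the standard binary cross-entropy objective, the optimal domain discriminator's value is an affine function of the Jensen-Shannon divergence between the source and target feature distributions (the classic GAN identity). The sketch below restates that objective; the names and the PyTorch framing are assumptions, not the paper's code.

```python
# Sketch (assumed PyTorch) of the standard domain-discriminator objective.
# For a fixed feature extractor, maximizing
#     E_s[log D(f)] + E_t[log(1 - D(f))]
# over D attains 2 * JSD(p_s, p_t) - 2 * log 2, so adversarially training
# the features against D reduces exactly the Jensen-Shannon divergence
# that the bounds above are stated in terms of.
import torch
import torch.nn.functional as F

def domain_discriminator_loss(discriminator: torch.nn.Module,
                              source_feats: torch.Tensor,
                              target_feats: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy loss for the domain discriminator (source
    labeled 1, target labeled 0); its minimum over the discriminator
    equals 2 * log 2 - 2 * JSD(p_s, p_t)."""
    logits_s = discriminator(source_feats)
    logits_t = discriminator(target_feats)
    loss_s = F.binary_cross_entropy_with_logits(logits_s, torch.ones_like(logits_s))
    loss_t = F.binary_cross_entropy_with_logits(logits_t, torch.zeros_like(logits_t))
    return loss_s + loss_t
```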