47 research outputs found
Adversarial Domain Adaptation with Domain Mixup
Recent works on domain adaptation reveal the effectiveness of adversarial
learning in bridging the discrepancy between source and target domains. However,
two common limitations exist in current adversarial-learning-based methods.
First, samples from the two domains alone are not sufficient to ensure
domain invariance over most of the latent space. Second, the domain
discriminator involved in these methods can only judge real or fake under the
guidance of hard labels, whereas it is more reasonable to use soft scores to
evaluate the generated images or features, i.e., to fully utilize the
inter-domain information. In this paper, we present adversarial domain
adaptation with domain mixup (DM-ADA), which guarantees domain-invariance in a
more continuous latent space and guides the domain discriminator in judging
samples' difference relative to source and target domains. Domain mixup is
jointly conducted on pixel and feature level to improve the robustness of
models. Extensive experiments prove that the proposed approach can achieve
superior performance on tasks with various degrees of domain shift and data
complexity.

Comment: Accepted as oral presentation at the 34th AAAI Conference on Artificial
Intelligence, 2020
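The domain mixup operation described in this abstract can be sketched as a simple convex combination of a source and a target sample, with the mixing ratio doubling as the soft domain label for the discriminator. This is an illustrative sketch, not the authors' implementation; the helper name and the Beta-distribution parameter are assumptions:

```python
import numpy as np

def domain_mixup(x_src, x_tgt, alpha=2.0, rng=None):
    """Mix a source and a target sample with a Beta-sampled ratio.

    Returns the mixed sample and the mixing ratio lam, which serves as
    a soft domain label (degree of "source-ness") that the domain
    discriminator can regress instead of a hard 0/1 label.
    Hypothetical helper illustrating the DM-ADA idea.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing ratio in (0, 1)
    x_mix = lam * x_src + (1.0 - lam) * x_tgt
    return x_mix, lam
```

Per the abstract, the same interpolation would be applied both to raw images (pixel level) and to encoder outputs (feature level), so the discriminator sees a continuum of mixed samples rather than only the two endpoints.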
Self-training Guided Adversarial Domain Adaptation For Thermal Imagery
Deep models trained on large-scale RGB image datasets have shown tremendous
success. It is important to apply such deep models to real-world problems.
However, these models suffer from a performance bottleneck under illumination
changes. Thermal IR cameras are more robust against such changes and can thus
be very useful for real-world problems. In order to investigate the efficacy of
combining the feature-rich visible spectrum and thermal image modalities, we
propose an unsupervised domain adaptation method which does not require
RGB-to-thermal image pairs. We employ the large-scale RGB dataset MS-COCO as the
source domain and the thermal dataset FLIR ADAS as the target domain to
demonstrate the results of our method. Although adversarial domain adaptation
methods aim to align the
distributions of source and target domains, simply aligning the distributions
cannot guarantee perfect generalization to the target domain. To this end, we
propose a self-training guided adversarial domain adaptation method to promote
the generalization capability of adversarial domain adaptation methods. To
perform self-training, pseudo labels are assigned to samples in the target
thermal domain to learn more generalized representations for the target domain.
Extensive experimental analyses show that our proposed method achieves better
results than the state-of-the-art adversarial domain adaptation methods. The
code and models are publicly available.

Comment: Accepted to the CVPR 2021 Perception Beyond the Visible Spectrum (PBVS)
workshop
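The self-training step described in this abstract amounts to keeping only the target-domain (thermal) predictions the source-trained classifier is confident about and reusing them as training labels. A minimal sketch of that selection, assuming softmax outputs and a confidence threshold (the threshold value and function name are illustrative, not from the paper):

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.9):
    """Assign pseudo labels to confident target-domain samples.

    probs: (N, C) array of softmax outputs from the classifier on
    target (thermal) images. Returns the indices of samples whose top
    class probability meets the threshold, together with their
    predicted labels, for use as pseudo labels in self-training.
    Illustrative sketch of the general pseudo-labeling recipe.
    """
    conf = probs.max(axis=1)        # top-class confidence per sample
    labels = probs.argmax(axis=1)   # predicted class per sample
    keep = conf >= threshold        # confident subset only
    return np.flatnonzero(keep), labels[keep]
```

The selected subset would then be mixed into the adversarial training objective so the feature extractor learns target-specific decision boundaries rather than relying on distribution alignment alone.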