From source to target and back: symmetric bi-directional adaptive GAN
The effectiveness of generative adversarial approaches in producing images
according to a specific style or visual domain has recently opened new
directions to solve the unsupervised domain adaptation problem. It has been
shown that source labeled images can be modified to mimic target samples, making
it possible to directly train a classifier in the target domain, despite the
original lack of annotated data. Inverse mappings from the target to the source
domain have also been evaluated but only passing through adapted feature
spaces, thus without new image generation. In this paper we propose to better
exploit the potential of generative adversarial networks for adaptation by
introducing a novel symmetric mapping among domains. We jointly optimize
bi-directional image transformations and combine them with target self-labeling.
Moreover, we define a new class consistency loss that aligns the generators in
the two directions by imposing that the class identity of an image is preserved
when it passes through both domain mappings. A detailed qualitative and
quantitative analysis of the reconstructed images confirms the power of our
approach. By integrating
the two domain specific classifiers obtained with our bi-directional network we
exceed previous state-of-the-art unsupervised adaptation results on four
different benchmark datasets.
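The class consistency idea described above can be sketched as follows. This is an illustrative toy, not the paper's GAN implementation: the generator names `g_st`/`g_ts`, the classifier, and the identity stand-ins are all hypothetical, and the real mappings would be trained adversarially.

```python
import numpy as np

def cross_entropy(probs, label):
    # Negative log-likelihood of the true class.
    return -np.log(probs[label] + 1e-12)

def class_consistency_loss(x_s, y_s, g_st, g_ts, classify_s):
    # Map a labeled source image to the target domain and back,
    # then require the source classifier to still predict its label.
    x_cycled = g_ts(g_st(x_s))
    return cross_entropy(classify_s(x_cycled), y_s)

# Toy stand-ins: identity "generators" and a fixed softmax classifier.
g_st = lambda x: x  # source -> target mapping (placeholder)
g_ts = lambda x: x  # target -> source mapping (placeholder)

def classify_s(x):
    # Trivial two-class softmax over hand-crafted logits.
    logits = np.array([x.sum(), 0.0])
    e = np.exp(logits - logits.max())
    return e / e.sum()

loss = class_consistency_loss(np.ones((2, 2)), 0, g_st, g_ts, classify_s)
```

With identity generators the cycled image is unchanged, so the loss is small; during training, the same term penalizes generator pairs that destroy class identity along the round trip.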
Domain Conditioned Adaptation Network
Tremendous research efforts have been made to advance deep domain adaptation
(DA) by seeking domain-invariant features. Most existing deep DA models only
focus on aligning feature representations of task-specific layers across
domains while integrating a totally shared convolutional architecture for
source and target. However, we argue that such strongly shared convolutional
layers may be harmful for domain-specific feature learning when the source and
target data distributions differ to a large extent. In this paper, we relax the
shared-convnets assumption made by previous DA methods and propose a Domain
Conditioned Adaptation Network (DCAN), which aims to excite distinct
convolutional channels with a domain conditioned channel attention mechanism.
As a result, the critical low-level domain-dependent knowledge could be
explored appropriately. As far as we know, this is the first work to explore
the domain-wise convolutional channel activation for deep DA networks.
Moreover, to effectively align high-level feature distributions across two
domains, we further deploy domain conditioned feature correction blocks after
task-specific layers, which will explicitly correct the domain discrepancy.
Extensive experiments on three cross-domain benchmarks demonstrate the proposed
approach outperforms existing methods by a large margin, especially on very
tough cross-domain learning tasks.
Comment: Accepted by AAAI 202
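A domain conditioned channel attention mechanism of the kind described above is commonly realized as an SE-style squeeze-and-excitation block whose excitation weights differ per domain. The sketch below is a minimal illustration under that assumption; the weight shapes, reduction ratio, and function names are illustrative, not the paper's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def domain_conditioned_attention(feat, w1, w2):
    # feat: (C, H, W) convolutional feature map.
    # Squeeze: global average pooling over spatial dims -> (C,).
    squeezed = feat.mean(axis=(1, 2))
    # Excite with a domain-specific two-layer bottleneck (w1, w2 are
    # selected per domain), producing per-channel gates in (0, 1).
    gates = sigmoid(w2 @ np.maximum(w1 @ squeezed, 0.0))
    # Reweight channels: channels useful for this domain are amplified,
    # others suppressed.
    return feat * gates[:, None, None]

C, r = 8, 2  # channels and bottleneck reduction ratio (illustrative)
rng = np.random.default_rng(0)
# One weight pair per domain; only the source pair is used here.
w1_src = rng.normal(size=(C // r, C))
w2_src = rng.normal(size=(C, C // r))
out = domain_conditioned_attention(rng.normal(size=(C, 4, 4)), w1_src, w2_src)
```

Keeping separate excitation weights per domain lets source and target activate different convolutional channels while the convolutions themselves stay shared.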