4,576 research outputs found
Improved Techniques for Adversarial Discriminative Domain Adaptation
Adversarial discriminative domain adaptation (ADDA) is an efficient framework
for unsupervised domain adaptation in image classification, where the source
and target domains are assumed to have the same classes, but no labels are
available for the target domain. We investigate whether we can improve
performance of ADDA with a new framework and new loss formulations. Following
the framework of semi-supervised GANs, we first extend the discriminator output
over the source classes, in order to model the joint distribution over domain
and task. We thus leverage the distribution over the source encoder
posteriors (which is fixed during adversarial training) and propose maximum
mean discrepancy (MMD) and reconstruction-based loss functions for aligning the
target encoder distribution to the source domain. We compare and provide a
comprehensive analysis of how our framework and loss formulations extend over
simple multi-class extensions of ADDA and other discriminative variants of
semi-supervised GANs. In addition, we introduce various forms of regularization
for stabilizing training, including treating the discriminator as a denoising
autoencoder and regularizing the target encoder with source examples to reduce
overfitting under a contraction mapping (i.e., when the target per-class
distributions are contracting during alignment with the source). Finally, we
validate our framework on standard domain adaptation datasets, such as SVHN and
MNIST. We also examine how our framework benefits recognition problems based on
modalities that lack training data, by introducing and evaluating on a
neuromorphic vision sensing (NVS) sign language recognition dataset, where the
source and target domains constitute emulated and real neuromorphic spike
events, respectively. Our results on all datasets show that our proposal
competes with or outperforms the state-of-the-art in unsupervised domain adaptation.
Comment: To appear in IEEE Transactions on Image Processing
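The abstract above proposes maximum mean discrepancy (MMD) losses for aligning the target encoder distribution to the (fixed) source distribution. A minimal NumPy sketch of the biased squared-MMD estimator with an RBF kernel; the kernel choice, bandwidth, and estimator variant here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared Euclidean distances, mapped through an RBF kernel.
    sq = (X ** 2).sum(1)[:, None] + (Y ** 2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=1.0):
    """Biased (V-statistic) estimate of squared MMD between samples X and Y.

    Zero when X and Y are identical; grows as the two sample
    distributions diverge in the kernel-induced feature space.
    """
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean())
```

In an ADDA-style setup, a term like `mmd2(target_latents, source_latents)` would be minimized with respect to the target encoder's parameters while the source encoder stays frozen.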
Crossing Generative Adversarial Networks for Cross-View Person Re-identification
Person re-identification (re-id) refers to matching pedestrians
across disjoint, non-overlapping camera views. The most effective way to
match these pedestrians undertaking significant visual variations is to seek
reliably invariant features that can describe the person of interest
faithfully. Most existing methods operate in a supervised manner, producing
discriminative features by relying on labeled image pairs in correspondence.
However, annotating pairwise images is prohibitively labor-intensive, and thus
impractical for large-scale camera networks. Moreover,
seeking comparable representations across camera views demands a flexible model
to address the complex distributions of images. In this work, we study the
co-occurrence statistical patterns between pairs of images, and propose a
crossing Generative Adversarial Network (Cross-GAN) for learning a joint
distribution over cross-image representations in an unsupervised manner. Given a
pair of person images, the proposed model consists of a variational
auto-encoder that encodes the pair into respective latent variables, a
cross-view alignment to reduce the view disparity, and an adversarial layer to
seek the joint distribution of latent representations. The learned latent
representations are well-aligned to reflect the co-occurrence patterns of
paired images. We empirically evaluate the proposed model against challenging
datasets, and our results show the importance of joint invariant features in
improving matching rates for person re-id, in comparison with semi- and
unsupervised state-of-the-art methods.
Comment: 12 pages. arXiv admin note: text overlap with arXiv:1702.03431 by
another author
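The Cross-GAN pipeline above combines per-view encoders, a cross-view alignment term, and an adversarial layer over the latent representations. The toy NumPy sketch below caricatures how those loss terms compose; the tanh linear encoders, the L2 alignment penalty, and the logistic discriminator are illustrative assumptions standing in for the paper's VAE and adversarial components, not its actual architecture:

```python
import numpy as np

def encode(x, W):
    # Toy linear encoder standing in for one view's VAE mean branch.
    return np.tanh(x @ W)

def cross_gan_losses(xa, xb, Wa, Wb, Wd):
    """Compute (alignment loss, discriminator loss) for a batch of image pairs.

    xa, xb : features of the paired images from the two camera views.
    Wa, Wb : per-view encoder weights; Wd : discriminator weights.
    """
    za, zb = encode(xa, Wa), encode(xb, Wb)
    # Cross-view alignment: pull paired latents together to reduce view disparity.
    align = ((za - zb) ** 2).mean()
    # Adversarial layer: a logistic discriminator tries to tell which view a
    # latent came from; the encoders would be trained to fool it (the
    # adversarial update itself is omitted in this sketch).
    logits = np.concatenate([za, zb]) @ Wd
    labels = np.concatenate([np.ones(len(za)), np.zeros(len(zb))])
    p = 1.0 / (1.0 + np.exp(-logits))
    d_loss = -(labels * np.log(p + 1e-9)
               + (1 - labels) * np.log(1 - p + 1e-9)).mean()
    return align, d_loss
```

A full training loop would alternate discriminator updates on `d_loss` with encoder updates on the alignment and reconstruction terms (plus a term that inverts the discriminator's objective), which is the adversarial game the abstract describes.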