Semi-Supervised Learning by Augmented Distribution Alignment
In this work, we propose a simple yet effective semi-supervised learning
approach called Augmented Distribution Alignment. We show that an inherent
sampling bias exists in semi-supervised learning due to the limited number of
labeled samples, which often leads to a considerable empirical distribution
mismatch between labeled and unlabeled data. To alleviate this bias, we
propose to align the empirical distributions of labeled and unlabeled data.
On one hand, we adopt an adversarial training strategy, inspired by work on
domain adaptation, to minimize the distribution distance between labeled and
unlabeled data. On the other hand, to deal with the small sample size of the
labeled data, we also propose a simple interpolation strategy to generate
pseudo training samples. These two strategies can be easily incorporated into
existing deep neural networks. We demonstrate the
effectiveness of our proposed approach on the benchmark SVHN and CIFAR10
datasets. Our code is available at \url{https://github.com/qinenergy/adanet}.
Comment: To appear in ICCV 2019
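The interpolation strategy described above can be sketched as a mixup-style linear combination of labeled and unlabeled inputs. The function name, the Beta-distributed mixing coefficient, and its parameter are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def interpolate_pseudo_samples(x_labeled, x_unlabeled, alpha=1.0, rng=None):
    """Generate pseudo training samples by linearly interpolating
    labeled and unlabeled inputs (a mixup-style sketch).

    Mixing coefficients lam are drawn from Beta(alpha, alpha), a
    common choice in interpolation-based augmentation; this is an
    assumption for illustration, not the paper's exact scheme.
    """
    if rng is None:
        rng = np.random.default_rng()
    # One coefficient per sample, broadcast across feature dimensions.
    lam = rng.beta(alpha, alpha, size=(len(x_labeled), 1))
    return lam * x_labeled + (1.0 - lam) * x_unlabeled
```

Each pseudo sample lies on the line segment between a labeled and an unlabeled point, which enlarges the effective labeled set while pulling the two empirical distributions toward each other.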
Select, Label, and Mix: Learning Discriminative Invariant Feature Representations for Partial Domain Adaptation
Partial domain adaptation, which assumes that the unknown target label space
is a subset of the source label space, has attracted much attention in computer
vision. Despite recent progress, existing methods often suffer from three key
problems: negative transfer, lack of discriminability, and lack of domain
invariance in the latent space. To alleviate these issues, we develop a novel
"Select, Label, and Mix" (SLM) framework that aims to learn discriminative invariant
feature representations for partial domain adaptation. First, we present a
simple yet efficient "select" module that automatically filters out the outlier
source samples to avoid negative transfer while aligning distributions across
both domains. Second, the "label" module iteratively trains the classifier
using both the labeled source domain data and the generated pseudo-labels for
the target domain to enhance the discriminability of the latent space. Finally,
the "mix" module utilizes domain mixup regularization jointly with the other
two modules to explore more intrinsic structures across domains leading to a
domain-invariant latent space for partial domain adaptation. Extensive
experiments on several benchmark datasets demonstrate the superiority of our
proposed framework over state-of-the-art methods.
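The "mix" module's domain mixup regularization can be sketched as interpolating source and target samples, with the mixing ratio acting as a soft domain label for an auxiliary alignment loss. The function name and the Beta prior are assumptions for illustration, not the SLM authors' code:

```python
import numpy as np

def domain_mixup(x_source, x_target, alpha=2.0, rng=None):
    """Mix source and target samples for domain mixup regularization.

    Returns the mixed batch and the per-sample mixing ratio lam,
    which can serve as a soft domain label when training an
    alignment objective. Beta(alpha, alpha) sampling is an
    illustrative assumption.
    """
    if rng is None:
        rng = np.random.default_rng()
    # One ratio per sample pair, broadcast across feature dimensions.
    lam = rng.beta(alpha, alpha, size=(len(x_source), 1))
    x_mix = lam * x_source + (1.0 - lam) * x_target
    return x_mix, lam
```

Because the mixed samples populate the region between the two domains, a model regularized on them is encouraged to behave smoothly across domains, which is one plausible reading of how the module promotes a domain-invariant latent space.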