AlignFlow: Cycle Consistent Learning from Multiple Domains via Normalizing Flows
Given datasets from multiple domains, a key challenge is to efficiently
exploit these data sources for modeling a target domain. Variants of this
problem have been studied in many contexts, such as cross-domain translation
and domain adaptation. We propose AlignFlow, a generative modeling framework
that models each domain via a normalizing flow. The use of normalizing flows
allows for a) flexibility in specifying learning objectives via adversarial
training, maximum likelihood estimation, or a hybrid of the two methods; and b)
learning and exact inference of a shared representation in the latent space of
the generative model. We derive a uniform set of conditions under which
AlignFlow is marginally-consistent for the different learning objectives.
Furthermore, we show that AlignFlow guarantees exact cycle consistency in
mapping datapoints from a source domain to target and back to the source
domain. Empirically, AlignFlow outperforms relevant baselines on image-to-image
translation and unsupervised domain adaptation and can be used to
simultaneously interpolate across the various domains using the learned
representation.

Comment: AAAI 2020
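The exact cycle consistency claimed in the abstract follows directly from invertibility: if each domain is modeled by an invertible flow into a shared latent space, the cross-domain map is a composition of one flow with the inverse of the other, so a round trip is the identity. A minimal sketch of this idea, using toy per-dimension affine flows (the class and function names here are illustrative, not the authors' code):

```python
import numpy as np

class AffineFlow:
    """Toy invertible flow: z = s * x + t, applied per dimension."""
    def __init__(self, s, t):
        self.s = np.asarray(s, dtype=float)
        self.t = np.asarray(t, dtype=float)

    def forward(self, x):
        # map a domain sample into the shared latent space
        return self.s * x + self.t

    def inverse(self, z):
        # map a latent code back into the domain
        return (z - self.t) / self.s

# one flow per domain, both targeting the same latent space
f_A = AffineFlow(s=[2.0, 0.5], t=[1.0, -1.0])
f_B = AffineFlow(s=[0.3, 4.0], t=[0.0, 2.0])

def a_to_b(x):
    # G_{A->B} = f_B^{-1} o f_A
    return f_B.inverse(f_A.forward(x))

def b_to_a(y):
    # G_{B->A} = f_A^{-1} o f_B
    return f_A.inverse(f_B.forward(y))

x = np.array([3.0, -2.0])
x_cycle = b_to_a(a_to_b(x))  # equals x up to floating-point error
```

Because both cross-domain maps are built from exact inverses, no cycle-consistency penalty term is needed during training; the property holds by construction for any invertible flows, not just the affine toy used here.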
Semi-Supervised Learning by Augmented Distribution Alignment
In this work, we propose a simple yet effective semi-supervised learning
approach called Augmented Distribution Alignment. We reveal that an essential
sampling bias exists in semi-supervised learning due to the limited number of
labeled samples, which often leads to a considerable empirical distribution
mismatch between labeled data and unlabeled data. To address this, we propose to
align the empirical distributions of labeled and unlabeled data to alleviate
the bias. On one hand, we adopt an adversarial training strategy to minimize
the distribution distance between labeled and unlabeled data as inspired by
domain adaptation works. On the other hand, to deal with the small sample size
issue of labeled data, we also propose a simple interpolation strategy to
generate pseudo training samples. Those two strategies can be easily
implemented into existing deep neural networks. We demonstrate the
effectiveness of our proposed approach on the benchmark SVHN and CIFAR10
datasets. Our code is available at https://github.com/qinenergy/adanet.

Comment: To appear in ICCV 2019
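The interpolation strategy described in the abstract can be sketched as a mixup-style convex combination of a labeled and an unlabeled sample, producing a pseudo training sample that lies between the two empirical distributions. This is an assumed reading of the abstract, not the authors' implementation; the function name and the Beta mixing distribution are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def interpolate_pair(x_labeled, x_unlabeled, alpha=0.75):
    """Blend one labeled and one unlabeled sample into a pseudo sample.

    lam is drawn from Beta(alpha, alpha), so pseudo samples concentrate
    near the two originals for small alpha and near the midpoint for
    large alpha.
    """
    lam = rng.beta(alpha, alpha)  # mixing coefficient in [0, 1]
    x_mix = lam * x_labeled + (1.0 - lam) * x_unlabeled
    return x_mix, lam

# stand-in samples: all-ones "labeled", all-zeros "unlabeled"
x_l = np.ones(4)
x_u = np.zeros(4)
x_mix, lam = interpolate_pair(x_l, x_u)
# here x_mix equals lam in every coordinate, since it blends 1s with 0s
```

Generating such pseudo samples augments the small labeled set, which is the second half of the approach; the adversarial alignment between labeled and unlabeled distributions (not sketched here) supplies the other half.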