AlignFlow: Cycle Consistent Learning from Multiple Domains via Normalizing Flows
Given datasets from multiple domains, a key challenge is to efficiently
exploit these data sources for modeling a target domain. Variants of this
problem have been studied in many contexts, such as cross-domain translation
and domain adaptation. We propose AlignFlow, a generative modeling framework
that models each domain via a normalizing flow. The use of normalizing flows
allows for a) flexibility in specifying learning objectives via adversarial
training, maximum likelihood estimation, or a hybrid of the two methods; and b)
learning and exact inference of a shared representation in the latent space of
the generative model. We derive a uniform set of conditions under which
AlignFlow is marginally-consistent for the different learning objectives.
Furthermore, we show that AlignFlow guarantees exact cycle consistency in
mapping datapoints from a source domain to target and back to the source
domain. Empirically, AlignFlow outperforms relevant baselines on image-to-image
translation and unsupervised domain adaptation and can be used to
simultaneously interpolate across the various domains using the learned
representation.
Comment: AAAI 202
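The exact cycle consistency claimed above follows directly from invertibility: if each domain is modeled by an invertible flow into a shared latent space, then translating a datapoint to another domain and back composes a map with its exact inverse. A minimal numerical sketch, using toy affine maps as stand-ins for flows (the function names and maps are illustrative assumptions, not AlignFlow's architecture):

```python
import numpy as np

# Each "flow" maps a domain into a shared latent space Z and is
# exactly invertible. Translation A -> B composes A's forward map
# with B's inverse map; the round trip A -> B -> A is then the
# identity up to floating-point error.

def flow_a(x):            # domain A -> Z (toy invertible affine map)
    return 2.0 * x + 1.0

def flow_a_inv(z):        # Z -> domain A
    return (z - 1.0) / 2.0

def flow_b(y):            # domain B -> Z (toy invertible affine map)
    return -0.5 * y + 3.0

def flow_b_inv(z):        # Z -> domain B
    return (3.0 - z) / 0.5

def translate_a_to_b(x):
    return flow_b_inv(flow_a(x))

def translate_b_to_a(y):
    return flow_a_inv(flow_b(y))

x = np.array([0.3, -1.2, 4.0])
x_cycle = translate_b_to_a(translate_a_to_b(x))
print(np.allclose(x, x_cycle))  # True: exact cycle consistency
```

Real normalizing flows stack many such invertible layers, but the cycle-consistency argument is the same: no reconstruction loss is needed because the mapping is bijective by construction.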
Detach and Adapt: Learning Cross-Domain Disentangled Deep Representation
While representation learning aims to derive interpretable features for
describing visual data, representation disentanglement goes further by
producing features in which particular image attributes can be identified and
manipulated. However, this task cannot easily be addressed without
ground-truth annotations for the training data. To address this problem, we propose a novel
deep learning model of Cross-Domain Representation Disentangler (CDRD). By
observing fully annotated source-domain data and unlabeled target-domain data
of interest, our model bridges the information across data domains and
transfers the attribute information accordingly. Thus, cross-domain feature
disentanglement and adaptation can be performed jointly. In the
experiments, we provide qualitative results to verify our disentanglement
capability. Moreover, we further confirm that our model can be applied for
solving classification tasks of unsupervised domain adaptation, and performs
favorably against state-of-the-art image disentanglement and translation
methods.
Comment: CVPR 2018 Spotlight
A Novel BiLevel Paradigm for Image-to-Image Translation
Image-to-image (I2I) translation is a pixel-level mapping that requires a
large amount of paired training data and often suffers from the problems of
high diversity and strong category bias in image scenes. In order to tackle
these problems, we propose a novel BiLevel (BiL) learning paradigm that
alternates the learning of two models, respectively at an instance-specific
(IS) and a general-purpose (GP) level. In each scene, the IS model learns to
maintain the specific scene attributes. It is initialized by the GP model that
learns from all the scenes to obtain the generalizable translation knowledge.
This GP initialization gives the IS model an efficient starting point, thus
enabling its fast adaptation to the new scene with scarce training data. We
conduct extensive I2I translation experiments on human face and street view
datasets. Quantitative results validate that our approach can significantly
boost the performance of classical I2I translation models, such as PG2 and
Pix2Pix. Our visualization results show both higher image quality and more
appropriate instance-specific details, e.g., the translated image of a person
looks more like that person in terms of identity.
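The GP-to-IS initialization described above is, in spirit, pretraining followed by few-shot fine-tuning: a model trained across all scenes provides the starting weights for rapid adaptation on one scene's scarce data. A minimal sketch with a linear model and gradient descent (the model, names, and data are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

# Sketch of the BiLevel idea: a general-purpose (GP) model is fit on
# data pooled from many scenes; its weights initialize an
# instance-specific (IS) model that adapts with a few gradient steps
# on one new scene's scarce data.

rng = np.random.default_rng(0)

def fit_linear(X, y, w_init, steps, lr):
    """Plain gradient descent on mean squared error for y ~ X @ w."""
    w = w_init.copy()
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# GP level: plentiful data pooled across scenes.
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
X_all = rng.normal(size=(200, 5))
y_all = X_all @ w_true + 0.1 * rng.normal(size=200)
w_gp = fit_linear(X_all, y_all, np.zeros(5), steps=500, lr=0.05)

# IS level: a new scene with only 8 examples and slightly shifted
# attributes. Starting from w_gp, a few steps suffice; starting from
# scratch with the same budget adapts far less.
X_scene = rng.normal(size=(8, 5))
y_scene = X_scene @ (w_true + 0.2) + 0.1 * rng.normal(size=8)
w_is = fit_linear(X_scene, y_scene, w_gp, steps=20, lr=0.05)
w_cold = fit_linear(X_scene, y_scene, np.zeros(5), steps=20, lr=0.05)

err_is = np.linalg.norm(X_scene @ w_is - y_scene)
err_cold = np.linalg.norm(X_scene @ w_cold - y_scene)
print(err_is < err_cold)
```

The design point mirrors the abstract's claim: the GP initialization places the IS model near a good solution, so adaptation to a new scene needs only a small amount of scene-specific data.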
- …