Generalized Adversarially Learned Inference
Allowing effective inference of latent vectors while training GANs can
greatly increase their applicability in various downstream tasks. Recent
approaches, such as ALI and BiGAN frameworks, develop methods of inference of
latent variables in GANs by adversarially training an image generator along
with an encoder to match two joint distributions of image and latent vector
pairs. We generalize these approaches to incorporate multiple layers of
feedback on reconstructions, self-supervision, and other forms of supervision
based on prior or learned knowledge about the desired solutions. We achieve
this by modifying the discriminator's objective to correctly identify more than
two joint distributions of tuples of an arbitrary number of random variables
consisting of images, latent vectors, and other variables generated through
auxiliary tasks, such as reconstruction and inpainting, or as outputs of
suitable pre-trained models. We design a non-saturating maximization objective
for the generator-encoder pair and prove that the resulting adversarial game
has a global optimum that simultaneously matches all the
distributions. Within our proposed framework, we introduce a novel set of
techniques for providing self-supervised feedback to the model based on
properties, such as patch-level correspondence and cycle consistency of
reconstructions. Through comprehensive experiments, we demonstrate the
efficacy, scalability, and flexibility of the proposed approach for a variety
of tasks.
Comment: AAAI 2021 (accepted for publication).
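As a rough illustration of the multi-distribution objective described above, the sketch below (our own assumption-laden PyTorch rendering, not the authors' code) trains a discriminator to classify (image, latent) tuples into K joint distributions with cross-entropy, and gives the generator-encoder pair a non-saturating loss that pushes every synthetic tuple toward the real joint. K = 3, the choice of joints, and all network sizes are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

K = 3  # assumed joints, e.g. (x, E(x)), (G(z), z), (G(E(x)), E(x))

class TupleDiscriminator(nn.Module):
    """Scores an (image, latent) tuple with one logit per joint distribution."""
    def __init__(self, x_dim=784, z_dim=64, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + z_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, K),
        )

    def forward(self, x, z):
        return self.net(torch.cat([x.flatten(1), z], dim=1))

def discriminator_loss(disc, tuples):
    # correctly identify which joint distribution each tuple came from
    return sum(
        F.cross_entropy(disc(x, z), torch.full((x.size(0),), k, dtype=torch.long))
        for k, (x, z) in enumerate(tuples))

def generator_encoder_loss(disc, tuples):
    # non-saturating objective: push each synthetic tuple toward the
    # "real" joint (index 0) instead of merely lowering its own class score
    return sum(
        F.cross_entropy(disc(x, z), torch.zeros(x.size(0), dtype=torch.long))
        for x, z in tuples[1:])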
Adaptive Density Estimation for Generative Models
Unsupervised learning of generative models has seen tremendous progress over
recent years, in particular due to generative adversarial networks (GANs),
variational autoencoders, and flow-based models. GANs have dramatically
improved sample quality, but suffer from two drawbacks: (i) they mode-drop,
i.e., do not cover the full support of the training data, and (ii) they do not
allow for likelihood evaluations on held-out data. In contrast,
likelihood-based training encourages models to cover the full support of the
training data, but yields poorer samples. These mutual shortcomings can in
principle be addressed by training generative latent variable models in a
hybrid adversarial-likelihood manner. However, we show that commonly made
parametric assumptions create a conflict between them, making successful hybrid
models non-trivial. As a solution, we propose to use deep invertible
transformations in the latent variable decoder. This approach allows for
likelihood computations in image space, is more efficient than fully invertible
models, and can take full advantage of adversarial training. We show that our
model significantly improves over existing hybrid models: offering GAN-like
samples, IS and FID scores that are competitive with fully adversarial models,
and improved likelihood scores.
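To make the hybrid adversarial-likelihood idea concrete, here is a minimal sketch, not the paper's implementation: a RealNVP-style affine coupling layer serves as the invertible decoder head, so the change-of-variables formula yields exact log-likelihoods in image space that can be combined with a non-saturating adversarial term. The dimensions, the latent_log_prob callback, and the weight lam are our illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineCoupling(nn.Module):
    """One RealNVP-style coupling layer: invertible with a tractable log-det."""
    def __init__(self, dim=784, hidden=256):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        # maps x -> y and returns log|det dy/dx|; the inverse is closed-form
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=1)
        s = torch.tanh(s)  # bounded scales for numerical stability
        y = torch.cat([x1, x2 * torch.exp(s) + t], dim=1)
        return y, s.sum(dim=1)

def image_log_likelihood(flow, x, latent_log_prob):
    # change of variables: log p(x) = log p(f(x)) + log|det df/dx|
    y, logdet = flow(x)
    return latent_log_prob(y) + logdet

def hybrid_loss(flow, x_real, fake_disc_logits, latent_log_prob, lam=0.1):
    # e.g. latent_log_prob = lambda y: torch.distributions.Normal(0., 1.).log_prob(y).sum(1)
    nll = -image_log_likelihood(flow, x_real, latent_log_prob).mean()
    adv = F.softplus(-fake_disc_logits).mean()  # non-saturating GAN term
    return nll + lam * adv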
Improved Techniques for Adversarial Discriminative Domain Adaptation
Adversarial discriminative domain adaptation (ADDA) is an efficient framework
for unsupervised domain adaptation in image classification, where the source
and target domains are assumed to have the same classes, but no labels are
available for the target domain. We investigate whether we can improve
performance of ADDA with a new framework and new loss formulations. Following
the framework of semi-supervised GANs, we first extend the discriminator output
over the source classes, in order to model the joint distribution over domain
and task. We thus leverage the distribution over the source encoder
posteriors (which is fixed during adversarial training) and propose maximum
mean discrepancy (MMD) and reconstruction-based loss functions for aligning the
target encoder distribution to the source domain. We compare and provide a
comprehensive analysis of how our framework and loss formulations extend over
simple multi-class extensions of ADDA and other discriminative variants of
semi-supervised GANs. In addition, we introduce various forms of regularization
for stabilizing training, including treating the discriminator as a denoising
autoencoder and regularizing the target encoder with source examples to reduce
overfitting under a contraction mapping (i.e., when the target per-class
distributions are contracting during alignment with the source). Finally, we
validate our framework on standard domain adaptation datasets, such as SVHN and
MNIST. We also examine how our framework benefits recognition problems based on
modalities that lack training data, by introducing and evaluating on a
neuromorphic vision sensing (NVS) sign language recognition dataset, where the
source and target domains constitute emulated and real neuromorphic spike
events, respectively. Our results on all datasets show that our proposal
competes with or outperforms the state-of-the-art in unsupervised domain adaptation.
Comment: To appear in IEEE Transactions on Image Processing.
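As one concrete rendering of the alignment losses mentioned above, the sketch below (assumed by us, not taken from the paper) computes a multi-bandwidth RBF-kernel maximum mean discrepancy between features from the frozen source encoder and the trainable target encoder; the bandwidths are illustrative.

import torch

def rbf_kernel(a, b, sigmas=(1.0, 5.0, 10.0)):
    # sum of Gaussian kernels over several bandwidths
    d2 = torch.cdist(a, b).pow(2)
    return sum(torch.exp(-d2 / (2.0 * s ** 2)) for s in sigmas)

def mmd_loss(source_feats, target_feats):
    # biased MMD^2 estimate between the two feature distributions
    k_ss = rbf_kernel(source_feats, source_feats).mean()
    k_tt = rbf_kernel(target_feats, target_feats).mean()
    k_st = rbf_kernel(source_feats, target_feats).mean()
    return k_ss + k_tt - 2.0 * k_st

# usage sketch: the source encoder stays fixed during adversarial training
# with torch.no_grad():
#     s = source_encoder(x_src)
# loss = mmd_loss(s, target_encoder(x_tgt))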
Two-Level Adversarial Visual-Semantic Coupling for Generalized Zero-shot Learning
The performance of generative zero-shot methods mainly depends on the quality
of generated features and how well the model facilitates knowledge transfer
between visual and semantic domains. The quality of generated features is a
direct consequence of the model's ability to capture the multiple modes of
the underlying data distribution. To address these issues, we propose a new
two-level joint maximization idea to augment the generative network with an
inference network during training, which helps our model capture the multiple
modes of the data and generate features that better represent the underlying
data distribution. This provides strong cross-modal interaction for effective
transfer of knowledge between visual and semantic domains. Furthermore,
existing methods train the zero-shot classifier either on generated synthetic
image features or on latent embeddings produced via representation learning.
In this work, we unify these paradigms into a single model which, in addition
to synthesizing image features, also utilizes the representation learning
capabilities of the inference network to provide discriminative
features for the final zero-shot recognition task. We evaluate our approach on
four benchmark datasets, i.e., CUB, FLO, AWA1, and AWA2, against several
state-of-the-art methods, and report its performance. We also perform ablation
studies to analyze and understand our method more carefully for the Generalized
Zero-shot Learning task.
Comment: Under Submission.
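A minimal sketch of the unified classifier idea, under our own assumptions about module shapes and names (none of this is the authors' implementation): the final zero-shot classifier consumes synthesized image features concatenated with the inference network's latent embedding, so both representation paths feed the recognition task.

import torch
import torch.nn as nn

# assumed sizes: 312-d attributes (as in CUB), 2048-d CNN features
attr_dim, noise_dim, feat_dim, lat_dim, n_classes = 312, 64, 2048, 128, 50

generator = nn.Sequential(nn.Linear(attr_dim + noise_dim, 1024), nn.ReLU(),
                          nn.Linear(1024, feat_dim))   # attributes + noise -> features
inference = nn.Sequential(nn.Linear(feat_dim, 512), nn.ReLU(),
                          nn.Linear(512, lat_dim))     # features -> latent embedding
classifier = nn.Linear(feat_dim + lat_dim, n_classes)

def classifier_inputs(attrs):
    # synthesize features for unseen classes, then append the inference
    # network's embedding so the classifier trains on both representations
    noise = torch.randn(attrs.size(0), noise_dim)
    feats = generator(torch.cat([attrs, noise], dim=1))
    return torch.cat([feats, inference(feats)], dim=1)

# usage sketch:
# logits = classifier(classifier_inputs(unseen_attrs))
# loss = nn.functional.cross_entropy(logits, unseen_labels)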