Generalized Adversarially Learned Inference
Allowing effective inference of latent vectors while training GANs can
greatly increase their applicability in various downstream tasks. Recent
approaches, such as the ALI and BiGAN frameworks, infer latent variables in
GANs by adversarially training an image generator along
with an encoder to match two joint distributions of image and latent vector
pairs. We generalize these approaches to incorporate multiple layers of
feedback on reconstructions, self-supervision, and other forms of supervision
based on prior or learned knowledge about the desired solutions. We achieve
this by modifying the discriminator's objective to correctly identify more than
two joint distributions of tuples of an arbitrary number of random variables
consisting of images, latent vectors, and other variables generated through
auxiliary tasks, such as reconstruction and inpainting or as outputs of
suitable pre-trained models. We design a non-saturating maximization objective
for the generator-encoder pair and prove that the resulting adversarial game
has a global optimum at which all of these distributions are simultaneously
matched. Within our proposed framework, we introduce a novel set of
techniques for providing self-supervised feedback to the model based on
properties such as patch-level correspondence and cycle consistency of
reconstructions. Through comprehensive experiments, we demonstrate the
efficacy, scalability, and flexibility of the proposed approach for a variety
of tasks.
Comment: AAAI 2021 (accepted for publication).
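As a rough illustration of the generalized game described in this abstract, the sketch below sets up a K-way discriminator over (image, latent) tuples drawn from several joint distributions, plus a non-saturating generator-encoder loss. The module shapes, the particular choice of three tuple distributions, and the exact form of the non-saturating objective are assumptions for illustration (PyTorch), not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, image_dim, K = 64, 784, 3  # K joint distributions (illustrative sizes)

# Generator, encoder, and a K-way discriminator over concatenated (image, latent) tuples.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, image_dim))
E = nn.Sequential(nn.Linear(image_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
D = nn.Sequential(nn.Linear(image_dim + latent_dim, 256), nn.ReLU(), nn.Linear(256, K))

def d_logits(x, z):
    return D(torch.cat([x, z], dim=1))

def losses(x_real):
    n = x_real.size(0)
    z_prior = torch.randn(n, latent_dim)
    # Three joint distributions of (image, latent) tuples (a hypothetical choice):
    #   0: (x, E(x))        -- encoder distribution
    #   1: (G(z), z)        -- generator distribution
    #   2: (G(E(x)), E(x))  -- reconstruction feedback
    tuples = [(x_real, E(x_real)), (G(z_prior), z_prior), (G(E(x_real)), E(x_real))]

    # Discriminator: classify which of the K distributions each tuple came from.
    d_loss = sum(
        F.cross_entropy(d_logits(x.detach(), z.detach()),
                        torch.full((n,), k, dtype=torch.long))
        for k, (x, z) in enumerate(tuples))

    # Generator-encoder (one plausible non-saturating form): push each tuple
    # toward the classes it did NOT come from, rather than minimizing its own term.
    ge_loss = sum(
        F.cross_entropy(d_logits(x, z), torch.full((n,), j, dtype=torch.long))
        for k, (x, z) in enumerate(tuples) for j in range(K) if j != k)

    # In practice d_loss and ge_loss would be stepped with separate optimizers.
    return d_loss, ge_loss
```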
Flow-GAN: Combining Maximum Likelihood and Adversarial Learning in Generative Models
Adversarial learning of probabilistic models has recently emerged as a
promising alternative to maximum likelihood. Implicit models such as generative
adversarial networks (GANs) often generate better samples than explicit
models trained by maximum likelihood. Yet, GANs sidestep the characterization
of an explicit density, which makes quantitative evaluation challenging. To
bridge this gap, we propose Flow-GANs, a generative adversarial network for
which we can perform exact likelihood evaluation, thus supporting both
adversarial and maximum likelihood training. When trained adversarially,
Flow-GANs generate high-quality samples but attain extremely poor
log-likelihood scores, inferior even to a mixture model memorizing the training
data; the opposite is true when trained by maximum likelihood. Results on MNIST
and CIFAR-10 demonstrate that a hybrid of adversarial and maximum likelihood
training can attain high held-out likelihoods while retaining visual fidelity
in the generated samples.
Comment: AAAI 2018.
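The hybrid training idea can be sketched as a weighted sum of an adversarial loss and an exact negative log-likelihood, which is tractable because the generator is an invertible flow. The tiny elementwise affine flow, the discriminator, and the weight lambda_mle below are placeholder assumptions (PyTorch), not the paper's actual flow architecture or hyperparameters.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

dim = 784            # illustrative data dimensionality
lambda_mle = 0.1     # hypothetical weight on the maximum likelihood term

class AffineFlow(nn.Module):
    """Elementwise invertible map x = z * exp(s) + t with an exact log-likelihood."""
    def __init__(self, dim):
        super().__init__()
        self.s = nn.Parameter(torch.zeros(dim))
        self.t = nn.Parameter(torch.zeros(dim))

    def forward(self, z):                      # sampling direction: z -> x
        return z * torch.exp(self.s) + self.t

    def log_prob(self, x):                     # exact density via change of variables
        z = (x - self.t) * torch.exp(-self.s)
        log_pz = -0.5 * (z ** 2).sum(dim=1) - 0.5 * x.size(1) * math.log(2 * math.pi)
        log_det = -self.s.sum()                # log|det dz/dx|
        return log_pz + log_det

flow = AffineFlow(dim)
D = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, 1))

def generator_loss(x_real):
    z = torch.randn(x_real.size(0), dim)
    x_fake = flow(z)
    # Non-saturating adversarial term: make the discriminator call samples real.
    adv = F.binary_cross_entropy_with_logits(
        D(x_fake), torch.ones(x_fake.size(0), 1))
    # Exact negative log-likelihood of the data under the flow.
    nll = -flow.log_prob(x_real).mean()
    # Hybrid objective: adversarial sample quality plus likelihood coverage.
    return adv + lambda_mle * nll
```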