MAGAN: Margin Adaptation for Generative Adversarial Networks
We propose the Margin Adaptation for Generative Adversarial Networks (MAGANs)
algorithm, a novel training procedure for GANs to improve stability and
performance by using an adaptive hinge loss function. We estimate the
appropriate hinge loss margin with the expected energy of the target
distribution, and derive principled criteria for when to update the margin. We
prove that our method converges to its global optimum under certain
assumptions. Evaluated on the task of unsupervised image generation, the
proposed training procedure is simple yet robust on a diverse set of data, and
achieves qualitative and quantitative improvements compared to the
state-of-the-art.
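As a rough illustration of the adaptive-margin idea, the sketch below pairs an EBGAN-style hinge loss with a margin re-estimated from the expected energy of real data. The names energy, G, and real_batches are placeholders, and the simple re-estimate stands in for the paper's more careful criteria for when to update the margin.

```python
# Illustrative sketch only (not the authors' code): an energy-based hinge loss
# whose margin is set from the expected energy of the target distribution.
import torch

def discriminator_loss(energy, G, x_real, z, margin):
    """Push real energy down; push generated energy up toward `margin`."""
    e_real = energy(x_real)          # energy of real samples (e.g. reconstruction error)
    e_fake = energy(G(z).detach())   # energy of generated samples
    return e_real.mean() + torch.clamp(margin - e_fake, min=0).mean()

def update_margin(energy, real_batches):
    """Re-estimate the margin as the expected energy of the real (target) data."""
    with torch.no_grad():
        return torch.stack([energy(x).mean() for x in real_batches]).mean().item()
```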
It Takes (Only) Two: Adversarial Generator-Encoder Networks
We present a new autoencoder-type architecture that is trainable in an
unsupervised mode, sustains both generation and inference, and has the quality
of conditional and unconditional samples boosted by adversarial learning.
Unlike previous hybrids of autoencoders and adversarial networks, the
adversarial game in our approach is set up directly between the encoder and the
generator, and no external mappings are trained in the process of learning. The
game objective compares the divergences of each of the real and the generated
data distributions with the prior distribution in the latent space. We show
that a direct generator-vs-encoder game leads to a tight coupling of the two
components, resulting in samples and reconstructions of a quality comparable
to some recently proposed, more complex architectures.
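For intuition, the following sketch sets up a generator-vs-encoder game of the kind described above, measuring how far a batch of latent codes is from a unit-Gaussian prior with a simple moment-matching KL divergence. The modules enc and gen are placeholders, and the paper's full objective also includes reconstruction terms omitted here.

```python
# Illustrative sketch only: an adversarial game played directly between an
# encoder and a generator in latent space, with no separate discriminator.
import torch

def kl_to_unit_gaussian(z):
    """KL( N(mean(z), var(z)) || N(0, I) ), estimated per latent dimension."""
    mu, var = z.mean(dim=0), z.var(dim=0) + 1e-8
    return 0.5 * (var + mu ** 2 - 1.0 - torch.log(var)).sum()

def encoder_loss(enc, gen, x_real, z_prior):
    # The encoder pulls codes of real data toward the prior and pushes
    # codes of generated data away from it.
    return (kl_to_unit_gaussian(enc(x_real))
            - kl_to_unit_gaussian(enc(gen(z_prior).detach())))

def generator_loss(enc, gen, z_prior):
    # The generator tries to make its samples' codes match the prior as well.
    return kl_to_unit_gaussian(enc(gen(z_prior)))
```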
Improving the Improved Training of Wasserstein GANs: A Consistency Term and Its Dual Effect
Despite being impactful on a variety of problems and applications, generative
adversarial nets (GANs) are remarkably difficult to train. This issue is
formally analyzed by Arjovsky and Bottou (2017), who also propose an
alternative direction to avoid the caveats in the min-max two-player training of
GANs. The corresponding algorithm, called Wasserstein GAN (WGAN), hinges on the
1-Lipschitz continuity of the discriminator. In this paper, we propose a novel
approach to enforcing the Lipschitz continuity in the training procedure of
WGANs. Our approach seamlessly connects WGAN with one of the recent
semi-supervised learning methods. As a result, it yields not only more
photo-realistic samples than previous methods but also state-of-the-art
semi-supervised learning results. In particular, our approach achieves an
Inception score of more than 5.0 with only 1,000 CIFAR-10 images and, to the
best of our knowledge, is the first to exceed 90% accuracy on the CIFAR-10
dataset using only 4,000 labeled images.
Comment: Accepted as a conference paper at the International Conference on Learning Representations (ICLR). Xiang Wei and Boqing Gong contributed equally to this work.
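As a hedged sketch of the consistency idea, the snippet below penalizes disagreement between two stochastic (dropout-perturbed) critic passes on the same real samples, on top of a WGAN critic objective. The names critic and lambda_ct are placeholders; the published consistency term also uses a hidden-layer distance and an offset that are omitted here, and a gradient penalty would normally be added as well.

```python
# Illustrative sketch only: a simplified consistency penalty encouraging the
# critic to behave smoothly (Lipschitz-like) around the real data.
import torch

def consistency_penalty(critic, x_real):
    """Penalize disagreement between two stochastic passes on the same reals."""
    critic.train()                       # keep dropout active so the two passes differ
    d1, d2 = critic(x_real), critic(x_real)
    return ((d1 - d2) ** 2).mean()

def critic_loss(critic, x_real, x_fake, lambda_ct=2.0):
    # Standard WGAN critic objective plus the consistency penalty; a gradient
    # penalty term (as in WGAN-GP) would typically be included as well.
    wasserstein = critic(x_fake).mean() - critic(x_real).mean()
    return wasserstein + lambda_ct * consistency_penalty(critic, x_real)
```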