FAST EVALUATION AND INTERPOLATION
Adversarial nets with perceptual losses for text-to-image synthesis
Recent approaches in generative adversarial networks (GANs) can automatically
synthesize realistic images from descriptive text. Despite the overall fair
quality, the generated images often exhibit visible flaws and lack structural
definition for the object of interest. In this paper, we aim to extend the
state of the art for GAN-based text-to-image synthesis by improving the
perceptual quality of generated images. In contrast to previous work, our
synthetic image generator optimizes perceptual loss functions that measure
pixel, feature-activation, and texture differences against a natural image. We
present visually more compelling synthetic images of birds and flowers
generated from text descriptions, in comparison to some of the most prominent
existing work.
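The three loss terms named in this abstract map naturally onto a short sketch.
The following is a minimal PyTorch illustration, not the authors'
implementation: the pixel term compares images directly, the feature-activation
term compares features from a pretrained VGG16 (an assumed extractor; the paper
does not fix the network here), and the texture term compares Gram matrices of
those features. The layer cutoff and loss weights are hypothetical.

    import torch
    import torch.nn.functional as F
    from torchvision.models import vgg16, VGG16_Weights

    # Frozen VGG16 features up to relu2_2 (indices 0..8), an assumed
    # extractor for the activation and texture terms. Inputs are assumed
    # to be ImageNet-normalized (B, 3, H, W) tensors.
    vgg = vgg16(weights=VGG16_Weights.DEFAULT).features[:9].eval()
    for p in vgg.parameters():
        p.requires_grad_(False)

    def gram_matrix(feat):
        # Channel-wise feature correlations, used for the texture term.
        b, c, h, w = feat.shape
        f = feat.view(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)

    def perceptual_loss(fake, real, w_pix=1.0, w_feat=1.0, w_tex=1.0):
        # Pixel term: direct difference between the two images.
        l_pix = F.l1_loss(fake, real)
        # Feature-activation term: difference in VGG feature space.
        f_fake, f_real = vgg(fake), vgg(real)
        l_feat = F.mse_loss(f_fake, f_real)
        # Texture term: difference between Gram matrices of the features.
        l_tex = F.mse_loss(gram_matrix(f_fake), gram_matrix(f_real))
        return w_pix * l_pix + w_feat * l_feat + w_tex * l_tex

In a training loop, `perceptual_loss(generated, reference)` would simply be
added to the usual adversarial loss of the generator.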
Adversarial Learning of Semantic Relevance in Text to Image Synthesis
We describe a new approach that improves the training of generative
adversarial nets (GANs) for synthesizing diverse images from a text input. Our
approach is based on the conditional version of GANs and expands on previous
work leveraging an auxiliary task in the discriminator. Our generated images
are not limited to certain classes and do not suffer from mode collapse while
semantically matching the text input. A key element of our training method is
how we form positive and negative training examples with respect to the class
label of
a given image. Instead of selecting random training examples, we perform
negative sampling based on the semantic distance from a positive example in the
class. We evaluate our approach using the Oxford-102 flower dataset, adopting
the inception score and multi-scale structural similarity index (MS-SSIM)
metrics to assess discriminability and diversity of the generated images. The
empirical results indicate greater diversity in the generated images,
especially when we gradually select more negative training examples closer to a
positive example in the semantic space.
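The sampling idea in this abstract can be sketched concisely: rank candidate
mismatched examples by semantic distance to the positive example's embedding
and draw negatives from the closest ones. The sketch below is a minimal NumPy
illustration under assumptions of our own; the embedding source, the cosine
distance, and the `closeness` schedule are hypothetical stand-ins for the
authors' exact setup.

    import numpy as np

    def sample_negatives(pos_emb, cand_embs, k, closeness=0.5, rng=None):
        """Pick k negative indices from cand_embs. `closeness` in [0, 1]
        shrinks the pool toward candidates nearest the positive example
        in semantic space; a curriculum would raise it during training."""
        rng = rng or np.random.default_rng()
        # Cosine distance from the positive embedding to each candidate.
        pos = pos_emb / np.linalg.norm(pos_emb)
        cand = cand_embs / np.linalg.norm(cand_embs, axis=1, keepdims=True)
        dist = 1.0 - cand @ pos
        # Restrict sampling to the closest fraction of candidates, but
        # never to fewer than k of them.
        order = np.argsort(dist)
        pool = order[: max(k, int(len(order) * (1.0 - closeness)))]
        return rng.choice(pool, size=k, replace=False)

Gradually increasing `closeness` mirrors the paper's observation that harder
negatives, i.e. those nearer the positive example, yield greater diversity in
the generated images.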
- …