Adversarial nets with perceptual losses for text-to-image synthesis
Recent approaches based on generative adversarial networks (GANs) can
automatically synthesize realistic images from descriptive text. Despite
reasonable overall quality, the generated images often exhibit visible flaws,
such as a lack of structural definition in the object of interest. In this
paper, we aim to extend the state of the art in GAN-based text-to-image
synthesis by improving the perceptual quality of generated images. In contrast
to previous work, our synthetic image generator optimizes perceptual loss
functions that measure pixel, feature-activation, and texture differences
against a natural image. We present visually more compelling synthetic images
of birds and flowers generated from text descriptions in comparison to some of
the most prominent existing work.
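A minimal sketch of the three perceptual terms named above (pixel,
feature-activation, and texture differences), assuming a frozen pretrained
VGG-16 from torchvision as the feature extractor; the layer cut-off and the
unit loss weights are illustrative assumptions, not the paper's exact
configuration.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

class PerceptualLoss(torch.nn.Module):
    def __init__(self, feature_layer=16):
        super().__init__()
        # Frozen VGG-16 truncated at an intermediate conv layer (assumed choice).
        self.features = vgg16(pretrained=True).features[:feature_layer].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    @staticmethod
    def gram(x):
        # Gram matrix of a feature map summarizes its texture statistics.
        b, c, h, w = x.shape
        f = x.view(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)

    def forward(self, fake, real):
        pixel = F.l1_loss(fake, real)                     # pixel difference
        f_fake, f_real = self.features(fake), self.features(real)
        feat = F.mse_loss(f_fake, f_real)                 # feature activations
        texture = F.mse_loss(self.gram(f_fake), self.gram(f_real))  # texture
        return pixel + feat + texture                     # unit weights for brevity
```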
GP-GAN: Gender Preserving GAN for Synthesizing Faces from Landmarks
Facial landmarks constitute the most compressed representation of faces and
are known to preserve information such as pose, gender and facial structure
present in the faces. Several works exist that attempt to perform high-level
face-related analysis tasks based on landmarks. In contrast, in this work, an
attempt is made to tackle the inverse problem of synthesizing faces from their
respective landmarks. The primary aim of this work is to demonstrate that
information preserved by landmarks (gender in particular) can be further
accentuated by leveraging generative models to synthesize corresponding faces.
Though the problem is particularly challenging due to its ill-posed nature, we
believe that successful synthesis will enable several applications such as
boosting the performance of high-level face-related tasks using landmark points and
performing dataset augmentation. To this end, a novel face-synthesis method
known as Gender Preserving Generative Adversarial Network (GP-GAN) that is
guided by adversarial loss, perceptual loss and a gender preserving loss is
presented. Further, we propose a novel generator sub-network UDeNet for GP-GAN
that leverages advantages of U-Net and DenseNet architectures. Extensive
experiments and comparison with recent methods are performed to verify the
effectiveness of the proposed method.Comment: 6 pages, 5 figures, this paper is accepted as 2018 24th International
Conference on Pattern Recognition (ICPR2018
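To make the objective concrete, here is a hedged sketch of how the three
losses guiding GP-GAN could be combined into the generator objective;
`disc`, `gender_clf`, `perceptual`, and the weights `lambda_p`/`lambda_g`
are assumed interfaces, since the paper defines its own networks and
weighting.

```python
import torch
import torch.nn.functional as F

def gp_gan_generator_loss(disc, gender_clf, perceptual, fake, real,
                          gender_labels, lambda_p=1.0, lambda_g=1.0):
    # Adversarial term: the generator tries to fool the discriminator.
    logits = disc(fake)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    # Perceptual term: match deep features of the target face.
    perc = perceptual(fake, real)
    # Gender-preserving term: an auxiliary classifier should still read the
    # input gender off the synthesized face.
    gender = F.binary_cross_entropy_with_logits(gender_clf(fake), gender_labels)
    return adv + lambda_p * perc + lambda_g * gender
```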
High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs
We present a new method for synthesizing high-resolution photo-realistic
images from semantic label maps using conditional generative adversarial
networks (conditional GANs). Conditional GANs have enabled a variety of
applications, but the results are often limited to low resolution and still far
from realistic. In this work, we generate 2048x1024 visually appealing results
with a novel adversarial loss, as well as new multi-scale generator and
discriminator architectures. Furthermore, we extend our framework to
interactive visual manipulation with two additional features. First, we
incorporate object instance segmentation information, which enables object
manipulations such as removing/adding objects and changing the object category.
Second, we propose a method to generate diverse results given the same input,
allowing users to edit the object appearance interactively. Human opinion
studies demonstrate that our method significantly outperforms existing methods,
advancing both the quality and the resolution of deep image synthesis and
editing.
Comment: v2: CVPR camera-ready; adds more results for the edge-to-photo example.
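As a concrete illustration of the instance-segmentation feature, the sketch
below derives a boundary map from an instance-ID map, which can be
concatenated channel-wise with the semantic label map as generator input;
this mirrors the edge-map idea the description implies, but the exact
encoding is an assumption on our part.

```python
import torch

def instance_boundary_map(inst: torch.Tensor) -> torch.Tensor:
    # inst: (B, 1, H, W) integer instance-ID map.
    edge = torch.zeros_like(inst, dtype=torch.float32)
    # Mark a pixel as boundary when a horizontal or vertical neighbor
    # belongs to a different instance.
    diff_x = (inst[:, :, :, 1:] != inst[:, :, :, :-1]).float()
    diff_y = (inst[:, :, 1:, :] != inst[:, :, :-1, :]).float()
    edge[:, :, :, 1:] += diff_x
    edge[:, :, :, :-1] += diff_x
    edge[:, :, 1:, :] += diff_y
    edge[:, :, :-1, :] += diff_y
    # Binary edge map, ready to concatenate with the label map.
    return (edge > 0).float()
```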
Contextual-based Image Inpainting: Infer, Match, and Translate
We study the task of image inpainting, which is to fill in the missing region
of an incomplete image with plausible contents. To this end, we propose a
learning-based approach to generate visually coherent completion given a
high-resolution image with missing components. To overcome the difficulty of
directly learning the distribution of high-dimensional image data,
we divide the task into inference and translation as two separate steps and
model each step with a deep neural network. We also use simple heuristics to
guide the propagation of local textures from the boundary to the hole. We show
that, by using such techniques, inpainting reduces to the problem of learning
two image-feature translation functions in a much smaller space, which is
easier to train. We evaluate our method on several public datasets and show
that we
generate results of better visual quality than previous state-of-the-art
methods.
Comment: ECCV 2018 camera-ready.
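A minimal sketch of the two-step decomposition described above, assuming two
black-box networks: an inference network that completes content at low
resolution, and a translation network that maps the completed result back to
the full-resolution image. The names and the 4x downsampling factor are
hypothetical, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStepInpainter(nn.Module):
    def __init__(self, infer_net: nn.Module, translate_net: nn.Module):
        super().__init__()
        self.infer = infer_net          # step 1: infer plausible coarse content
        self.translate = translate_net  # step 2: translate to full resolution

    def forward(self, image: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # Infer at low resolution, where the distribution is easier to learn.
        coarse = F.interpolate(image, scale_factor=0.25, mode='bilinear',
                               align_corners=False)
        small_mask = F.interpolate(mask, scale_factor=0.25, mode='nearest')
        completed = self.infer(
            torch.cat([coarse * (1 - small_mask), small_mask], dim=1))
        # Upsample the completed result; propagation of local textures from
        # the boundary into the hole would happen around this stage.
        upsampled = F.interpolate(completed, size=image.shape[-2:],
                                  mode='bilinear', align_corners=False)
        return self.translate(upsampled)
```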