Unsupervised Holistic Image Generation from Key Local Patches
We introduce a new problem of generating an image based on a small number of
key local patches without any geometric prior. In this work, key local patches
are defined as informative regions of the target object or scene. This is a
challenging problem since it requires generating realistic images and
predicting locations of parts at the same time. We construct adversarial
networks to tackle this problem. A generator network, built on an
encoder-decoder framework, produces a fake image together with a mask, while a
discriminator network aims to detect fake images. The network is trained with
three losses to consider spatial, appearance, and adversarial information. The
spatial loss determines whether the locations of predicted parts are correct.
The appearance loss ensures that the input patches are preserved in the output
image with little modification. The adversarial loss ensures that output images
are realistic.
The proposed network is trained without supervisory signals since no labels of
key parts are required. Experimental results on six datasets demonstrate that
the proposed algorithm performs favorably on challenging objects and scenes.
Comment: 16 pages
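A minimal sketch of the three-term generator objective this abstract describes, combining spatial, appearance, and adversarial losses. The concrete formulas (L2 for the spatial term, masked L1 for the appearance term, a non-saturating adversarial term) and the loss weights are assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np

def spatial_loss(pred_mask, true_mask):
    # Penalize wrong predicted part locations (assumed: per-pixel L2).
    return float(np.mean((pred_mask - true_mask) ** 2))

def appearance_loss(output, input_patches, patch_mask):
    # Keep the input patches nearly unmodified in the output
    # (assumed: L1 restricted to the patch regions).
    return float(np.sum(np.abs(output - input_patches) * patch_mask)
                 / max(patch_mask.sum(), 1.0))

def adversarial_loss(d_fake):
    # Non-saturating generator loss: push D's score on fakes toward 1.
    eps = 1e-8
    return float(-np.mean(np.log(d_fake + eps)))

def generator_loss(pred_mask, true_mask, output, patches, patch_mask,
                   d_fake, w_sp=1.0, w_app=10.0, w_adv=1.0):
    # Weighted sum of the three terms; weights are hypothetical.
    return (w_sp * spatial_loss(pred_mask, true_mask)
            + w_app * appearance_loss(output, patches, patch_mask)
            + w_adv * adversarial_loss(d_fake))
```

With a perfect mask prediction, exactly restored patches, and a fooled discriminator, all three terms vanish, so the objective rewards exactly the three properties the abstract lists.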
Generative Face Completion
In this paper, we propose an effective face completion algorithm using a deep
generative model. Different from well-studied background completion, the face
completion task is more challenging as it often requires generating
semantically new pixels for the missing key components (e.g., eyes and mouths)
that contain large appearance variations. Unlike existing nonparametric
algorithms that search for patches to synthesize, our algorithm directly
generates contents for missing regions based on a neural network. The model is
trained with a combination of a reconstruction loss, two adversarial losses and
a semantic parsing loss, which together ensure pixel faithfulness and
local-global content consistency. Through extensive experiments, we demonstrate
qualitatively and quantitatively that our model is able to deal with a large
area of missing pixels in arbitrary shapes and generate realistic face
completion results.
Comment: Accepted by CVPR 201
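The combined objective here has four terms: a reconstruction loss, two adversarial losses (commonly a local one over the hole region and a global one over the whole face), and a semantic parsing loss. The sketch below assumes those standard forms (masked L2, non-saturating adversarial terms, cross-entropy over parsing maps) and hypothetical weights; it is not the paper's exact formulation.

```python
import numpy as np

def reconstruction_loss(pred, target, hole_mask):
    # L2 restricted to the missing region (assumed formulation).
    return float(np.sum(((pred - target) ** 2) * hole_mask)
                 / max(hole_mask.sum(), 1.0))

def adv_g_loss(d_score):
    # Generator side of an adversarial loss: push D's score toward 1.
    eps = 1e-8
    return float(-np.mean(np.log(d_score + eps)))

def parsing_loss(pred_parse, target_parse):
    # Cross-entropy between predicted and target face-parsing maps,
    # with class probabilities on the last axis (assumed form).
    eps = 1e-8
    return float(-np.mean(np.sum(target_parse * np.log(pred_parse + eps),
                                 axis=-1)))

def completion_loss(pred, target, hole_mask, d_local, d_global,
                    pred_parse, target_parse,
                    w_rec=100.0, w_local=1.0, w_global=1.0, w_parse=10.0):
    # Weighted sum of the four terms; weights are hypothetical.
    return (w_rec * reconstruction_loss(pred, target, hole_mask)
            + w_local * adv_g_loss(d_local)
            + w_global * adv_g_loss(d_global)
            + w_parse * parsing_loss(pred_parse, target_parse))
```

The parsing term is what distinguishes this setup from generic inpainting: a wrong but photorealistic eye still pays a penalty if its parsing labels disagree with the target layout.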
Self-Supervised Feature Learning by Learning to Spot Artifacts
We introduce a novel self-supervised learning method based on adversarial
training. Our objective is to train a discriminator network to distinguish real
images from images with synthetic artifacts, and then to extract features from
its intermediate layers that can be transferred to other data domains and
tasks. To generate images with artifacts, we pre-train a high-capacity
autoencoder and then we use a damage and repair strategy: First, we freeze the
autoencoder and damage the output of the encoder by randomly dropping its
entries. Second, we augment the decoder with a repair network, and train it in
an adversarial manner against the discriminator. The repair network helps
generate more realistic images by inpainting the dropped feature entries. To
make the discriminator focus on the artifacts, we also make it predict what
entries in the feature were dropped. We demonstrate experimentally that
features learned by creating and spotting artifacts achieve state-of-the-art
performance on several benchmarks.
Comment: CVPR 2018 (spotlight)
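The damage step and the discriminator's auxiliary drop-prediction head can be sketched as follows. The Bernoulli drop mask and the binary cross-entropy objective are assumed implementations of "randomly dropping its entries" and "predict what entries in the feature were dropped"; the actual repair network and discriminator architectures are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def damage(features, drop_prob=0.5, rng=rng):
    # Randomly zero out encoder-feature entries ("damage"). The drop mask
    # is returned so the discriminator can be supervised to predict it.
    keep = (rng.random(features.shape) >= drop_prob).astype(features.dtype)
    return features * keep, keep

def drop_prediction_loss(pred_mask_logits, true_mask):
    # Auxiliary discriminator head: binary cross-entropy against the
    # drop mask, pushing D to localize where the artifacts came from.
    p = 1.0 / (1.0 + np.exp(-pred_mask_logits))
    eps = 1e-8
    return float(-np.mean(true_mask * np.log(p + eps)
                          + (1.0 - true_mask) * np.log(1.0 - p + eps)))
```

In training, the repair network would inpaint the zeroed entries before decoding, and the discriminator would receive both the resulting image and the task of recovering the mask, which is what keeps its attention on the synthetic artifacts.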