Self-Supervised Feature Learning by Learning to Spot Artifacts
We introduce a novel self-supervised learning method based on adversarial
training. Our objective is to train a discriminator network to distinguish real
images from images with synthetic artifacts, and then to extract features from
its intermediate layers that can be transferred to other data domains and
tasks. To generate images with artifacts, we pre-train a high-capacity
autoencoder and then we use a damage and repair strategy: First, we freeze the
autoencoder and damage the output of the encoder by randomly dropping its
entries. Second, we augment the decoder with a repair network, and train it in
an adversarial manner against the discriminator. The repair network helps
generate more realistic images by inpainting the dropped feature entries. To
make the discriminator focus on the artifacts, we also make it predict what
entries in the feature were dropped. We demonstrate experimentally that
features learned by creating and spotting artifacts achieve state-of-the-art
performance on several benchmarks. Comment: CVPR 2018 (spotlight).
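The "damage" step above can be sketched in a few lines of plain Python. This is an illustrative simplification, not the paper's implementation: the feature map is modeled as a flat list of floats, and the binary drop mask (which the discriminator is trained to predict as an auxiliary task) is returned alongside the damaged features.

```python
import random

def damage_features(features, drop_prob, seed=0):
    """Randomly zero out ("drop") entries of a frozen encoder's feature
    vector. Returns the damaged features and the binary drop mask that
    the discriminator is later asked to predict (1 = dropped).
    Hypothetical helper for illustration only."""
    rng = random.Random(seed)
    mask = [1 if rng.random() < drop_prob else 0 for _ in features]
    damaged = [0.0 if m else f for f, m in zip(features, mask)]
    return damaged, mask
```

In the full method, the damaged features would then pass through the decoder augmented with the repair network, which inpaints the dropped entries before the discriminator sees the result.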
Discriminative Region Proposal Adversarial Networks for High-Quality Image-to-Image Translation
Image-to-image translation has made great progress with the adoption of
Generative Adversarial Networks (GANs). However, it remains very challenging
for translation tasks that demand high quality, especially high resolution
and photorealism. In this paper, we present Discriminative Region Proposal
Adversarial Networks (DRPAN) for high-quality image-to-image translation. We
decompose the image-to-image translation procedure into three iterated
steps: first, generate an image with correct global structure but some local
artifacts (via GAN); second, use our DRPnet to propose the most fake region
of the generated image; and third, perform "image inpainting" on that most
fake region through a reviser to obtain a more realistic result. The system
(DRPAN) is thus gradually optimized to synthesize images, paying more
attention to the most artifact-ridden local parts. Experiments on a variety of
image-to-image translation tasks and datasets validate that our method
outperforms the state of the art in producing high-quality translation results,
in terms of both human perceptual studies and automatic quantitative measures. Comment: ECCV 2018.
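The three iterated steps can be sketched as one update over per-region fakeness scores. This is a toy abstraction, not the paper's networks: regions are modeled as scalar scores, the region-proposal step is an argmax over fakeness, and the reviser is passed in as a function.

```python
def drpan_step(patch_scores, revise):
    """One iteration of the loop described in the abstract (toy version):
    pick the most fake region, then apply the reviser to that region only.
    patch_scores: list of fakeness scores, higher = more fake.
    revise: hypothetical callable that lowers a region's fakeness."""
    worst = max(range(len(patch_scores)), key=lambda i: patch_scores[i])
    updated = list(patch_scores)
    updated[worst] = revise(updated[worst])
    return updated, worst
```

Iterating this step concentrates refinement on whichever local region currently looks most artificial, which is the mechanism the abstract credits for the gradual quality improvement.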
SketchyGAN: Towards Diverse and Realistic Sketch to Image Synthesis
Synthesizing realistic images from human drawn sketches is a challenging
problem in computer graphics and vision. Existing approaches either need exact
edge maps, or rely on retrieval of existing photographs. In this work, we
propose a novel Generative Adversarial Network (GAN) approach that synthesizes
plausible images from 50 categories including motorcycles, horses and couches.
We demonstrate a data augmentation technique for sketches which is fully
automatic, and we show that the augmented data is helpful to our task. We
introduce a new network building block suitable for both the generator and
discriminator which improves the information flow by injecting the input image
at multiple scales. Compared to state-of-the-art image translation methods, our
approach generates more realistic images and achieves significantly higher
Inception Scores. Comment: Accepted to CVPR 2018.
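Injecting the input at multiple scales presumes downsampled copies of the sketch at each resolution. A minimal pure-Python sketch of that idea (2x average pooling per level; this illustrates the multi-scale injection concept, not the paper's exact building block):

```python
def multiscale_pyramid(image, levels):
    """Build progressively 2x-downsampled copies of the input sketch so it
    can be injected at each scale of the generator/discriminator.
    image: 2D list of floats with even height and width.
    Illustrative helper, not from the paper."""
    pyramid = [image]
    for _ in range(levels - 1):
        img = pyramid[-1]
        h, w = len(img), len(img[0])
        # 2x2 average pooling halves each spatial dimension.
        img = [[(img[2 * r][2 * c] + img[2 * r][2 * c + 1] +
                 img[2 * r + 1][2 * c] + img[2 * r + 1][2 * c + 1]) / 4.0
                for c in range(w // 2)] for r in range(h // 2)]
        pyramid.append(img)
    return pyramid
```

Each pyramid level would then be concatenated with (or otherwise fed into) the feature maps at the matching resolution, which is what improves information flow through the network.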
The Missing Data Encoder: Cross-Channel Image Completion with Hide-And-Seek Adversarial Network
Image completion is the problem of generating whole images from fragments
only. It encompasses inpainting (generating a patch given its surrounding),
reverse inpainting/extrapolation (generating the periphery given the central
patch) as well as colorization (generating one or several channels given other
ones). In this paper, we employ a deep network to perform image completion,
with adversarial training as well as perceptual and completion losses, and call
it the "missing data encoder" (MDE). We consider several configurations based
on how the seed fragments are chosen. We show that training MDE for "random
extrapolation and colorization" (MDE-REC), i.e. using random
channel-independent fragments, allows a better capture of the image semantics
and geometry. MDE training makes use of a novel "hide-and-seek" adversarial
loss, where the discriminator seeks the original non-masked regions, while the
generator tries to hide them. We validate our models both qualitatively and
quantitatively on several datasets, showing their usefulness for image
completion, unsupervised representation learning, and face occlusion
handling.
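One plausible reading of the "hide-and-seek" loss is a per-region classification game: the discriminator tries to identify which regions are original (non-masked), while the generator is rewarded when original regions are misclassified. The sketch below is this simplified interpretation, not the paper's actual loss; all names are illustrative.

```python
import math

def hide_and_seek_losses(pred_original, mask):
    """Simplified per-region adversarial losses for the hide-and-seek game.
    pred_original: discriminator's probability that each region is original.
    mask: 1 if the region really is original (non-masked), else 0.
    The discriminator minimizes d_loss (binary cross-entropy); the
    generator minimizes g_loss, i.e. it "hides" the original regions."""
    eps = 1e-7
    n = len(mask)
    d_loss = -sum(m * math.log(p + eps) + (1 - m) * math.log(1 - p + eps)
                  for p, m in zip(pred_original, mask)) / n
    g_loss = -sum(m * math.log(1 - p + eps)
                  for p, m in zip(pred_original, mask)) / max(1, sum(mask))
    return d_loss, g_loss
```

When the discriminator confidently finds the original regions, d_loss is small and g_loss is large, pushing the generator to make generated and original regions indistinguishable.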
Geometry-Aware Face Completion and Editing
Face completion is a challenging generation task because it requires
generating visually pleasing new pixels that are semantically consistent with
the unmasked face region. This paper proposes a geometry-aware Face Completion
and Editing NETwork (FCENet) by systematically studying facial geometry from
the unmasked region. Firstly, a facial geometry estimator is learned to
estimate facial landmark heatmaps and parsing maps from the unmasked face
image. Then, an encoder-decoder structure generator serves to complete a face
image and disentangle its mask areas, conditioned on both the masked face image
and the estimated facial geometry images. In addition, since manually labeled
masks exhibit a low-rank property, a low-rank regularization term is imposed on
the disentangled masks, enabling our completion network to handle occlusion
areas of various shapes and sizes. Furthermore, our network can generate diverse
results from the same masked input by modifying the estimated facial geometry,
which provides a flexible means of editing the completed face appearance. Extensive
experimental results qualitatively and quantitatively demonstrate that our
network is able to generate visually pleasing face completion results and edit
face attributes as well.
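The low-rank intuition is easy to verify: a rectangular occlusion mask, viewed as a matrix, has rank 1. The paper imposes a low-rank regularization term (typically such terms use a convex surrogate like the nuclear norm); the pure-Python rank computation below only illustrates why labeled masks are low rank, and is not the paper's regularizer.

```python
def matrix_rank(mat, tol=1e-9):
    """Matrix rank via Gaussian elimination (pure Python, for illustration).
    A rectangular block mask has identical rows, hence rank 1."""
    m = [list(map(float, row)) for row in mat]
    rows, cols = len(m), len(m[0])
    rank = 0
    for col in range(cols):
        # Find a pivot row at or below the current rank position.
        pivot = next((r for r in range(rank, rows) if abs(m[r][col]) > tol), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        # Eliminate this column from all other rows.
        for r in range(rows):
            if r != rank and abs(m[r][col]) > tol:
                factor = m[r][col] / m[rank][col]
                m[r] = [a - factor * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank
```

A 4x4 mask with a 2x2 occluded block has rank 1, while an irregular scribble mask would have higher rank; penalizing rank therefore biases the disentangled masks toward clean, regular occlusion shapes.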