PixColor: Pixel Recursive Colorization
We propose a novel approach to automatically produce multiple colorized
versions of a grayscale image. Our method results from the observation that the
task of automated colorization is relatively easy given a low-resolution
version of the color image. We first train a conditional PixelCNN to generate a
low resolution color for a given grayscale image. Then, given the generated
low-resolution color image and the original grayscale image as inputs, we train
a second CNN to generate a high-resolution colorization of an image. We
demonstrate that our approach produces more diverse and plausible colorizations
than existing methods, as judged by human raters in a "Visual Turing Test".
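The two-stage pipeline above can be sketched as follows. This is a minimal illustration only: the real PixColor trains a conditional PixelCNN for stage one and a refinement CNN for stage two, whereas here both stages are hypothetical placeholders that just demonstrate the data flow (sample low-resolution chroma, then fuse it with the full-resolution grayscale).

```python
import numpy as np

def stage1_sample_lowres_color(gray, scale=8, rng=None):
    # Placeholder for the conditional PixelCNN: sample a low-resolution
    # color field conditioned on the grayscale input. Each call draws a
    # fresh sample, which is what yields diverse colorizations.
    rng = rng or np.random.default_rng()
    h, w = gray.shape[0] // scale, gray.shape[1] // scale
    # Two chroma channels (e.g. ab in Lab space), values in [-1, 1].
    return rng.uniform(-1.0, 1.0, size=(h, w, 2))

def upsample_nearest(x, scale):
    # Nearest-neighbour upsampling; stands in for the second CNN's
    # learned upsampling path.
    return np.repeat(np.repeat(x, scale, axis=0), scale, axis=1)

def stage2_refine(gray, lowres_color, scale=8):
    # Placeholder refinement: attach the upsampled chroma sample to the
    # full-resolution luminance channel to form an (H, W, 3) result.
    chroma = upsample_nearest(lowres_color, scale)
    return np.concatenate([gray[..., None], chroma], axis=-1)

gray = np.random.rand(64, 64)
colorized = stage2_refine(gray, stage1_sample_lowres_color(gray))
```

Calling `stage1_sample_lowres_color` several times and refining each sample gives multiple plausible colorizations of the same grayscale input, mirroring the diversity the abstract claims.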
The Devil is in the Decoder: Classification, Regression and GANs
Many machine vision applications, such as semantic segmentation and depth
prediction, require predictions for every pixel of the input image. Models for
such problems usually consist of encoders which decrease spatial resolution
while learning a high-dimensional representation, followed by decoders that
recover the original input resolution and produce low-dimensional
predictions. While encoders have been studied rigorously, relatively few
studies address the decoder side. This paper presents an extensive comparison
of a variety of decoders for a variety of pixel-wise tasks, ranging from
classification and regression to synthesis. Our contributions are: (1) Decoders
matter: we observe significant variance in results between different types of
decoders on various problems. (2) We introduce new residual-like connections
for decoders. (3) We introduce a novel decoder: bilinear additive upsampling.
(4) We explore prediction artifacts.
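The "bilinear additive upsampling" decoder named in contribution (3) can be sketched directly: bilinearly upsample the feature map, then sum each group of consecutive channels so the channel count drops as the spatial resolution grows, which lets the result be added residually to a same-shaped skip feature. The group size and scale below are illustrative choices, not values from the paper.

```python
import numpy as np

def bilinear_resize(x, out_h, out_w):
    # Plain bilinear interpolation of an (H, W, C) feature map.
    h, w, _ = x.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]
    wx = (xs - x0)[None, :, None]
    top = x[y0][:, x0] * (1 - wx) + x[y0][:, x1] * wx
    bot = x[y1][:, x0] * (1 - wx) + x[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def bilinear_additive_upsample(x, scale=2, group=4):
    # Upsample spatially, then sum every `group` consecutive channels,
    # reducing C to C // group without any learned parameters.
    up = bilinear_resize(x, x.shape[0] * scale, x.shape[1] * scale)
    h, w, c = up.shape
    assert c % group == 0
    return up.reshape(h, w, c // group, group).sum(axis=-1)

feat = np.random.rand(4, 4, 8)
out = bilinear_additive_upsample(feat)  # (8, 8, 2)
```

Because the operation is parameter-free, it can serve as the identity-like path in the residual-style decoder connections of contribution (2).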
Discriminative Region Proposal Adversarial Networks for High-Quality Image-to-Image Translation
Image-to-image translation has made much progress by embracing
Generative Adversarial Networks (GANs). However, it remains very challenging
for translation tasks that demand high quality, especially high resolution
and photorealism. In this paper, we present Discriminative Region Proposal
Adversarial Networks (DRPAN) for high-quality image-to-image translation. We
decompose the image-to-image translation task into three iterated steps:
first, generate an image with correct global structure but some local
artifacts (via GAN); second, use our DRPnet to propose the most fake region
of the generated image; and third, perform "image inpainting" on that region
with a reviser for a more realistic result. In this way, the system (DRPAN)
is gradually optimized to synthesize images while attending to the local
parts with the most artifacts. Experiments on a variety of
image-to-image translation tasks and datasets validate that our method
outperforms the state of the art in producing high-quality translation
results, in terms of both human perceptual studies and automatic quantitative
measures.
Comment: ECCV 201
Pixelated Semantic Colorization
While many image colorization algorithms have recently shown the capability
of producing plausible color versions from gray-scale photographs, they still
suffer from limited semantic understanding. To address this shortcoming, we
propose to exploit pixelated object semantics to guide image colorization. The
rationale is that human beings perceive and distinguish colors based on the
semantic categories of objects. Starting from an autoregressive model, we
generate image color distributions, from which diverse colored results are
sampled. We propose two ways to incorporate object semantics into the
colorization model: through a pixelated semantic embedding and a pixelated
semantic generator. Specifically, the proposed convolutional neural network
includes two branches. One branch learns what the object is, while the other
branch learns the object colors. The network jointly optimizes a color
embedding loss, a semantic segmentation loss and a color generation loss, in an
end-to-end fashion. Experiments on PASCAL VOC2012 and COCO-stuff reveal that
our network, when trained with semantic segmentation labels, produces more
realistic and finer results than the colorization state of the art.
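The joint objective described above, a color embedding loss plus a semantic segmentation loss plus a color generation loss, can be sketched as a weighted sum. The weights, the L2 form of the embedding loss, and the tensor shapes are illustrative assumptions; the abstract only names the three terms.

```python
import numpy as np

def cross_entropy(logits, labels):
    # Mean per-pixel cross-entropy; logits: (H, W, K), labels: (H, W) ints.
    z = logits - logits.max(axis=-1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    return -np.take_along_axis(logp, labels[..., None], axis=-1).mean()

def joint_loss(color_logits, color_bins, seg_logits, seg_labels,
               color_embed, target_embed,
               w_color=1.0, w_seg=1.0, w_embed=1.0):
    l_color = cross_entropy(color_logits, color_bins)     # color generation loss
    l_seg = cross_entropy(seg_logits, seg_labels)         # semantic segmentation loss
    l_embed = np.mean((color_embed - target_embed) ** 2)  # color embedding loss (L2 assumed)
    return w_color * l_color + w_seg * l_seg + w_embed * l_embed

# Toy example: uniform logits over 8 color bins and 5 semantic classes.
loss = joint_loss(np.zeros((4, 4, 8)), np.zeros((4, 4), int),
                  np.zeros((4, 4, 5)), np.zeros((4, 4), int),
                  np.zeros((4, 4, 16)), np.zeros((4, 4, 16)))
```

Optimizing all three terms end-to-end is what ties the "what is it" branch (segmentation) to the "what color is it" branch (generation) in the two-branch network.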