The Unreasonable Effectiveness of Deep Features as a Perceptual Metric
While it is nearly effortless for humans to quickly assess the perceptual
similarity between two images, the underlying processes are thought to be quite
complex. Despite this, the most widely used perceptual metrics today, such as
PSNR and SSIM, are simple, shallow functions, and fail to account for many
nuances of human perception. Recently, the deep learning community has found
that features of the VGG network trained on ImageNet classification have been
remarkably useful as a training loss for image synthesis. But how perceptual
are these so-called "perceptual losses"? What elements are critical for their
success? To answer these questions, we introduce a new dataset of human
perceptual similarity judgments. We systematically evaluate deep features
across different architectures and tasks and compare them with classic metrics.
We find that deep features outperform all previous metrics by large margins on
our dataset. More surprisingly, this result is not restricted to
ImageNet-trained VGG features, but holds across different deep architectures
and levels of supervision (supervised, self-supervised, or even unsupervised).
Our results suggest that perceptual similarity is an emergent property shared
across deep visual representations.
Comment: Accepted to CVPR 2018; Code and data available at
https://www.github.com/richzhang/PerceptualSimilarit
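The core idea of comparing images through deep features can be sketched without any particular network: unit-normalize each layer's activations along the channel dimension, then average the squared differences. This is a minimal NumPy sketch of that distance; the function name, the use of plain arrays in place of real network activations, and the uniform layer weighting are all assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def deep_feature_distance(feats_a, feats_b):
    """Hypothetical sketch of a deep-feature perceptual distance.

    feats_a, feats_b: lists of per-layer activation maps of shape (C, H, W),
    standing in for features extracted from a trained network (e.g. VGG).
    """
    total = 0.0
    for fa, fb in zip(feats_a, feats_b):
        # Unit-normalize the channel vector at every spatial position.
        na = fa / (np.linalg.norm(fa, axis=0, keepdims=True) + 1e-10)
        nb = fb / (np.linalg.norm(fb, axis=0, keepdims=True) + 1e-10)
        # Accumulate the mean squared difference for this layer.
        total += np.mean((na - nb) ** 2)
    return total
```

Identical feature stacks give a distance of zero, and the distance grows as activations diverge, which is the property a perceptual metric needs before any learned per-channel weighting is added.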
Discriminative Region Proposal Adversarial Networks for High-Quality Image-to-Image Translation
Image-to-image translation has made much progress with the adoption of
Generative Adversarial Networks (GANs). However, it remains very challenging
for translation tasks that require high quality, especially high resolution
and photorealism. In this paper, we present Discriminative Region Proposal
Adversarial Networks (DRPAN) for high-quality image-to-image translation. We
decompose the image-to-image translation procedure into three iterated steps:
first, generate an image with global structure but some local artifacts (via a
GAN); second, use our DRPnet to propose the most fake region of the generated
image; and third, perform "image inpainting" on that region through a reviser
to obtain a more realistic result, so that the system (DRPAN) is gradually
optimized to synthesize images with more attention on the most artifact-prone
local parts. Experiments on a variety of
image-to-image translation tasks and datasets validate that our method
outperforms the state of the art in producing high-quality translation
results, in terms of both human perceptual studies and automatic quantitative
measures.
Comment: ECCV 201
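The three iterated steps above can be sketched as a single forward pass with a revision loop. Everything here is a placeholder: the function name, the component signatures, and the toy region representation are assumptions for illustration, not the authors' API.

```python
def drpan_forward(generator, drpnet, reviser, x, num_revisions=1):
    """Hypothetical sketch of DRPAN's three-step procedure.

    generator: produces an image with global structure but local artifacts.
    drpnet:    proposes the "most fake" region of a generated image.
    reviser:   refines (inpaints) the proposed region for a more realistic result.
    """
    fake = generator(x)                 # step 1: coarse generation (via GAN)
    for _ in range(num_revisions):
        region = drpnet(fake)           # step 2: propose the most fake region
        fake = reviser(fake, region)    # step 3: revise that region
    return fake
```

With toy stand-ins for the three components, the control flow is easy to check: the generator runs once and the propose/revise pair runs `num_revisions` times.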
Full Resolution Image Compression with Recurrent Neural Networks
This paper presents a set of full-resolution lossy image compression methods
based on neural networks. Each of the architectures we describe can provide
variable compression rates during deployment without requiring retraining of
the network: each network need only be trained once. All of our architectures
consist of a recurrent neural network (RNN)-based encoder and decoder, a
binarizer, and a neural network for entropy coding. We compare RNN types (LSTM,
associative LSTM) and introduce a new hybrid of GRU and ResNet. We also study
"one-shot" versus additive reconstruction architectures and introduce a new
scaled-additive framework. We compare to previous work, showing improvements of
4.3%-8.8% AUC (area under the rate-distortion curve), depending on the
perceptual metric used. As far as we know, this is the first neural network
architecture that is able to outperform JPEG at image compression across most
bitrates on the rate-distortion curve on the Kodak dataset images, with and
without the aid of entropy coding.
Comment: Updated with content for CVPR and moved supplemental material to an
external link due to size limitations.
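The additive reconstruction scheme the abstract contrasts with "one-shot" decoding can be sketched independently of the RNN details: each iteration codes the current residual and adds its decoded output to a running reconstruction. In this minimal sketch, `code_step` stands in for the paper's encoder/binarizer/decoder stack (a lossy channel), which is not reproduced here; the function and parameter names are assumptions.

```python
import numpy as np

def additive_reconstruct(image, code_step, num_iters):
    """Sketch of additive reconstruction for iterative lossy compression.

    code_step: a lossy encode->decode pass over the residual, standing in
    for the RNN encoder, binarizer, and decoder.
    """
    reconstruction = np.zeros_like(image, dtype=float)
    for _ in range(num_iters):
        residual = image - reconstruction   # what is still missing
        reconstruction = reconstruction + code_step(residual)
    return reconstruction
```

Because each pass only has to represent the remaining residual, more iterations spend more bits and monotonically reduce the error, which is what lets one trained network serve many bitrates.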
Improved Lossy Image Compression with Priming and Spatially Adaptive Bit Rates for Recurrent Networks
We propose a method for lossy image compression based on recurrent,
convolutional neural networks that outperforms BPG (4:2:0), WebP, JPEG2000,
and JPEG as measured by MS-SSIM. We introduce three improvements over previous
research that lead to this state-of-the-art result. First, we show that
training with a pixel-wise loss weighted by SSIM increases reconstruction
quality according to several metrics. Second, we modify the recurrent
architecture to improve spatial diffusion, which allows the network to more
effectively capture and propagate image information through the network's
hidden state. Finally, in addition to lossless entropy coding, we use a
spatially adaptive bit allocation algorithm to more efficiently use the limited
number of bits to encode visually complex image regions. We evaluate our method
on the Kodak and Tecnick image sets and compare against standard codecs as
well as recently published methods based on deep neural networks.
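The first improvement, an SSIM-weighted pixel-wise loss, can be sketched as follows: pixels in regions with low local SSIM (poorly reconstructed) get a higher weight, focusing training on visually damaged areas. Here `ssim_map` is assumed to be a precomputed per-pixel SSIM in [0, 1]; its computation is omitted, and this particular weighting form is an assumption for illustration, not the paper's exact formulation.

```python
import numpy as np

def ssim_weighted_pixel_loss(pred, target, ssim_map):
    """Sketch of a pixel-wise loss weighted by local (dis)similarity.

    pred, target: image arrays of the same shape.
    ssim_map:     per-pixel SSIM in [0, 1] (precomputed; higher = better).
    """
    weight = 1.0 - ssim_map              # emphasize poorly reconstructed pixels
    return np.mean(weight * np.abs(pred - target))
```

Pixels already reconstructed well (SSIM near 1) contribute almost nothing, so the gradient concentrates on the regions that most hurt perceived quality.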