No-reference Image Denoising Quality Assessment
A wide variety of image denoising methods are available now. However, the
performance of a denoising algorithm often depends on the individual input
noisy image as well as on the algorithm's parameter settings. In this paper,
we present a no-reference image denoising quality assessment method that can
be used to select, for a given noisy image, the right denoising algorithm
with the optimal parameter setting. This is a challenging task because no
ground truth is available.
This paper presents a data-driven approach to learn to predict image denoising
quality. Our method is based on the observation that while individual existing
quality metrics and denoising models alone cannot robustly rank denoising
results, they often complement each other. We accordingly design denoising
quality features based on these existing metrics and models and then use Random
Forests Regression to aggregate them into a more powerful unified metric. Our
experiments on images with various types and levels of noise show that our
no-reference denoising quality assessment method significantly outperforms
state-of-the-art quality metrics. We also show how to leverage this quality
assessment to automatically tune the parameter settings of a denoising
algorithm for an input noisy image and produce an optimal denoising result.
Comment: 17 pages, 41 figures, accepted by Computer Vision Conference (CVC) 201
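To make the aggregation step concrete, the sketch below fits a Random Forests regressor over per-image quality features, in the spirit of the abstract above. It is a minimal sketch, not the authors' implementation: the feature matrix, the target quality scores, and all dimensions are placeholder assumptions.

```python
# Hypothetical sketch: aggregating complementary quality features with
# Random Forests Regression. Feature columns and targets are stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Each row holds the scores that several existing metrics/models assign to
# one denoising result; here it is random placeholder data.
X_train = rng.random((200, 6))
# Stand-in "true" quality scores (in practice measured against clean images
# that are available only at training time).
y_train = rng.random(200)

# Aggregate the individually weak features into one unified metric.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Rank candidate denoising results for a new noisy image by predicted quality,
# e.g. to pick the best parameter setting among the candidates.
X_candidates = rng.random((5, 6))
predicted_quality = model.predict(X_candidates)
best = int(np.argmax(predicted_quality))
print(f"best candidate: {best}, predicted quality: {predicted_quality[best]:.3f}")
```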
A Reduced Reference Image Quality Measure Using Bessel K Forms Model for Tetrolet Coefficients
In this paper, we introduce a Reduced Reference Image Quality Assessment
(RRIQA) measure based on the natural image statistic approach. A new adaptive
transform called "Tetrolet" is applied to both reference and distorted images.
To model the marginal distribution of the tetrolet coefficients, the Bessel K
Forms (BKF) density is proposed. Estimating the parameters of this
distribution makes it possible to summarize the reference image with a small
amount of side information. Five distortion measures based on the BKF
parameters of the original and processed images are used to predict quality
scores. A comparison of these measures shows good consistency with human
judgment.
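The BKF parameters are commonly estimated by moment matching: for a BKF density with shape p and scale c, the variance equals p·c and the (non-excess) kurtosis equals 3 + 3/p. The sketch below applies these relations to one coefficient subband, assuming those standard moment formulas; the Tetrolet transform itself is not implemented, and the input data are heavy-tailed stand-ins.

```python
# A minimal sketch of moment-based BKF parameter estimation for one subband
# of transform coefficients. Assumed moment relations: variance = p * c,
# kurtosis = 3 + 3/p. `coeffs` stands in for a tetrolet subband.
import numpy as np
from scipy.stats import kurtosis

def estimate_bkf_params(coeffs):
    """Estimate BKF shape p and scale c from one coefficient subband."""
    var = np.var(coeffs)
    kurt = kurtosis(coeffs, fisher=False)  # plain (non-excess) kurtosis
    p = 3.0 / max(kurt - 3.0, 1e-6)        # from kurtosis = 3 + 3/p
    c = var / p                            # from variance = p * c
    return p, c

# With (p, c) for the reference and distorted subbands in hand, a distortion
# measure can compare the two parameter pairs, so only this small amount of
# side information about the reference image needs to be transmitted.
ref_band = np.random.standard_t(df=5, size=10_000)    # heavy-tailed stand-in
dist_band = ref_band + 0.3 * np.random.randn(10_000)  # "distorted" stand-in
print(estimate_bkf_params(ref_band), estimate_bkf_params(dist_band))
```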
The Unreasonable Effectiveness of Deep Features as a Perceptual Metric
While it is nearly effortless for humans to quickly assess the perceptual
similarity between two images, the underlying processes are thought to be quite
complex. Despite this, the most widely used perceptual metrics today, such as
PSNR and SSIM, are simple, shallow functions, and fail to account for many
nuances of human perception. Recently, the deep learning community has found
that features of the VGG network trained on ImageNet classification have been
remarkably useful as a training loss for image synthesis. But how perceptual
are these so-called "perceptual losses"? What elements are critical for their
success? To answer these questions, we introduce a new dataset of human
perceptual similarity judgments. We systematically evaluate deep features
across different architectures and tasks and compare them with classic metrics.
We find that deep features outperform all previous metrics by large margins on
our dataset. More surprisingly, this result is not restricted to
ImageNet-trained VGG features, but holds across different deep architectures
and levels of supervision (supervised, self-supervised, or even unsupervised).
Our results suggest that perceptual similarity is an emergent property shared
across deep visual representations.
Comment: Accepted to CVPR 2018; Code and data available at
https://www.github.com/richzhang/PerceptualSimilarit
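To illustrate the idea, the sketch below computes a deep-feature distance from unit-normalized VGG16 activations. It follows the spirit of the metric described above but is not the authors' released LPIPS code: LPIPS additionally learns per-channel weights from the human judgments, whereas this sketch averages the squared feature differences uniformly, and the chosen layer indices are an assumption.

```python
# A minimal sketch of a deep-feature perceptual distance: compare two images
# via channel-normalized VGG16 activations at several layers. Not the
# released LPIPS metric (which adds learned per-channel weights).
import torch
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
layers = {3, 8, 15, 22, 29}  # ReLU outputs of the five VGG16 conv blocks

def deep_feature_distance(x, y):
    """Average squared difference of unit-normalized VGG16 features."""
    dist = 0.0
    for i, layer in enumerate(vgg):
        x, y = layer(x), layer(y)
        if i in layers:
            # Unit-normalize along the channel dimension before comparing.
            xn = x / (x.norm(dim=1, keepdim=True) + 1e-10)
            yn = y / (y.norm(dim=1, keepdim=True) + 1e-10)
            dist = dist + (xn - yn).pow(2).mean()
    return dist

# Usage on two random "images" (in practice: normalized RGB crops).
a = torch.rand(1, 3, 224, 224)
b = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    print(float(deep_feature_distance(a, b)))
```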