The Perception-Distortion Tradeoff
Image restoration algorithms are typically evaluated by some distortion
measure (e.g. PSNR, SSIM, IFC, VIF) or by human opinion scores that quantify
perceived quality. In this paper, we prove mathematically that
distortion and perceptual quality are at odds with each other. Specifically, we
study the optimal probability for correctly discriminating the outputs of an
image restoration algorithm from real images. We show that as the mean
distortion decreases, this probability must increase (indicating worse
perceptual quality). Contrary to common belief, this result holds for any
distortion measure, not only for the PSNR or SSIM criteria. We also show
that generative adversarial networks (GANs) provide a principled way to
approach the perception-distortion bound, which constitutes theoretical
support for their observed success in low-level vision tasks. Based
on our analysis, we propose a new methodology for evaluating image restoration
methods, and use it to perform an extensive comparison between recent
super-resolution algorithms.
Comment: CVPR 2018 (long oral presentation), see talk at:
https://youtu.be/_aXbGqdEkjk?t=39m43
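The tradeoff the abstract describes can be summarized by a perception-distortion function; a minimal LaTeX sketch follows, where the symbols (X a natural image, Y its degraded observation, \hat{X} the restored output, \Delta a distortion measure, d a divergence between distributions) are assumptions spelled out here rather than quoted from the abstract:

% Sketch: best attainable perceptual quality (divergence from the
% distribution of real images) at mean distortion at most D.
P(D) = \min_{p_{\hat{X} \mid Y}} \, d\big(p_X, p_{\hat{X}}\big)
       \quad \text{subject to} \quad \mathbb{E}\big[\Delta(X, \hat{X})\big] \le D

Under this framing, the abstract's claim is that P(D) is non-increasing in D: tightening the distortion constraint can only raise the achievable divergence d(p_X, p_{\hat{X}}), and a larger divergence makes the restored outputs easier to discriminate from real images.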
UG^2: a Video Benchmark for Assessing the Impact of Image Restoration and Enhancement on Automatic Visual Recognition
Advances in image restoration and enhancement techniques have led to
discussion about how such algorithms can be applied as a pre-processing step to
improve automatic visual recognition. In principle, techniques like deblurring
and super-resolution should yield improvements by de-emphasizing noise and
increasing signal in an input image. But the historically divergent goals of
the computational photography and visual recognition communities have created a
significant need for more work in this direction. To facilitate new research,
we introduce a new benchmark dataset called UG^2, which contains three
difficult real-world scenarios: uncontrolled videos taken by UAVs and manned
gliders, as well as controlled videos taken on the ground. Over 160,000
annotated frames for hundreds of ImageNet classes are available, which are used
for baseline experiments that assess the impact of known and unknown image
artifacts and other conditions on common deep learning-based object
classification approaches. Further, current image restoration and enhancement
techniques are evaluated by determining whether or not they improve baseline
classification performance. Results show that there is plenty of room for
algorithmic innovation, making this dataset a useful tool going forward.
Comment: Supplemental material: https://goo.gl/vVM1xe, Dataset:
https://goo.gl/AjA6En, CVPR 2018 Prize Challenge: ug2challenge.or
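The evaluation the abstract describes reduces to comparing classifier accuracy with and without a restoration pre-processing step; a minimal Python sketch follows, where restore, classify, and the frame iterator are hypothetical placeholders rather than UG^2 APIs:

# Sketch: does a restoration step help object classification?
# `restore`, `classify`, and `frames` are hypothetical stand-ins;
# the UG^2 benchmark defines its own data format and protocols.
def top1_accuracy(frames, classify, restore=None):
    correct, total = 0, 0
    for image, label in frames:
        if restore is not None:
            image = restore(image)    # e.g. deblurring or super-resolution
        prediction = classify(image)  # returns a predicted class id
        correct += int(prediction == label)
        total += 1
    return correct / total if total else 0.0

# baseline = top1_accuracy(frames, classify)
# enhanced = top1_accuracy(frames, classify, restore=deblur)
# A positive (enhanced - baseline) gap would indicate the
# restoration step improved recognition on these frames.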