Neural Image Compression with a Diffusion-Based Decoder
Diffusion probabilistic models have recently achieved remarkable success in
generating high quality image and video data. In this work, we build on this
class of generative models and introduce a method for lossy compression of high
resolution images. The resulting codec, which we call the Diffusion-based Residual
Augmentation Codec (DIRAC), is the first neural codec to allow smooth traversal
of the rate-distortion-perception tradeoff at test time, while obtaining
competitive performance with GAN-based methods in perceptual quality.
Furthermore, while sampling from diffusion probabilistic models is notoriously
expensive, we show that in the compression setting the number of steps can be
drastically reduced.
Comment: v1: 26 pages, 13 figures; v2: corrected typo in first author name in
arXiv metadata
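The abstract's key claim is test-time traversal of the rate-distortion-perception tradeoff by augmenting a base reconstruction with a residual. As a hedged toy sketch (the function and variable names, the blending weight `alpha`, and all numbers below are illustrative assumptions, not the paper's actual method), one can picture the decoder blending a distortion-optimized base reconstruction with a sampled residual:

```python
# Hypothetical sketch, not the paper's implementation: a base codec produces
# an MSE-optimized reconstruction, a generative decoder predicts a residual,
# and a test-time weight `alpha` (assumed name) traverses the
# distortion-perception tradeoff: alpha=0 keeps the base reconstruction,
# alpha=1 applies the full residual enhancement.

def decode_with_residual(base, residual, alpha):
    """Blend the base reconstruction with the enhancement residual."""
    return [b + alpha * r for b, r in zip(base, residual)]

base = [0.2, 0.5, 0.9]         # toy "pixel" values from the base codec
residual = [0.05, -0.1, 0.02]  # toy residual a generative decoder might sample

low_distortion = decode_with_residual(base, residual, alpha=0.0)
high_perception = decode_with_residual(base, residual, alpha=1.0)
```

Intermediate `alpha` values would give the smooth traversal the abstract describes, without re-encoding.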
Domain-Specific Fusion Of Objective Video Quality Metrics
Video processing algorithms like video upscaling, denoising, and compression are now increasingly optimized for perceptual quality metrics instead of signal distortion. This means that they may score well on metrics like video multi-method assessment fusion (VMAF), but this may be due to metric overfitting. This imposes the need for costly subjective quality assessments that cannot scale to large datasets and large parameter explorations. We propose a methodology that fuses multiple quality metrics based on small-scale subjective testing in order to unlock their use at scale for specific application domains of interest. This is achieved by employing pseudo-random sampling of the available resolutions, quality ranges, and test video content, initially guided by quality metrics in order to cover the quality range useful to each application. The selected samples then undergo a subjective test, such as ITU-T P.910 absolute categorical rating, and the postprocessed test results are used to derive the best combination of multiple objective metrics using support vector regression. We showcase the benefits of this approach in two applications: video encoding with and without perceptual preprocessing, and deep video denoising & upscaling of compressed content. For both applications, the derived fusion of metrics allows for a more robust alignment to mean opinion scores than a perceptually-uninformed combination of the original metrics themselves. The dataset and code are available at https://github.com/isize-tech/VideoQualityFusion
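The fusion step described above fits a regressor from objective metric scores to subjective mean opinion scores (MOS). The paper uses support vector regression; as a dependency-free sketch of the same idea, the following substitutes ordinary least squares on made-up toy data (metric names, scores, and MOS values are all illustrative assumptions):

```python
# Sketch of metric fusion: given per-clip scores from several objective
# metrics and MOS from a small subjective test, learn a weighted combination
# that predicts MOS. The paper uses support vector regression; this sketch
# substitutes ordinary least squares so it runs with the stdlib only.

def fit_linear_fusion(metric_scores, mos):
    """Solve for weights (plus bias) minimizing ||Xw - mos||^2 via the
    normal equations and Gaussian elimination with partial pivoting."""
    X = [row + [1.0] for row in metric_scores]  # append bias column
    n = len(X[0])
    # Normal equations: (X^T X) w = X^T y
    A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(n)]
         for i in range(n)]
    b = [sum(X[k][i] * mos[k] for k in range(len(X))) for i in range(n)]
    for col in range(n):  # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * n  # back substitution
    for i in reversed(range(n)):
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, n))) / A[i][i]
    return w

def predict(w, metrics):
    return sum(wi * m for wi, m in zip(w, metrics)) + w[-1]

# Toy data: columns could be e.g. normalized VMAF and SSIM scores. The MOS
# values are generated from an exact linear rule (2*m1 + 1*m2 + 0.5) purely
# so the fit is verifiable; real subjective data would be noisy.
scores = [[0.9, 0.8], [0.7, 0.6], [0.5, 0.55], [0.3, 0.2]]
mos = [2.0 * m1 + 1.0 * m2 + 0.5 for m1, m2 in scores]
w = fit_linear_fusion(scores, mos)
```

With real data, a kernel regressor such as SVR can capture the nonlinear metric-to-MOS mapping that a linear fit cannot.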
A Comprehensive Review of Deep Learning-based Single Image Super-resolution
Image super-resolution (SR) is a fundamental image processing task in computer
vision that increases the resolution of an image. In the last
two decades, significant progress has been made in the field of
super-resolution, especially by utilizing deep learning methods. This survey
provides a detailed review of recent progress in single-image
super-resolution from the perspective of deep learning, while also covering
the earlier classical methods used for image super-resolution. The survey
classifies the image SR methods into four categories, i.e., classical methods,
supervised learning-based methods, unsupervised learning-based methods, and
domain-specific SR methods. We also introduce the problem of SR to provide
intuition about image quality metrics, available reference datasets, and SR
challenges. Deep learning-based approaches of SR are evaluated using a
reference dataset. Some of the reviewed state-of-the-art image SR methods
include the enhanced deep SR network (EDSR), cycle-in-cycle GAN (CinCGAN),
multiscale residual network (MSRN), meta residual dense network (Meta-RDN),
recurrent back-projection network (RBPN), second-order attention network (SAN),
SR feedback network (SRFBN) and the wavelet-based residual attention network
(WRAN). Finally, this survey is concluded with future directions and trends in
SR and open problems in SR to be addressed by researchers.
Comment: 56 pages, 11 figures, 5 tables
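The abstract notes that the survey introduces image quality metrics used to evaluate SR methods against reference datasets. As a minimal, self-contained illustration of one such metric, here is PSNR for 8-bit images represented as flat pixel lists (the sample values are toy data, not tied to any benchmark in the survey):

```python
# Minimal sketch of PSNR (peak signal-to-noise ratio), a standard
# full-reference quality metric for comparing an SR output against the
# ground-truth high-resolution image. Toy data only.
import math

def psnr(reference, reconstruction, max_val=255.0):
    """PSNR in dB between two equally-sized 8-bit images (flat lists)."""
    mse = sum((r - x) ** 2
              for r, x in zip(reference, reconstruction)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

ref = [100, 120, 130, 140]
rec = [101, 119, 131, 139]  # off by one everywhere -> MSE = 1
print(round(psnr(ref, rec), 2))  # 10*log10(255^2) ~= 48.13 dB
```

Perceptually oriented SR methods (e.g. GAN-based ones covered in the survey) often trade a few dB of PSNR for sharper, more plausible textures, which is why surveys report complementary metrics as well.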