Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration
Compression plays an important role in the efficient transmission and storage
of images and videos through band-limited systems such as streaming services,
virtual reality, or video games. However, compression unavoidably introduces
artifacts and loses part of the original information, which can severely degrade
visual quality. For these reasons, quality enhancement of compressed images
has become a popular research topic. While most state-of-the-art image
restoration methods are based on convolutional neural networks,
transformer-based methods such as SwinIR show impressive performance on these
tasks.
In this paper, we explore the novel Swin Transformer V2 to improve SwinIR
for image super-resolution, and in particular for the compressed-input scenario.
Using this method we can tackle the major issues in training transformer vision
models: training instability, resolution gaps between pre-training and
fine-tuning, and data hunger. We conduct experiments on three representative
tasks: JPEG compression artifact removal, image super-resolution (classical
and lightweight), and compressed image super-resolution. Experimental results
demonstrate that our method, Swin2SR, improves the training convergence and
performance of SwinIR, and is a top-5 solution at the "AIM 2022 Challenge on
Super-Resolution of Compressed Image and Video".
Comment: European Conference on Computer Vision (ECCV 2022) Workshop
Learned Quality Enhancement via Multi-Frame Priors for HEVC Compliant Low-Delay Applications
Networked video applications, e.g., video conferencing, often suffer from
poor visual quality due to unexpected network fluctuation and limited
bandwidth. In this paper, we develop a Quality Enhancement Network
(QENet) to reduce video compression artifacts, leveraging spatial priors
generated by multi-scale convolutions and temporal priors generated by
recurrently warped temporal predictions. We integrate this QENet as a
stand-alone post-processing subsystem into a High Efficiency Video Coding
(HEVC) compliant decoder. Experimental results show that QENet achieves
state-of-the-art performance compared with the default in-loop filters in
HEVC and other deep-learning-based methods, with noticeable objective gains
in Peak Signal-to-Noise Ratio (PSNR) and subjective gains visually.
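The "warped temporal prediction" idea is to backward-warp a previously decoded frame toward the current one using per-pixel motion, so the network can fuse it as a temporal prior. Here is a minimal NumPy sketch of backward warping with nearest-neighbor sampling; the function name and `(dy, dx)` flow convention are assumptions for illustration, not the QENet implementation (a real system would typically use bilinear sampling).

```python
import numpy as np

def warp_previous_frame(prev, flow):
    """Backward-warp a previous frame toward the current frame.

    prev: (H, W) grayscale frame.
    flow: (H, W, 2) displacement field (dy, dx) mapping each current-frame
    pixel to its source location in the previous frame.
    Uses nearest-neighbor sampling with border clamping; an illustrative
    sketch only, not the paper's method.
    """
    H, W = prev.shape
    ys, xs = np.mgrid[0:H, 0:W]
    sy = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, H - 1)
    sx = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, W - 1)
    return prev[sy, sx]
```

For example, a uniform flow of one pixel in x shifts the frame content left by one column, with the border pixel repeated by the clamp.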
- …