Encoding in the Dark Grand Challenge: An Overview
A big part of the video content we consume from video providers consists of
genres featuring low-light aesthetics. Low-light sequences have special
characteristics, such as spatio-temporally varying acquisition noise and light
flickering, that make the encoding process challenging. To deal with the
spatio-temporally incoherent noise, higher bitrates are used to achieve high
objective quality. Additionally, quality assessment metrics and methods have
not been designed, trained, or tested for this type of content. This has
motivated us to stimulate research in this area and to propose a Grand
Challenge on encoding low-light video sequences. In this paper, we present an
overview of the proposed challenge and test state-of-the-art methods that will
serve as benchmarks when assessing the participants' deliverables. From this
exploration, our results show that VVC already achieves high performance
compared to simply denoising the video source prior to encoding. Moreover, the
quality of the video streams can be further improved by employing a
post-processing image enhancement method.
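The bitrate penalty of temporally incoherent noise mentioned above can be illustrated with a toy sketch (illustrative only, not part of the challenge materials; the signal model and numbers are made up): for a static scene, an inter predictor could send a near-zero residual, but independent per-frame sensor noise leaves a residual whose variance is roughly twice the noise variance, and the encoder has to spend bits on it.

```python
import random

random.seed(1)
SIGMA = 2.0  # hypothetical sensor-noise standard deviation

# Static scene: the underlying content of two consecutive frames is identical,
# so motion-compensated prediction could in principle send a zero residual.
frame = [random.uniform(0.0, 255.0) for _ in range(10_000)]

def capture(f):
    """Simulate acquisition: fresh, temporally incoherent noise per frame."""
    return [p + random.gauss(0.0, SIGMA) for p in f]

f0, f1 = capture(frame), capture(frame)

# The frame-difference residual is pure noise with variance ~ 2 * SIGMA**2;
# coding it costs bits that convey no picture information.
residual = [a - b for a, b in zip(f1, f0)]
var = sum(r * r for r in residual) / len(residual)
print(var)  # close to 2 * SIGMA**2 = 8
```

This is the mechanism behind the observation that such content needs higher bitrates to reach the same objective quality, and why denoising before (or enhancement after) encoding is worth evaluating.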
A comprehensive video codec comparison
In this paper, we compare the video codecs AV1 (version 1.0.0-2242 from August 2019), HEVC (HM and x265), AVC (x264), the exploration software JEM, which is based on HEVC, and the VVC (successor of HEVC) test model VTM (version 4.0 from February 2019) under two fair and balanced configurations: All Intra for the assessment of intra coding, and Maximum Coding Efficiency with all codecs tuned to their best coding efficiency settings. VTM achieves the highest coding efficiency in both configurations, followed by JEM and AV1. The worst coding efficiency is achieved by x264 and x265, even with the placebo preset for highest coding efficiency. AV1 has gained considerably in coding efficiency compared to previous versions and now outperforms HM with 24% BD-rate gains; VTM in turn gains 5% over AV1 in terms of BD-rate. By reporting separate numbers for the JVET and AOM test sequences, we ensure that no bias from the choice of test sequences exists. When comparing only intra coding tools, we observe that complexity increases exponentially for linearly increasing coding efficiency.
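The BD-rate figures quoted above compare rate-distortion curves at equal quality. A minimal sketch of how such a delta can be computed follows (a simplified piecewise-linear variant of Bjøntegaard's metric, not the exact JVET tooling; the function name and sample points are illustrative): interpolate log-bitrate as a function of PSNR for both codecs, average the difference over the overlapping PSNR range, and convert back to a percentage.

```python
import math

def bd_rate_pl(anchor, test, samples=100):
    """Approximate BD-rate of `test` vs `anchor`, both lists of
    (bitrate_kbps, psnr_db) pairs. Negative result means `test` needs
    less bitrate for the same quality. Piecewise-linear interpolation
    of log10(bitrate) over PSNR, not Bjontegaard's cubic fit."""
    def prep(pts):
        pts = sorted(pts, key=lambda p: p[1])
        return [p[1] for p in pts], [math.log10(p[0]) for p in pts]

    ax, ay = prep(anchor)
    tx, ty = prep(test)
    lo, hi = max(ax[0], tx[0]), min(ax[-1], tx[-1])  # overlapping PSNR range

    def interp(xs, ys, x):
        for i in range(len(xs) - 1):
            if xs[i] <= x <= xs[i + 1]:
                t = (x - xs[i]) / (xs[i + 1] - xs[i])
                return ys[i] + t * (ys[i + 1] - ys[i])
        raise ValueError("x outside curve support")

    # Average the log-rate gap over the common quality interval.
    diff = sum(interp(tx, ty, lo + (hi - lo) * k / samples)
               - interp(ax, ay, lo + (hi - lo) * k / samples)
               for k in range(samples + 1)) / (samples + 1)
    return (10 ** diff - 1) * 100.0

anchor = [(100, 30.0), (200, 33.0), (400, 36.0), (800, 39.0)]
half = [(50, 30.0), (100, 33.0), (200, 36.0), (400, 39.0)]
print(round(bd_rate_pl(anchor, half), 1))  # -50.0: half the bitrate at equal PSNR
```

Reference implementations use a cubic polynomial fit of the curves before integrating, which matters for sparse rate points; the piecewise-linear version above only conveys the idea.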
The impact of Tiles on video coding performance: a case study on HEVC and AV1 video coding standards
Optimizing Image Compression via Joint Learning with Denoising
High levels of noise usually exist in today's captured images due to the
relatively small sensors equipped in smartphone cameras, and this noise brings
extra challenges to lossy image compression algorithms. Without the capacity
to tell the difference between image details and noise, general image
compression methods allocate additional bits to explicitly store the undesired
image noise during compression and restore the unpleasant noisy image during
decompression. Based on these observations, we optimize the image compression
algorithm to be noise-aware, performing joint denoising and compression to
resolve the bit misallocation problem. The key is to transform the original
noisy images into noise-free bits by eliminating the undesired noise during
compression, so that the bits are later decompressed as clean images.
Specifically, we propose a novel two-branch, weight-sharing architecture with
plug-in feature denoisers that allows a simple and effective realization of
this goal at little computational cost. Experimental results show that our
method gains a significant improvement over the existing baseline methods on
both synthetic and real-world datasets. Our source code is available at
https://github.com/felixcheng97/DenoiseCompression.

Comment: Accepted to ECCV 2022
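The bit-misallocation argument above (a codec that cannot separate detail from noise spends extra bits storing the noise) can be sketched with a toy entropy estimate. This is illustrative only; `entropy_bits` and the synthetic signals are made up and are not the paper's model.

```python
import math
import random

def entropy_bits(samples, bins=64):
    """Empirical Shannon entropy in bits/sample from a simple histogram."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0  # guard against a constant signal
    counts = [0] * bins
    for s in samples:
        counts[min(int((s - lo) / width), bins - 1)] += 1
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

random.seed(0)
# Piecewise-constant "clean" image row: 8 equally likely levels, so the
# empirical entropy is exactly log2(8) = 3 bits per sample.
clean = [float(i // 512) for i in range(4096)]
# The same content plus i.i.d. sensor noise: the histogram spreads out and the
# entropy rises, so a codec that stores the noise verbatim needs more bits.
noisy = [s + random.gauss(0.0, 0.5) for s in clean]

print(entropy_bits(noisy) > entropy_bits(clean))  # True
```

A joint denoising-and-compression scheme aims to spend bits only on the first term, the clean content, which is exactly the "noise-free bits" idea in the abstract.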