Revisiting the Sample Adaptive Offset post-filter of VVC with Neural-Networks
The Sample Adaptive Offset (SAO) filter was introduced in HEVC to reduce
general coding and banding artifacts in the reconstructed pictures,
complementing the De-Blocking Filter (DBF), which specifically reduces
artifacts at block boundaries. The new video compression standard Versatile Video
Coding (VVC) reduces the BD-rate by about 36% at the same reconstruction
quality compared to HEVC. It implements an additional new in-loop Adaptive Loop
Filter (ALF) on top of the DBF and the SAO filter, the latter remaining
unchanged compared to HEVC. However, the relative contribution of SAO to coding
performance has become significantly smaller in VVC. In this paper, it is
proposed to revisit the SAO
filter using Neural Networks (NN). The general principles of the SAO are kept,
but the a-priori classification of SAO is replaced with a set of neural
networks that determine which reconstructed samples should be corrected and in
which proportion. Similarly to the original SAO, some parameters are determined
at the encoder side and encoded per CTU. On average, the proposed SAO improves
the BD-rate of VVC by at least 2.3% in Random Access, while the overall
complexity is kept relatively small compared to other NN-based methods.
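For reference, the a-priori edge-offset classification that the proposal replaces with neural networks can be sketched as follows. This is a minimal illustration only: the category rules follow HEVC's edge-offset mode in spirit, but the offset values and the fixed direction are hypothetical placeholders (in the standard they are chosen at the encoder and signalled per CTU).

```python
import numpy as np

def sao_edge_offset(rec, offsets, direction=(0, 1)):
    """Sketch of SAO edge-offset classification: each sample is compared
    with its two neighbours along `direction` and sorted into one of four
    edge categories; a per-category offset is then added.

    rec:      2-D reconstructed sample block
    offsets:  four offsets for edge categories 1..4 (hypothetical values)
    direction: neighbour step (horizontal here; HEVC signals one of 4)
    """
    dy, dx = direction
    out = rec.astype(np.int32).copy()
    h, w = rec.shape
    # Border samples without both neighbours are left unfiltered.
    for y in range(abs(dy), h - abs(dy)):
        for x in range(abs(dx), w - abs(dx)):
            c = int(rec[y, x])
            a = int(rec[y - dy, x - dx])
            b = int(rec[y + dy, x + dx])
            if c < a and c < b:                               # cat 1: local valley
                out[y, x] += offsets[0]
            elif (c < a and c == b) or (c == a and c < b):    # cat 2: concave edge
                out[y, x] += offsets[1]
            elif (c > a and c == b) or (c == a and c > b):    # cat 3: convex edge
                out[y, x] += offsets[2]
            elif c > a and c > b:                             # cat 4: local peak
                out[y, x] += offsets[3]
    return out
```

The paper keeps this overall structure but lets neural networks decide which samples to correct and by how much, instead of the fixed comparison rules above.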
GPU Parallelization of HEVC In-Loop Filters
In the High Efficiency Video Coding (HEVC) standard, multiple decoding modules have been designed to take advantage of parallel processing. In particular, the HEVC in-loop filters (i.e., the deblocking filter and sample adaptive offset) were conceived to be exploited by parallel architectures. However, the type of parallelism offered mostly suits the capabilities of multi-core CPUs, making it a real challenge to efficiently exploit massively parallel architectures such as Graphics Processing Units (GPUs), mainly due to the existing data dependencies between the HEVC decoding procedures. Accordingly, this paper presents a novel strategy to increase the amount of parallelism, and the resulting performance, of the HEVC in-loop filters on GPU devices. For this purpose, the proposed algorithm performs the HEVC filtering at frame level and employs intrinsic GPU vector instructions. When compared to state-of-the-art HEVC in-loop filter implementations, the proposed approach also reduces the amount of required memory transfers, further boosting the performance. Experimental results show that the proposed GPU in-loop filters deliver a significant improvement in decoding performance. For example, average frame rates of 76 frames per second (FPS) and 125 FPS for Ultra HD 4K are achieved on an embedded NVIDIA GPU for All Intra and Random Access configurations, respectively.
JND-Based Perceptual Video Coding for 4:4:4 Screen Content Data in HEVC
The JCT-VC standardized Screen Content Coding (SCC) extension in the HEVC HM
RExt + SCM reference codec offers impressive coding efficiency when compared
with HM RExt alone; however, it is not significantly perceptually
optimized. For instance, it does not include advanced HVS-based perceptual
coding methods, such as JND-based spatiotemporal masking schemes. In this
paper, we propose a novel JND-based perceptual video coding technique for HM
RExt + SCM. The proposed method is designed to further improve the compression
performance of HM RExt + SCM when applied to YCbCr 4:4:4 SC video data. In the
proposed technique, luminance masking and chrominance masking are exploited to
perceptually adjust the Quantization Step Size (QStep) at the Coding Block (CB)
level. Compared with HM RExt 16.10 + SCM 8.0, the proposed method considerably
reduces bitrates (kbps), with a maximum reduction of 48.3%. In addition,
subjective evaluations reveal that the proposed SC-PAQ technique achieves
visually lossless coding at very low bitrates.
Comment: Preprint, 2018 IEEE International Conference on Acoustics, Speech and
Signal Processing (ICASSP 2018).
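The CB-level QStep adjustment described above can be sketched as follows. The masking curve, its breakpoints, and its slopes here are illustrative assumptions, not the paper's actual JND model; the only idea carried over from the abstract is that quantization can be coarsened where luminance masking makes distortion less visible.

```python
import numpy as np

def luminance_masking_factor(mean_luma):
    """Hypothetical luminance-masking curve (illustrative, not the
    paper's model): distortion is less visible in very dark and very
    bright regions, so the factor rises above 1.0 there."""
    if mean_luma < 60:
        return 1.0 + (60.0 - mean_luma) / 150.0
    if mean_luma > 170:
        return 1.0 + (mean_luma - 170.0) / 425.0
    return 1.0  # mid-tones: no extra masking, keep the base QStep

def perceptual_qstep(base_qstep, cb_luma):
    """Scale the Coding Block's Quantization Step Size (QStep) by the
    masking factor computed from the block's mean luma."""
    return base_qstep * luminance_masking_factor(float(np.mean(cb_luma)))
```

A dark 8x8 block (mean luma 30) would thus get a QStep 20% larger than a mid-tone block, spending fewer bits where the eye is less sensitive.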
Complexity Analysis Of Next-Generation VVC Encoding and Decoding
While the next generation video compression standard, Versatile Video Coding
(VVC), provides a superior compression efficiency, its computational complexity
dramatically increases. This paper thoroughly analyzes this complexity for both
encoder and decoder of VVC Test Model 6, by quantifying the complexity
break-down for each coding tool and measuring the complexity and memory
requirements for VVC encoding/decoding. These extensive analyses are performed
for six video sequences of 720p, 1080p, and 2160p, under Low-Delay (LD),
Random-Access (RA), and All-Intra (AI) conditions (a total of 320
encodings/decodings). Results indicate that the VVC encoder and decoder are 5x
and 1.5x more complex compared to HEVC in LD, and 31x and 1.8x in AI,
respectively. Detailed analysis of coding tools reveals that in LD on average,
motion estimation tools with 53%, transformation and quantization with 22%, and
entropy coding with 7% dominate the encoding complexity. In decoding, loop
filters with 30%, motion compensation with 20%, and entropy decoding with 16%
are the most complex modules. Moreover, the required memory bandwidths for VVC
encoding and decoding are measured through memory profiling and found to be 30x
and 3x those of HEVC, respectively. The reported results and insights are a
guide for future research and implementations of energy-efficient VVC
encoders/decoders.
Comment: IEEE ICIP 202
Learned Quality Enhancement via Multi-Frame Priors for HEVC Compliant Low-Delay Applications
Networked video applications, e.g., video conferencing, often suffer from
poor visual quality due to unexpected network fluctuation and limited
bandwidth. In this paper, we have developed a Quality Enhancement Network
(QENet) to reduce video compression artifacts, leveraging spatial priors
generated by multi-scale convolutions and temporal priors derived from
recurrently warped temporal predictions. We have integrated this QENet as a
stand-alone post-processing subsystem into the High Efficiency Video Coding
(HEVC) compliant decoder. Experimental results show that our QENet achieves
state-of-the-art performance against the default in-loop filters in HEVC and
other deep-learning-based methods, with noticeable objective gains in
Peak Signal-to-Noise Ratio (PSNR) as well as subjective gains visually.
A two-stage video coding framework with both self-adaptive redundant dictionary and adaptively orthonormalized DCT basis
In this work, we propose a two-stage video coding framework, as an extension
of our previous one-stage framework in [1]. The two-stage framework consists of
two different dictionaries. Specifically, the first stage directly finds the
sparse representation of a block with a self-adaptive dictionary consisting of
all possible inter-prediction candidates, by solving an L0-norm minimization
problem using an improved orthogonal matching pursuit with embedded
orthonormalization (eOMP) algorithm. The second stage codes the residual
using a DCT dictionary adaptively orthonormalized to the subspace spanned by
the first-stage atoms. The transition from the first stage to the second stage
is determined based on both stages' quantization step sizes and a threshold. We
further propose a complete context adaptive entropy coder to efficiently code
the locations and the coefficients of chosen first stage atoms. Simulation
results show that the proposed coder significantly improves the RD performance
over our previous one-stage coder. More importantly, the two-stage coder, using
a fixed block size and inter-prediction only, outperforms the H.264 coder
(x264) and is competitive with the HEVC reference coder (HM) over a large rate
range.
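The first-stage greedy selection with embedded orthonormalization can be sketched roughly as follows. This is a simplified illustration under stated assumptions (unit-norm dictionary columns, a fixed atom budget instead of the paper's quantization-driven stopping rule); the actual eOMP algorithm differs in detail.

```python
import numpy as np

def eomp(x, D, n_atoms):
    """Sketch of orthogonal matching pursuit with embedded
    orthonormalization (eOMP): each selected atom is Gram-Schmidt
    orthonormalized against the previously selected ones, so the
    residual update reduces to a single projection.

    x: signal vector of shape (n,)
    D: dictionary with unit-norm columns, shape (n, K)
    """
    residual = x.astype(float).copy()
    chosen, Q = [], []                     # selected indices, orthonormal basis
    for _ in range(n_atoms):
        k = int(np.argmax(np.abs(D.T @ residual)))  # best-matching atom
        chosen.append(k)
        q = D[:, k].astype(float).copy()
        for u in Q:                        # orthonormalize against prior atoms
            q -= (u @ q) * u
        norm = np.linalg.norm(q)
        if norm < 1e-12:
            break                          # atom already in the selected span
        q /= norm
        Q.append(q)
        residual -= (q @ residual) * q     # project out the new direction
    return chosen, residual
```

The residual returned here is what the second stage would then code with the adaptively orthonormalized DCT dictionary.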