Reducing the complexity of a multiview H.264/AVC and HEVC hybrid architecture
With the advent of 3D displays, an efficient encoder is required to compress the video information they need. Moreover, for gradual market acceptance of this new technology, it is advisable to offer backward compatibility with existing devices. Thus, a multiview H.264/Advanced Video Coding (AVC) and High Efficiency Video Coding (HEVC) hybrid architecture was proposed during the standardization process of HEVC. However, it requires long encoding times due to the use of HEVC. To tackle this problem, this paper presents an algorithm that reduces the complexity of this hybrid architecture by reducing the encoding complexity of the HEVC views. Using Naïve-Bayes classifiers, the proposed technique exploits information gathered while encoding the H.264/AVC view to make decisions on the splitting of coding units in the HEVC side views. Given the novelty of the proposal, the only similar work found in the literature is an unoptimized version of the algorithm presented here. Experimental results show that the proposed algorithm achieves a good tradeoff between coding efficiency and complexity.
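The split decision described above can be sketched as a small Gaussian Naïve-Bayes classifier. This is an illustrative mock-up, not the paper's implementation: the feature set (bits spent on the co-located H.264/AVC macroblocks, motion-vector variance, residual energy) and all numeric values are assumptions.

```python
import numpy as np

class GaussianNaiveBayes:
    """Minimal Gaussian Naive-Bayes for a binary split/no-split decision."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mean = {c: X[y == c].mean(axis=0) for c in self.classes}
        self.var = {c: X[y == c].var(axis=0) + 1e-6 for c in self.classes}  # avoid /0
        self.prior = {c: float(np.mean(y == c)) for c in self.classes}
        return self

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        def log_posterior(c):
            # log P(c) + sum of per-feature Gaussian log-likelihoods
            ll = -0.5 * np.sum(np.log(2 * np.pi * self.var[c])
                               + (x - self.mean[c]) ** 2 / self.var[c])
            return np.log(self.prior[c]) + ll
        return max(self.classes, key=log_posterior)

# Toy features per CU, gathered from the co-located H.264/AVC view:
# [macroblock bits, motion-vector variance, residual energy] (assumed features)
X_train = np.array([
    [120.0, 4.5, 300.0],   # complex region -> split
    [110.0, 5.1, 280.0],
    [ 20.0, 0.3,  15.0],   # flat region -> keep
    [ 25.0, 0.2,  10.0],
])
y_train = np.array([1, 1, 0, 0])   # 1 = split the HEVC CU, 0 = do not split

clf = GaussianNaiveBayes().fit(X_train, y_train)

def should_split_cu(avc_features):
    """True if the classifier predicts the HEVC CU should be split."""
    return bool(clf.predict(avc_features))
```

A classifier like this is cheap enough to query once per CU without noticeably adding to encoding time, which is the point of the approach.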
Machine Learning based Efficient QT-MTT Partitioning Scheme for VVC Intra Encoders
The next-generation Versatile Video Coding (VVC) standard introduces a new
Multi-Type Tree (MTT) block partitioning structure that supports Binary-Tree
(BT) and Ternary-Tree (TT) splits in both vertical and horizontal directions.
This new approach leads to five possible splits at each block depth and thereby
improves the coding efficiency of VVC over that of the preceding High
Efficiency Video Coding (HEVC) standard, which only supports Quad-Tree (QT)
partitioning with a single split per block depth. However, MTT also brings a
considerable increase in encoder computational complexity. In this paper, a
two-stage learning-based technique is proposed to tackle the complexity
overhead of MTT in VVC intra encoders. In our scheme, the input block is first
processed by a Convolutional Neural Network (CNN) to predict its spatial
features through a vector of probabilities describing the partition at each 4x4
edge. Subsequently, a Decision Tree (DT) model leverages this vector of spatial
features to predict the most likely splits at each block. Finally, based on
this prediction, only the N most likely splits are processed by the
Rate-Distortion (RD) process of the encoder. In order to train our CNN and DT
models on a wide range of image contents, we also propose a public VVC frame
partitioning dataset based on an existing image dataset encoded with the VVC
reference software encoder. Our proposal relying on the top-3 configuration
reaches 46.6% complexity reduction for a negligible bitrate increase of 0.86%.
A top-2 configuration enables a higher complexity reduction of 69.8% for 2.57%
bitrate loss. These results demonstrate a better trade-off between VTM intra
coding efficiency and complexity reduction compared with state-of-the-art
solutions.
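The top-N pruning step can be illustrated with a minimal sketch: given a predicted probability for each candidate split mode, only the N most likely ones are handed to the rate-distortion search. The split-mode names and probability values below are illustrative; the paper's actual candidate set and model outputs differ.

```python
# Candidate VVC split modes at a node (no-split handled separately here)
SPLITS = ["QT", "BT_H", "BT_V", "TT_H", "TT_V"]

def top_n_splits(probs, n):
    """Return the n split modes with the highest predicted probability;
    only these would be evaluated by the RD search."""
    ranked = sorted(zip(SPLITS, probs), key=lambda pair: pair[1], reverse=True)
    return [name for name, _ in ranked[:n]]

# Hypothetical probability vector produced by the CNN + DT stage
probs = [0.45, 0.25, 0.15, 0.10, 0.05]
print(top_n_splits(probs, 3))  # ['QT', 'BT_H', 'BT_V']
```

The abstract's top-3 vs. top-2 results show the trade-off directly: pruning to two candidates saves more time than pruning to three, at the cost of more bitrate loss.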
Quality of Experience (QoE)-Aware Fast Coding Unit Size Selection for HEVC Intra-prediction
The exorbitant increase in the computational complexity of modern video coding standards, such as High Efficiency Video Coding (HEVC), is a compelling challenge for resource-constrained consumer electronic devices. For instance, the brute-force evaluation of all possible combinations of available coding modes and the quadtree-based coding structure in HEVC to determine the optimum set of coding parameters for a given content demands a substantial amount of computational and energy resources. Thus, the resource requirements for real-time operation of HEVC have become a contributing factor in the Quality of Experience (QoE) of the end users of emerging multimedia and future internet applications. In this context, this paper proposes a content-adaptive Coding Unit (CU) size selection algorithm for HEVC intra-prediction. The proposed algorithm builds content-specific weighted Support Vector Machine (SVM) models in real time during the encoding process to provide an early estimate of CU size for a given content, avoiding the brute-force evaluation of all possible coding mode combinations in HEVC. The experimental results demonstrate an average encoding time reduction of 52.38%, with an average Bjøntegaard Delta Bit Rate (BDBR) increase of 1.19% compared to the HM16.1 reference encoder. Furthermore, perceptual visual quality assessments conducted with the Video Quality Metric (VQM) show minimal impact on the visual quality of the reconstructed videos compared to state-of-the-art approaches.
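A weighted linear SVM of the kind described can be sketched with a simple hinge-loss subgradient trainer, where per-sample weights would reflect the RD cost of misclassifying a CU. This is a toy illustration under assumed features (texture variance and mean gradient magnitude), not the paper's online training scheme.

```python
import numpy as np

def train_weighted_linear_svm(X, y, weights, epochs=500, lam=0.01, lr=0.05):
    """Weighted linear SVM via hinge-loss subgradient descent.
    `weights` lets costly-to-misclassify CUs pull the hyperplane harder."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi, wi in zip(X, y, weights):
            if yi * (xi @ w + b) < 1:      # sample inside the margin
                w += lr * (wi * yi * xi - lam * w)
                b += lr * wi * yi
            else:                          # only regularization applies
                w -= lr * lam * w
    return w, b

def predict_cu_split(w, b, features):
    """+1 -> split the CU further, -1 -> keep the current CU size."""
    return 1 if features @ w + b >= 0 else -1

# Assumed per-CU features: [texture variance, mean gradient magnitude]
X = np.array([[10.0, 10.0], [9.0, 11.0], [0.1, 0.2], [0.2, 0.1]])
y = np.array([1, 1, -1, -1])               # +1 = split, -1 = keep
weights = np.array([1.0, 1.0, 1.0, 1.0])   # uniform here; RD-cost-based in practice

w, b = train_weighted_linear_svm(X, y, weights)
```

Because both training and prediction here are a handful of vector operations, a model like this can plausibly be rebuilt on the fly during encoding, which is what "content-specific models in real time" implies.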
Reducing Complexity on Coding Unit Partitioning in Video Coding: A Review
In this article, we present a survey of low-complexity video coding focused on coding unit (CU) partitioning, with the aim of helping researchers understand the foundations of video coding and fast CU partition algorithms. Firstly, we introduce video coding technologies by explaining the trending standards and reference models: High Efficiency Video Coding (HEVC), the Joint Exploration Test Model (JEM), and VVC, which introduce the novel quadtree (QT), quadtree plus binary tree (QTBT), and quadtree plus multi-type tree (QTMT) block partitioning structures, respectively, each at considerable computational cost. Secondly, we present a comprehensive explanation of the time-consuming CU partitioning process, especially for researchers who are not familiar with CU partitioning. The newer the video coding standard, the more flexible its partition structures and the higher its computational complexity. Then, we provide a deep and comprehensive survey of recent and state-of-the-art research. Finally, we include a discussion of the advantages and disadvantages of heuristic-based and learning-based approaches, so that readers can quickly assess the performance of existing algorithms and their limitations. To our knowledge, this is the first comprehensive survey to provide sufficient information about fast CU partitioning in HEVC, JEM, and VVC.
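The growth in partitioning flexibility across the three standards named above can be summarized in a small sketch (simplified; real split availability also depends on block size, depth, and slice type):

```python
# Simplified view of the split types each standard's partitioning allows
# (actual availability also depends on block size, depth, and slice type).
ALLOWED_SPLITS = {
    "HEVC (QT)":  {"QT"},
    "JEM (QTBT)": {"QT", "BT_H", "BT_V"},
    "VVC (QTMT)": {"QT", "BT_H", "BT_V", "TT_H", "TT_V"},
}

def splits_per_standard():
    """Number of split types to consider at a node, per standard."""
    return {std: len(s) for std, s in ALLOWED_SPLITS.items()}

print(splits_per_standard())
# {'HEVC (QT)': 1, 'JEM (QTBT)': 3, 'VVC (QTMT)': 5}
```

The widening candidate set at every tree node is what drives the combinatorial growth of the RD search, and hence the survey's focus on fast CU partitioning.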
Efficient VVC Intra Prediction Based on Deep Feature Fusion and Probability Estimation
The ever-growing multimedia traffic has underscored the importance of
effective multimedia codecs. Among them, the up-to-date lossy video coding
standard, Versatile Video Coding (VVC), has been attracting the attention of
the video coding community. However, the gain of VVC is achieved at the cost of
significant encoding complexity, which brings the need for a fast encoder
with comparable Rate Distortion (RD) performance. In this paper, we propose to
optimize the VVC complexity at intra-frame prediction, with a two-stage
framework of deep feature fusion and probability estimation. At the first
stage, we employ a deep convolutional network to extract the spatial-temporal
neighboring coding features. Then we fuse all reference features obtained by
different convolutional kernels to determine an optimal intra coding depth. At
the second stage, we employ a probability-based model and the spatial-temporal
coherence to select the candidate partition modes within the optimal coding
depth. Finally, these selected depths and partitions are executed whilst
unnecessary computations are excluded. Experimental results on standard test
sequences demonstrate the superiority of the proposed method, especially for
High Definition (HD) and Ultra-HD (UHD) video sequences.
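The second-stage idea, using spatial-temporal coherence to shortlist partition modes, can be sketched as a simple frequency-based probability estimate over the modes chosen by neighboring blocks. This is a hypothetical simplification of the paper's probability model:

```python
from collections import Counter

def candidate_modes(neighbor_modes, threshold=0.2):
    """Estimate each partition mode's probability from the modes chosen by
    spatially/temporally neighboring blocks; keep only the likely ones."""
    counts = Counter(neighbor_modes)
    total = sum(counts.values())
    return sorted(m for m, c in counts.items() if c / total >= threshold)

# Modes of e.g. the left, above, above-right and co-located (previous frame) blocks
neighbors = ["QT", "QT", "BT_H", "QT", "TT_V"]
print(candidate_modes(neighbors))                 # ['BT_H', 'QT', 'TT_V']
print(candidate_modes(neighbors, threshold=0.5))  # ['QT']
```

Raising the threshold prunes more aggressively, trading RD performance for speed, which mirrors the depth/partition filtering the abstract describes.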
A Bayesian Approach to Block Structure Inference in AV1-based Multi-rate Video Encoding
Due to differences in frame structure, existing multi-rate video encoding
algorithms cannot be directly adapted to encoders utilizing special reference
frames such as AV1 without introducing substantial rate-distortion loss. To
tackle this problem, we propose a novel Bayesian block structure inference
model inspired by a modification to an HEVC-based algorithm. It estimates the
posterior probability distributions of block partitioning, and adapts early
terminations in the RDO procedure accordingly. Experimental results show that
the proposed method provides flexibility for controlling the tradeoff between
speed and coding efficiency, and can achieve an average time saving of 36.1%
(up to 50.6%) with negligible bitrate cost. Published in the IEEE Data Compression Conference, 201
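The core Bayesian update can be illustrated directly: the posterior probability that a block splits at the current rate, given the block structure observed in the reference encoding, follows from Bayes' rule, and a low split posterior justifies skipping the split branch of the RDO search. The prior, likelihoods, and threshold below are made-up values:

```python
def posterior_split(prior_split, lik_ref_given_split, lik_ref_given_keep):
    """P(split | reference structure) by Bayes' rule."""
    num = lik_ref_given_split * prior_split
    den = num + lik_ref_given_keep * (1.0 - prior_split)
    return num / den

def skip_split_rdo(prior_split, lik_s, lik_k, threshold=0.1):
    """Early-terminate: skip the split branch of RDO if the posterior is low."""
    return posterior_split(prior_split, lik_s, lik_k) < threshold

# Made-up numbers: the co-located block in the reference encoding was NOT split.
prior = 0.3        # prior P(split) at this rate point
lik_split = 0.2    # P(ref not split | block splits here)
lik_keep = 0.9     # P(ref not split | block kept whole here)
print(round(posterior_split(prior, lik_split, lik_keep), 3))  # 0.087
print(skip_split_rdo(prior, lik_split, lik_keep))             # True
```

Tuning the termination threshold is what gives the speed/efficiency control knob the abstract mentions: a higher threshold terminates more branches and saves more time at some RD cost.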