
    Fast Inter-Prediction based on Decision Trees for AV1 encoding

    The AOMedia Video 1 (AV1) standard can achieve considerable compression efficiency thanks to its many advanced tools and improvements, such as advanced inter-prediction modes. However, these come at the cost of high encoder computational complexity, which may limit the benefits of the standard in practical applications. This paper shows that not all sequences benefit from using all such modes, which indicates that a number of encoder optimisations can be introduced to speed up AV1 encoding. A method based on decision trees is proposed to selectively decide whether to test all inter modes. Appropriate features are extracted and used to perform the decision for each block. Experimental results show that the proposed method can reduce the encoding time on average by 43.4% with limited impact on coding efficiency.
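
    The per-block decision described above lends itself to a short illustration. The sketch below, in Python with scikit-learn, trains a shallow decision tree that predicts whether testing all advanced inter modes is worthwhile for a block. The feature names, training data, and `should_test_all_inter_modes` helper are illustrative assumptions, not the features or classifier actually used in the paper.

    ```python
    # Minimal sketch of a per-block mode-pruning classifier, in the spirit of
    # the paper's decision-tree approach. All features and labels below are
    # hypothetical placeholders, not the paper's actual training data.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical per-block features: pixel variance, SAD of the basic inter
    # modes already tested, and block size.
    X_train = np.array([
        [120.5,  850.0, 64],
        [ 10.2,  120.0, 16],
        [300.8, 1500.0, 32],
        [  5.1,   90.0,  8],
    ])
    # Label: 1 = testing all advanced inter modes paid off, 0 = it did not.
    y_train = np.array([1, 0, 1, 0])

    clf = DecisionTreeClassifier(max_depth=3)  # shallow tree keeps the check cheap
    clf.fit(X_train, y_train)

    def should_test_all_inter_modes(block_features):
        """Return True if the tree predicts the advanced modes are worth testing."""
        return bool(clf.predict(np.asarray(block_features).reshape(1, -1))[0])

    # Inside the encoder's mode-decision loop, the check could look like:
    features = [95.3, 700.0, 32]  # features extracted for the current block
    if should_test_all_inter_modes(features):
        print("evaluate the full set of advanced inter-prediction modes")
    else:
        print("restrict the search to the basic modes and save encoding time")
    ```

    A shallow tree is a natural fit here: its inference cost is a handful of comparisons per block, so the classifier itself adds negligible overhead to the encoder.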

    DeepQTMT: A Deep Learning Approach for Fast QTMT-based CU Partition of Intra-mode VVC

    Versatile Video Coding (VVC), as the latest standard, significantly improves the coding efficiency over its predecessor, High Efficiency Video Coding (HEVC), but at the expense of sharply increased complexity. In VVC, the quad-tree plus multi-type tree (QTMT) structure of coding unit (CU) partition accounts for over 97% of the encoding time, due to the brute-force search for recursive rate-distortion (RD) optimization. Instead of the brute-force QTMT search, this paper proposes a deep learning approach to predict the QTMT-based CU partition, drastically accelerating the encoding process of intra-mode VVC. First, we establish a large-scale database containing sufficient CU partition patterns with diverse video content, which can facilitate data-driven VVC complexity reduction. Next, we propose a multi-stage exit CNN (MSE-CNN) model with an early-exit mechanism to determine the CU partition, in accord with the flexible QTMT structure at multiple stages. Then, we design an adaptive loss function for training the MSE-CNN model, jointly accounting for the variable number of split modes and the objective of minimizing RD cost. Finally, a multi-threshold decision scheme is developed, achieving a desirable trade-off between complexity and RD performance. Experimental results demonstrate that our approach can reduce the encoding time of VVC by 44.65%-66.88% with a negligible Bjøntegaard delta bit-rate (BD-BR) increase of 1.322%-3.188%, which significantly outperforms other state-of-the-art approaches.

    Comment: 14 pages, 10 figures, 7 tables. Published in IEEE Transactions on Image Processing (TIP), 202
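
    The combination of staged prediction, early exit, and confidence thresholds can be sketched compactly. The PyTorch code below shows the general pattern: each stage scores the candidate split modes for a CU, and a per-stage threshold decides whether to commit to the most confident mode or continue to the next stage. The `StagePredictor` architecture, the stage count, the number of split modes, and the threshold values are all illustrative assumptions; they are not the actual MSE-CNN design, in which features also flow between stages.

    ```python
    # Hedged sketch of a multi-stage, early-exit partition decision in the
    # spirit of MSE-CNN. Module names, stage count, and thresholds are
    # illustrative assumptions, not the paper's architecture.
    import torch
    import torch.nn as nn

    class StagePredictor(nn.Module):
        """One stage: a small conv head that scores the candidate split modes."""
        def __init__(self, in_ch, num_modes):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(in_ch, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(16, num_modes),
            )
        def forward(self, x):
            return self.body(x)

    def predict_partition(cu, stages, thresholds):
        """Walk the stages; exit as soon as one split mode is confident enough."""
        for stage, thr in zip(stages, thresholds):
            probs = torch.softmax(stage(cu), dim=1)
            conf, mode = probs.max(dim=1)
            if conf.item() >= thr:      # early exit: skip the deeper stages
                return mode.item()
        return mode.item()              # fall back to the last stage's choice

    stages = [StagePredictor(1, 6) for _ in range(3)]  # e.g. 6 QTMT split modes
    thresholds = [0.9, 0.8, 0.0]       # looser thresholds at deeper stages
    cu = torch.randn(1, 1, 32, 32)     # one luma CU as a toy input
    print(predict_partition(cu, stages, thresholds))
    ```

    The thresholds are the knob behind the complexity/RD trade-off reported in the abstract: raising them forces more CUs through deeper stages (better RD, slower encoding), while lowering them makes more CUs exit early (faster encoding, slightly higher BD-BR).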