
    Increased compression efficiency of AVC and HEVC CABAC by precise statistics estimation

    The paper presents an Improved Adaptive Arithmetic Coding algorithm for application in future video compression technology. The proposed solution is based on the Context-based Adaptive Binary Arithmetic Coding (CABAC) technique and uses the authors’ mechanism of symbol probability estimation, which exploits the Context-Tree Weighting (CTW) technique. The paper proposes a version of the algorithm that allows an arbitrary selection of the depth of the context trees when the algorithm is activated within the AVC or HEVC video encoders. The algorithm has been tested in terms of coding efficiency and computational complexity. Results showed that, depending on the depth of the context trees, a bitrate reduction of 0.1% to 0.86% is achieved when the algorithm is used in the HEVC video encoder, and a compression gain of 0.4% to 2.3% in the case of AVC. The new solution increases the complexity of the entropy encoder itself; however, this does not translate into an increase in the complexity of the whole video encoder.
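    The abstract does not include the authors' implementation; as a rough illustration of the CTW technique it builds on, the sketch below implements a sequential binary Context-Tree Weighting estimator with a Krichevsky–Trofimov (KT) estimator at each node. The class names, node layout, and depth handling are this sketch's own assumptions, not the paper's code:

```python
import math

class CTWNode:
    """Node of the binary context tree: symbol counts plus log-domain
    KT-estimator probability (log_pe) and weighted probability (log_pw)."""
    def __init__(self):
        self.counts = [0, 0]
        self.log_pe = 0.0
        self.log_pw = 0.0
        self.children = {}

def _log_mix(a, b):
    # log(0.5*exp(a) + 0.5*exp(b)), computed in a numerically stable way
    m = max(a, b)
    return m + math.log(0.5 * math.exp(a - m) + 0.5 * math.exp(b - m))

class CTW:
    """Sequential Context-Tree Weighting estimator over binary symbols."""
    def __init__(self, depth):
        self.depth = depth
        self.root = CTWNode()

    def _path(self, context):
        # keys[i] selects the child of the depth-i node (most recent bit first)
        keys = list(reversed(context[-self.depth:]))
        nodes = [self.root]
        for k in keys:
            nodes.append(nodes[-1].children.setdefault(k, CTWNode()))
        return nodes, keys

    def _root_log_pw_if(self, nodes, keys, bit):
        """Hypothetical root log-Pw if `bit` were observed next."""
        log_pw_child = 0.0
        for i in range(len(nodes) - 1, -1, -1):
            node = nodes[i]
            a, n = node.counts[bit], sum(node.counts)
            log_pe = node.log_pe + math.log((a + 0.5) / (n + 1.0))  # KT update
            if i == len(nodes) - 1:
                log_pw = log_pe                       # leaf: Pw = Pe
            else:
                sibling = node.children.get(1 - keys[i])
                log_kids = log_pw_child + (sibling.log_pw if sibling else 0.0)
                log_pw = _log_mix(log_pe, log_kids)   # Pw = (Pe + Pw0*Pw1)/2
            log_pw_child = log_pw
        return log_pw_child

    def predict_one(self, context):
        """P(next bit == 1 | context), where context is a list of past bits."""
        nodes, keys = self._path(context)
        return math.exp(self._root_log_pw_if(nodes, keys, 1) - self.root.log_pw)

    def update(self, context, bit):
        """Commit an observed bit: update counts, Pe, and Pw along the path."""
        nodes, keys = self._path(context)
        log_pw_child = 0.0
        for i in range(len(nodes) - 1, -1, -1):
            node = nodes[i]
            a, n = node.counts[bit], sum(node.counts)
            node.log_pe += math.log((a + 0.5) / (n + 1.0))
            node.counts[bit] += 1
            if i == len(nodes) - 1:
                node.log_pw = node.log_pe
            else:
                sibling = node.children.get(1 - keys[i])
                log_kids = log_pw_child + (sibling.log_pw if sibling else 0.0)
                node.log_pw = _log_mix(node.log_pe, log_kids)
            log_pw_child = node.log_pw
```

    In a CABAC-style encoder the value returned by `predict_one` would drive the binary arithmetic coder for the current bin; the tree depth plays the same role as the arbitrary context-tree depth the paper allows to be selected.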

    Analysis of the Limitations of Further Improvement of the Efficiency of VVC-CABAC

    Hybrid video compression plays an invaluable role in digital video transmission and storage services and systems. It performs a several-hundred-fold reduction in the amount of video data, which makes these systems much more efficient. An important element of hybrid video compression is entropy coding of the data. The state of the art in this field is the newest variant of the Context-based Adaptive Binary Arithmetic Coding entropy compression algorithm, which recently became part of the new Versatile Video Coding technology. This work is part of research currently underway to further improve the Context-based Adaptive Binary Arithmetic Coding technique. The paper analyzes the potential for further improvement of the Context-based Adaptive Binary Arithmetic Coding technique through more accurate calculation of the probabilities of data symbols. The technique calculates those probabilities using the idea of the two-parameter hypothesis. For the purposes of the analysis presented in this paper, an extension of this idea to three- and four-parameter hypotheses was proposed. In addition, the paper shows the importance of proper calibration of the method's parameter values for the efficiency of data compression. Results of experiments show that, for the variants of the algorithm improvement considered in the paper, the possible efficiency gain is at the level of 0.11% and 0.167% for the three- and four-parameter hypotheses, respectively.
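    In VVC's CABAC, the two-parameter hypothesis is realized by tracking two exponentially decaying probability estimates with different adaptation rates and averaging them; the three- and four-parameter hypotheses studied here add further trackers. The sketch below illustrates only this averaging idea in floating point; the real codec works on integer states with context-dependent rates, and the class name and shift values here are illustrative assumptions:

```python
class MultiRateEstimator:
    """Average of K exponentially decaying probability trackers, each with
    its own adaptation window 2**shift (a 'K-parameter hypothesis')."""
    def __init__(self, shifts=(4, 7)):
        self.shifts = tuple(shifts)
        self.probs = [0.5] * len(self.shifts)   # per-tracker P(bin == 1)

    def probability(self):
        # combined estimate that would drive the arithmetic coder
        return sum(self.probs) / len(self.probs)

    def update(self, bin_value):
        # each tracker moves toward the observed bin at its own rate:
        # a small shift adapts quickly, a large shift averages over a
        # longer history
        target = 1.0 if bin_value else 0.0
        for i, s in enumerate(self.shifts):
            self.probs[i] += (target - self.probs[i]) / (1 << s)
```

    A three-parameter variant of this sketch is then simply `MultiRateEstimator(shifts=(3, 5, 8))` (the shift values being hypothetical, since the paper's calibrated parameters are not given in the abstract); the calibration question the paper raises is exactly the choice of these rates and of how the trackers are combined.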