
    Vector Quantization Video Encoder Using Hierarchical Cache Memory Scheme

    A system compresses image blocks via successive hierarchical stages and motion encoders that employ caches updated by stack replacement algorithms. Initially, a background detector compares the present image block with the corresponding previously encoded image block and, if they are similar, terminates the encoding procedure by setting a flag bit. Otherwise, the image block is decomposed into smaller present image subblocks, each of which is compared with a corresponding previously encoded image subblock of comparable size. When a present image subblock is similar to its corresponding previously encoded subblock, the procedure is terminated by setting a flag bit. Otherwise, the present image subblock is forwarded to a motion encoder, where it is compared with displaced image subblocks, formed by displacing previously encoded image subblocks by motion vectors stored in a cache, to derive a first distortion vector. When the first distortion vector is below a first threshold TM, the procedure is terminated and the present image subblock is encoded by setting a flag bit and a cache index corresponding to the first distortion vector. Otherwise, the present image subblock is passed to a block matching encoder, where it is compared with other previously encoded image subblocks to derive a second distortion vector. When the second distortion vector is below a second threshold Tm, the procedure is terminated by setting a flag bit, generating the second distortion vector, and updating the cache. Georgia Tech Research Corporation
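
    For readers unfamiliar with this class of encoder, the following Python sketch illustrates the decision cascade described above, with the block/subblock hierarchy collapsed into a single routine. The SAD distortion measure, the thresholds T_bg, T_M and T_m, the cache size, and the LRU-style stack update are illustrative assumptions, not the patented design.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences, used here as the distortion measure."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def encode_subblock(cur, prev, y, x, size, mv_cache, T_bg, T_M, T_m, search=8):
    """One pass of the hierarchical decision cascade for a subblock at (y, x)."""
    block = cur[y:y+size, x:x+size]

    # Stage 1: background detector -- is the co-located previous block similar?
    if sad(block, prev[y:y+size, x:x+size]) < T_bg:
        return ("background",)

    # Stage 2: cache-based motion encoder -- try the motion vectors in the cache.
    for idx, (dy, dx) in enumerate(mv_cache):
        yy, xx = y + dy, x + dx
        if 0 <= yy <= prev.shape[0] - size and 0 <= xx <= prev.shape[1] - size:
            if sad(block, prev[yy:yy+size, xx:xx+size]) < T_M:
                mv_cache.insert(0, mv_cache.pop(idx))   # stack-style cache update
                return ("cache_hit", idx)

    # Stage 3: block-matching encoder -- small full search around the block.
    best_mv, best_d = None, float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy <= prev.shape[0] - size and 0 <= xx <= prev.shape[1] - size:
                d = sad(block, prev[yy:yy+size, xx:xx+size])
                if d < best_d:
                    best_mv, best_d = (dy, dx), d
    if best_d < T_m:
        mv_cache.insert(0, best_mv)    # new vector pushed onto the cache
        del mv_cache[8:]               # keep a small, fixed-size cache
        return ("block_match", best_mv)

    return ("intra",)                  # otherwise fall back to intra/VQ coding
```

    Each stage terminates as early as possible, so the expensive full block-matching search runs only when both the background test and the cached motion vectors fail.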

    Adaptive delivery of immersive 3D multi-view video over the Internet

    The increase in Internet bandwidth and the developments in 3D video technology have paved the way for the delivery of 3D Multi-View Video (MVV) over the Internet. However, large amounts of data and dynamic network conditions result in frequent network congestion, which may prevent video packets from being delivered on time. As a consequence, the 3D video experience may well be degraded unless content-aware precautionary mechanisms and adaptation methods are deployed. In this work, a novel adaptive MVV streaming method is introduced that addresses future-generation immersive 3D MVV experiences on multi-view displays. When the user experiences network congestion, making adaptation necessary, the rate-distortion-optimal set of views pre-determined by the server is truncated from the delivered MVV streams. To maintain a high Quality of Experience (QoE) during frequent network congestion, the proposed method calculates low-overhead additional metadata that is delivered to the client. The proposed adaptive 3D MVV streaming solution is tested using the MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH) standard. Extensive objective and subjective evaluations show that the proposed method provides significant quality enhancement under adverse network conditions.
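
    As a rough illustration of the client-side adaptation step, the sketch below keeps the largest prefix of the server's rate-distortion-ordered view list that fits the currently measured throughput. The priority order, per-view bitrates, and throughput figure are hypothetical values, not the paper's test conditions.

```python
def select_views(view_priority, view_bitrate_kbps, throughput_kbps):
    """Pick the largest rate-distortion-ordered prefix of views that fits the
    measured throughput; the remaining views are truncated from the request."""
    kept, budget = [], throughput_kbps
    for view in view_priority:                  # server-computed R-D order
        if view_bitrate_kbps[view] <= budget:
            kept.append(view)
            budget -= view_bitrate_kbps[view]
        else:
            break                               # truncate the rest of the view set
    return kept

# Example: 5-view content with priority metadata delivered alongside the manifest.
priority = [2, 1, 3, 0, 4]                                   # assumed R-D order
bitrates = {0: 1800, 1: 2000, 2: 2500, 3: 2000, 4: 1800}     # kbps per view
print(select_views(priority, bitrates, throughput_kbps=6000))   # -> [2, 1]
```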

    Reducing Complexity on Coding Unit Partitioning in Video Coding: A Review

    In this article, we present a survey of low-complexity video coding at the coding unit (CU) partitioning level, with the aim of helping researchers understand the foundations of video coding and fast CU partition algorithms. First, we introduce video coding technologies by explaining the current standards and reference models: High Efficiency Video Coding (HEVC), the Joint Exploration Test Model (JEM), and Versatile Video Coding (VVC), which introduce quadtree (QT), quadtree plus binary tree (QTBT), and quadtree plus multi-type tree (QTMT) block partitioning, respectively, at considerable computational cost. Second, we give a comprehensive explanation of the time-consuming CU partitioning process, especially for researchers who are not familiar with it; the newer the video coding standard, the more flexible the partition structures and the higher the computational complexity. We then provide a deep and comprehensive survey of recent state-of-the-art research. Finally, we include a discussion of the advantages and disadvantages of heuristic-based and learning-based approaches, so that readers can quickly assess the performance of existing algorithms and their limitations. To our knowledge, this is the first comprehensive survey to provide sufficient information about fast CU partitioning in HEVC, JEM, and VVC.
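
    To make the complexity argument concrete, the following sketch shows the exhaustive quadtree CU search that all three reference models perform in some form: every block is coded both whole and as four recursively searched quadrants, and the cheaper option wins. The toy rate-distortion cost (block variance plus a flat rate penalty) is an assumption for illustration only.

```python
import numpy as np

def best_partition(block, depth, max_depth, rd_cost):
    """Exhaustive quadtree CU decision: compare the cost of coding the block
    whole against the summed cost of its four recursively searched quadrants."""
    no_split = rd_cost(block, depth)
    if depth == max_depth or block.shape[0] <= 8:
        return no_split, "leaf"
    h, w = block.shape[0] // 2, block.shape[1] // 2
    quads = [block[:h, :w], block[:h, w:], block[h:, :w], block[h:, w:]]
    split = sum(best_partition(q, depth + 1, max_depth, rd_cost)[0] for q in quads)
    return (split, "split") if split < no_split else (no_split, "leaf")

# Toy RD cost: block variance stands in for distortion, 50 for the rate term.
cost = lambda b, d: float(np.var(b)) + 50.0
cu = np.random.randint(0, 256, (64, 64))
print(best_partition(cu, depth=0, max_depth=3, rd_cost=cost))
```

    Because QTBT and QTMT add binary and ternary splits on top of this recursion, the number of candidate partitions, and therefore the encoding time, grows quickly, which is precisely what the fast algorithms surveyed here try to avoid.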

    CANF-VC++: Enhancing Conditional Augmented Normalizing Flows for Video Compression with Advanced Techniques

    Video has become the predominant medium for information dissemination, driving the need for efficient video codecs. Recent advancements in learned video compression have shown promising results, surpassing traditional codecs in terms of coding efficiency. However, challenges remain in integrating fragmented techniques and incorporating new tools into existing codecs. In this paper, we comprehensively review the state-of-the-art CANF-VC codec and propose CANF-VC++, an enhanced version that addresses these challenges. We systematically explore architecture design, reference frame type, training procedure, and entropy coding efficiency, leading to substantial coding improvements. CANF-VC++ achieves significant Bjøntegaard-Delta rate savings on the conventional datasets UVG, HEVC Class B, and MCL-JCV, outperforming the baseline CANF-VC and even the H.266 reference software VTM. Our work demonstrates the potential of integrating advancements in video compression and serves as inspiration for future research in the field.
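
    The Bjøntegaard-Delta rate quoted in such comparisons can be reproduced with the standard procedure sketched below: a cubic fit of log-rate over PSNR for each codec, integrated over the overlapping quality range. The rate-distortion points in the example are hypothetical and unrelated to the CANF-VC++ results.

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjøntegaard-Delta rate: average bitrate difference (%) of the test codec
    relative to the anchor at equal quality, from four R-D points per codec."""
    lr_a, lr_t = np.log(rate_anchor), np.log(rate_test)
    # Cubic fit of log-rate as a function of PSNR for each codec.
    p_a = np.polyfit(psnr_anchor, lr_a, 3)
    p_t = np.polyfit(psnr_test, lr_t, 3)
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    # Integrate both fits over the overlapping quality interval.
    int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
    avg_diff = (int_t - int_a) / (hi - lo)
    return (np.exp(avg_diff) - 1) * 100        # negative = bitrate saving

# Hypothetical R-D points (kbps, dB) for an anchor and a test codec.
print(bd_rate([1000, 2000, 4000, 8000], [34.0, 36.5, 39.0, 41.5],
              [ 900, 1800, 3600, 7200], [34.2, 36.8, 39.3, 41.8]))
```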

    Quality Scalability Compression on Single-Loop Solution in HEVC

    This paper proposes a quality scalable extension design for the upcoming high efficiency video coding (HEVC) standard. In the proposed design, the single-loop decoder solution is extended to the proposed scalable scenario. A novel interlayer intra/inter prediction is added to reduce the number of bits required by exploiting the correlation between coding layers. The experimental results indicate that an average Bjøntegaard delta rate decrease of 20.50% can be gained compared with simulcast encoding. The proposed technique achieved a 47.98% Bjøntegaard delta rate reduction compared with the scalable video coding extension of H.264/AVC. Consequently, these significant rate savings confirm that the proposed method achieves better performance.
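
    A minimal sketch of the interlayer prediction idea, assuming quality (SNR) scalability at identical resolution: the enhancement-layer block chooses between the co-located base-layer reconstruction and its own temporal prediction, and only the residual is coded. The SAD-based mode choice and the quantizer step are illustrative assumptions, not the paper's single-loop design.

```python
import numpy as np

def encode_enhancement_block(orig, base_recon, temporal_pred, quantize):
    """Choose the cheaper of two predictors for an enhancement-layer block and
    code only the quantised residual against it."""
    candidates = {"interlayer": base_recon, "temporal": temporal_pred}
    mode, pred = min(candidates.items(),
                     key=lambda kv: np.abs(orig.astype(np.int32)
                                           - kv[1].astype(np.int32)).sum())
    residual = quantize(orig.astype(np.int32) - pred.astype(np.int32))
    return mode, residual

# Toy usage with a coarse uniform quantiser (step 8, hypothetical).
q = lambda r: np.round(r / 8).astype(np.int32)
blk = np.random.randint(0, 256, (16, 16), dtype=np.uint8)
base = np.clip(blk.astype(np.int32) + 3, 0, 255).astype(np.uint8)   # close to blk
temp = np.random.randint(0, 256, (16, 16), dtype=np.uint8)          # unrelated
print(encode_enhancement_block(blk, base, temp, q)[0])   # typically "interlayer"
```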

    Shape representation and coding of visual objects in multimedia applications — An overview

    Emerging multimedia applications have created the need for new functionalities in digital communications. Whereas existing compression standards only deal with the audio-visual scene at a frame level, it is now necessary to handle individual objects separately, thus allowing scalable transmission as well as interactive scene recomposition by the receiver. The future MPEG-4 standard aims at providing compression tools addressing these functionalities. Unlike existing frame-based standards, the corresponding coding schemes need to encode shape information explicitly. This paper reviews existing solutions to the problem of shape representation and coding. Region and contour coding techniques are presented and their performance is discussed, considering coding efficiency and rate-distortion control capability, as well as flexibility to application requirements such as progressive transmission, low-delay coding, and error robustness.
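
    As one concrete member of the contour-coding family reviewed in such overviews, a Freeman chain code represents an object boundary as a sequence of direction symbols. The sketch below is a generic illustration, not the shape coder ultimately adopted in MPEG-4 (which is based on context-based arithmetic encoding of the binary alpha plane).

```python
# 8-connected Freeman chain code: each step along the contour is encoded as a
# direction index 0..7 relative to the previous boundary point.
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def chain_code(contour):
    """Encode an ordered list of (row, col) boundary points as direction symbols."""
    code = []
    for (r0, c0), (r1, c1) in zip(contour, contour[1:]):
        code.append(DIRS.index((r1 - r0, c1 - c0)))
    return code

# A small square contour traced clockwise (hypothetical example).
square = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0), (0, 0)]
print(chain_code(square))   # -> [0, 0, 6, 6, 4, 4, 2, 2]
```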

    Bayesian adaptive algorithm for fast coding unit decision in the High Efficiency Video Coding (HEVC) standard

    The latest High Efficiency Video Coding standard (HEVC) provides a set of new coding tools to achieve a significantly higher coding efficiency than previous standards. In this standard, the pixels are first grouped into Coding Units (CU), then Prediction Units (PU), and finally Transform Units (TU). All these coding levels are organized into a quadtree-shaped arrangement that allows highly flexible data representation; however, they involve a very high computational complexity. In this paper, we propose an effective early CU depth decision algorithm to reduce the encoder complexity. Our proposal is based on a hierarchical approach, in which a hypothesis test is designed to make a decision at every CU depth, where the algorithm either produces an early termination or decides to evaluate the subsequent depth level. Moreover, the proposed method is able to adaptively estimate the parameters that define each hypothesis test, so that it adapts its behavior to the variable contents of the video sequences. The proposed method has been extensively tested, and the experimental results show that our proposal outperforms several state-of-the-art methods, achieving a significant reduction of the computational complexity (36.5% and 38.2% average reductions in coding time for two different encoder configurations) in exchange for very slight losses in coding performance (1.7% and 0.8% average bit rate increments). This work has been partially supported by the National Grant TEC2014-53390-P of the Spanish Ministry of Economy and Competitiveness.
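
    A minimal sketch of what such an adaptive hypothesis test can look like, assuming a one-dimensional RD-cost feature and Gaussian class-conditional models whose parameters are re-estimated from previously coded CUs; the actual features, models, and decision rules used in the paper differ.

```python
import math

class EarlyCuDecision:
    """Per-depth two-class Gaussian likelihood-ratio test on an RD-cost feature:
    decide 'terminate' (the CU is not split) when the posterior odds favour the
    non-split class, otherwise evaluate the next depth level.  Class statistics
    are re-estimated online from previously coded CUs (the adaptive part)."""

    def __init__(self):
        self.stats = {0: [], 1: []}        # feature samples per class (1 = split)

    def update(self, feature, was_split):
        self.stats[int(was_split)].append(feature)

    def _gauss(self, x, samples):
        n = len(samples)
        mu = sum(samples) / n
        var = sum((s - mu) ** 2 for s in samples) / n + 1e-6
        return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    def terminate(self, feature, prior_split=0.5, min_samples=30):
        s0, s1 = self.stats[0], self.stats[1]
        if len(s0) < min_samples or len(s1) < min_samples:
            return False                   # not enough data: fall back to full search
        odds = ((1 - prior_split) * self._gauss(feature, s0)) / \
               (prior_split * self._gauss(feature, s1) + 1e-12)
        return odds > 1.0                  # MAP decision: stop splitting early
```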

    Quality of Experience (QoE)-Aware Fast Coding Unit Size Selection for HEVC Intra-prediction

    The exorbitant increase in the computational complexity of modern video coding standards, such as High Efficiency Video Coding (HEVC), is a compelling challenge for resource-constrained consumer electronic devices. For instance, the brute-force evaluation of all possible combinations of available coding modes and the quadtree-based coding structure in HEVC, required to determine the optimum set of coding parameters for a given content, demands a substantial amount of computational and energy resources. Thus, the resource requirements for real-time operation of HEVC have become a contributing factor in the Quality of Experience (QoE) of end users of emerging multimedia and future Internet applications. In this context, this paper proposes a content-adaptive Coding Unit (CU) size selection algorithm for HEVC intra-prediction. The proposed algorithm builds content-specific weighted Support Vector Machine (SVM) models in real time during the encoding process to provide an early estimate of CU size for a given content, avoiding the brute-force evaluation of all possible coding mode combinations in HEVC. The experimental results demonstrate an average encoding time reduction of 52.38%, with an average Bjøntegaard Delta Bit Rate (BDBR) increase of 1.19% compared to the HM16.1 reference encoder. Furthermore, perceptual visual quality assessments conducted with the Video Quality Metric (VQM) show that the proposed algorithm has minimal impact on the visual quality of the reconstructed videos compared to state-of-the-art approaches.
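
    The sketch below shows the general shape of an online weighted-SVM split/no-split classifier using scikit-learn; the texture features, the rate-distortion-based sample weights, and the training values are hypothetical placeholders, not the paper's feature set or model.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical training data gathered online during encoding: one row of simple
# texture features per CU (variance, gradient energy, QP); label 1 = split.
X_train = np.array([[820.0, 41.0, 32], [95.0, 12.0, 32],
                    [640.0, 35.0, 27], [60.0,  9.0, 27]])
y_train = np.array([1, 0, 1, 0])

# Weight samples by their rate-distortion impact, so CUs whose misclassification
# would cost more bits influence the decision boundary more strongly.
rd_impact = np.array([3.0, 1.0, 2.5, 1.0])

clf = SVC(kernel="rbf", C=10.0)
clf.fit(X_train, y_train, sample_weight=rd_impact)

# Early CU-size decision for a new block: skip evaluating smaller CUs when the
# model predicts "no split".
print(clf.predict([[700.0, 38.0, 32]]))
```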