
    Multi-loop quality scalability based on high efficiency video coding

    Scalable video coding performance largely depends on the underlying single-layer coding efficiency. In this paper, quality scalability capabilities are evaluated on the basis of the new High Efficiency Video Coding (HEVC) standard under development. To enable the evaluation, a multi-loop codec has been designed using HEVC. Adaptive inter-layer prediction is realized by including the lower layer in the reference list of the enhancement layer. As a result, adaptive scalability at frame level and at prediction-unit level is accomplished. Compared to single-layer coding, a 19.4% Bjontegaard Delta bitrate increase is measured over approximately a 30 dB to 40 dB PSNR range. When compared to simulcast, a 20.6% bitrate reduction can be achieved. Under equivalent conditions, the presented technique achieves a 43.8% bitrate reduction over the Coarse Grain Scalability of SVC, the scalable extension of H.264/AVC.
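    To illustrate the adaptive inter-layer prediction described above, the minimal C sketch below (not taken from the paper; Picture and RefList are hypothetical types) shows how a reconstructed base-layer picture could be appended to an enhancement-layer reference list, so that the encoder's usual per-prediction-unit reference selection doubles as a choice between temporal and inter-layer prediction.

```c
/* Minimal sketch (not the paper's code) of adaptive inter-layer prediction:
 * the reconstructed base-layer picture is appended to the enhancement-layer
 * reference list, so each prediction unit may pick either a temporal
 * reference or the base layer. Picture and RefList are hypothetical types. */
#include <stddef.h>

typedef struct { int poc; int is_base_layer; /* reconstructed samples, etc. */ } Picture;

typedef struct {
    Picture *ref[16];
    size_t   count;
} RefList;

/* Build list L0 for an enhancement-layer frame: temporal references first,
 * then the co-located base-layer reconstruction as an extra candidate. */
static void build_enh_ref_list(RefList *l0,
                               Picture *temporal[], size_t n_temporal,
                               Picture *base_recon)
{
    l0->count = 0;
    for (size_t i = 0; i < n_temporal && l0->count < 15; i++)
        l0->ref[l0->count++] = temporal[i];
    l0->ref[l0->count++] = base_recon;   /* inter-layer candidate */
}

/* Per prediction unit, the encoder then performs its usual rate-distortion
 * search over all reference indices; selecting the last index amounts to
 * inter-layer prediction, any other index to ordinary temporal prediction. */
```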

    No-reference bitstream-based impairment detection for high efficiency video coding

    Video distribution over error-prone Internet Protocol (IP) networks results in visual impairments in the received video streams. Objective impairment detection algorithms are crucial for maintaining the high Quality of Experience (QoE) provided with IPTV distribution. Considerable research has been invested in H.264/AVC impairment detection models, and the question arises whether these become obsolete with the transition to the successor of H.264/AVC, called High Efficiency Video Coding (HEVC). In this paper, we first show that impairments in HEVC-compressed sequences are more visible than in H.264/AVC-encoded sequences. We also show that an impairment detection model designed for H.264/AVC can be reused on HEVC, but that caution is advised. A more accurate model taking content classification into account needed only slight modification to remain applicable to HEVC-compressed video content.

    End-to-end security for video distribution


    3D video compression based on high efficiency video coding

    With the advent of autostereoscopic displays, the question arises of how to efficiently compress the video information needed by such displays. Additionally, for gradual market acceptance of this new technology it is valuable to have a solution offering forward compatibility with stereo 3D video as it is used today. In this paper, a multiview compression scheme is provided that makes use of the efficient single-view coding tools of High Efficiency Video Coding (HEVC). Although efficient single-view compression can be obtained with HEVC, a multiview adaptation of this standard under development is proposed, offering additional coding gains. On average, for the texture information, the total bitrate can be reduced by 37.2% compared to simulcast HEVC. For depth map compression, gains largely depend on the quality of the captured content. Additionally, a forward-compatible solution is proposed, offering the possibility of a gradual upgrade from H.264/AVC-based stereoscopic 3D systems to an HEVC-based autostereoscopic environment. With the proposed system, significant rate savings compared to Multiview Video Coding (MVC) are presented; a sketch of the forward-compatibility idea follows below.
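    The following hedged C sketch illustrates the forward-compatibility idea in general terms (the NalUnit wrapper and the layer_id convention are illustrative assumptions, not the actual syntax of the standard): the stereo base views and the additional views travel in one stream, and a legacy stereo 3D decoder simply discards the units carrying the extra views.

```c
/* Hedged sketch of stream-level forward compatibility: keep only the
 * NAL units a legacy stereo decoder understands. layer_id <= 1 standing
 * for the stereo pair is an illustrative convention for this example. */
#include <stddef.h>

typedef struct { int layer_id; const unsigned char *payload; size_t size; } NalUnit;

/* Copy the stereo-compatible subset of 'in' into 'out'; additional views
 * (layer_id > 1) are dropped so an unmodified stereo decoder can proceed. */
static size_t filter_for_stereo(const NalUnit *in, size_t n, NalUnit *out)
{
    size_t kept = 0;
    for (size_t i = 0; i < n; i++)
        if (in[i].layer_id <= 1)      /* views 0 and 1: stereo pair */
            out[kept++] = in[i];
    return kept;
}
```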

    High-Level Synthesis Based VLSI Architectures for Video Coding

    High Efficiency Video Coding (HEVC) is state-of-the-art video coding standard. Emerging applications like free-viewpoint video, 360degree video, augmented reality, 3D movies etc. require standardized extensions of HEVC. The standardized extensions of HEVC include HEVC Scalable Video Coding (SHVC), HEVC Multiview Video Coding (MV-HEVC), MV-HEVC+ Depth (3D-HEVC) and HEVC Screen Content Coding. 3D-HEVC is used for applications like view synthesis generation, free-viewpoint video. Coding and transmission of depth maps in 3D-HEVC is used for the virtual view synthesis by the algorithms like Depth Image Based Rendering (DIBR). As first step, we performed the profiling of the 3D-HEVC standard. Computational intensive parts of the standard are identified for the efficient hardware implementation. One of the computational intensive part of the 3D-HEVC, HEVC and H.264/AVC is the Interpolation Filtering used for Fractional Motion Estimation (FME). The hardware implementation of the interpolation filtering is carried out using High-Level Synthesis (HLS) tools. Xilinx Vivado Design Suite is used for the HLS implementation of the interpolation filters of HEVC and H.264/AVC. The complexity of the digital systems is greatly increased. High-Level Synthesis is the methodology which offers great benefits such as late architectural or functional changes without time consuming in rewriting of RTL-code, algorithms can be tested and evaluated early in the design cycle and development of accurate models against which the final hardware can be verified
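    To give a concrete flavour of the kind of kernel targeted by HLS in this work, the sketch below implements HEVC's 8-tap luma half-sample interpolation filter in plain C. The filter coefficients come from the standard, while the 8-bit clipping and the array layout are simplifying assumptions for the example; a Vivado HLS flow would typically add pipelining and array-partitioning pragmas on top of such a loop nest.

```c
/* Illustrative sketch of HEVC's 8-tap luma half-sample interpolation filter
 * (coefficients -1, 4, -11, 40, 40, -11, 4, -1, normalised by 64), applied
 * horizontally. Direct 8-bit clipping is a simplification for this example;
 * the standard keeps higher intermediate precision between filter stages. */
#include <stdint.h>

static const int LUMA_HALF_TAPS[8] = { -1, 4, -11, 40, 40, -11, 4, -1 };

static inline uint8_t clip8(int v) { return v < 0 ? 0 : (v > 255 ? 255 : (uint8_t)v); }

/* src points at the first full sample needed (3 samples left of the first
 * output position); produces 'width' half-pel samples into dst. */
static void interp_halfpel_h(const uint8_t *src, uint8_t *dst, int width)
{
    for (int x = 0; x < width; x++) {
        int acc = 0;
        for (int k = 0; k < 8; k++)          /* 8-tap FIR */
            acc += LUMA_HALF_TAPS[k] * src[x + k];
        dst[x] = clip8((acc + 32) >> 6);     /* round and normalise by 64 */
    }
}
```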

    A 249-Mpixel/s HEVC Video-Decoder Chip for 4K Ultra-HD Applications

    High Efficiency Video Coding, the latest video standard, uses larger and variable-sized coding units and longer interpolation filters than H.264/AVC to better exploit redundancy in video signals. These algorithmic techniques enable a 50% decrease in bitrate at the cost of computational complexity, external memory bandwidth, and, for ASIC implementations, on-chip SRAM of the video codec. This paper describes architectural optimizations for an HEVC video decoder chip. The chip uses a two-stage subpipelining scheme to reduce on-chip SRAM by 56 kB, a 32% reduction. A high-throughput read-only cache combined with DRAM-latency-aware memory mapping reduces DRAM bandwidth by 67%. The chip is built for the HEVC Working Draft 4 Low Complexity configuration and occupies 1.77 mm² in 40-nm CMOS. It performs 4K Ultra HD 30-fps video decoding at 200 MHz while consuming 1.19 nJ/pixel of normalized system power.
    Texas Instruments Incorporated
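    The general idea behind DRAM-latency-aware mapping can be sketched as follows; this is an assumed, simplified scheme for illustration, not the chip's actual mapping. The frame is stored as small tiles, and consecutive tiles are interleaved across DRAM banks, so a motion-compensation reference block that spans tile boundaries reads from several banks and can hide row-activation latency.

```c
/* Hedged sketch of tile-based, bank-interleaved frame storage. Tile size
 * and bank count are illustrative assumptions; a real controller would
 * also pack several tiles per DRAM row. */
#include <stdint.h>

enum { TILE_W = 64, TILE_H = 8, NUM_BANKS = 8 };

typedef struct { uint32_t bank, row, col; } DramLoc;

/* Map luma sample (x, y) to a bank/row/column triple. Consecutive tiles in
 * raster order land in different banks, so a reference block crossing
 * horizontal tile boundaries activates rows in distinct banks. */
static DramLoc map_tile(int x, int y, int tiles_per_row)
{
    int tx = x / TILE_W, ty = y / TILE_H;                 /* tile coords   */
    uint32_t t = (uint32_t)(ty * tiles_per_row + tx);     /* raster index  */
    DramLoc loc;
    loc.bank = t % NUM_BANKS;                             /* interleave    */
    loc.row  = t / NUM_BANKS;                             /* slot in bank  */
    loc.col  = (uint32_t)((y % TILE_H) * TILE_W + (x % TILE_W));
    return loc;
}
```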