17 research outputs found

    Parallel video decoding in the emerging HEVC standard

    In this paper we propose and evaluate a parallelization strategy for the emerging HEVC video coding standard. The proposed strategy is based on entropy slices, which allow exploiting parallelism in the entropy decoding stage while maintaining high coding efficiency. Our approach requires encoding videos with one entropy slice per LCU row so that multiple LCU rows can be decoded in a wavefront-parallel manner. Evaluations performed on a PC with 12 Intel Xeon cores running at 3.3 GHz show that it is possible to achieve real-time performance for 1920×1080p50 (53.1 fps) and 2560×1600 (29.5 fps) video resolutions, with speedups of 5.2× and 6.3× compared to sequential execution, respectively.
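    The wavefront dependency described above can be sketched as follows: with one entropy slice per LCU row, an LCU can be decoded as soon as its left neighbour and the LCU above and to its right are finished. The Python sketch below illustrates only that scheduling; the decode_lcu placeholder and the grid size are illustrative and not taken from the paper's decoder.

```python
# Minimal sketch of wavefront-parallel LCU-row decoding (hypothetical
# decode_lcu(); not the paper's decoder). Block (r, c) waits for the
# top-right neighbour (r-1, c+1); the left neighbour (r, c-1) is implicit
# because each row thread processes its LCUs in order.
import threading

ROWS, COLS = 4, 8                      # LCU grid size (illustrative)
done = [[threading.Event() for _ in range(COLS)] for _ in range(ROWS)]

def decode_lcu(r, c):
    pass                               # placeholder for real LCU decoding

def decode_row(r):
    for c in range(COLS):
        if r > 0 and c + 1 < COLS:
            done[r - 1][c + 1].wait()  # top-right LCU must be finished
        decode_lcu(r, c)
        done[r][c].set()

threads = [threading.Thread(target=decode_row, args=(r,)) for r in range(ROWS)]
for t in threads: t.start()
for t in threads: t.join()
```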

    Circuit implementations for high-efficiency video coding tools

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 71-72). High-Efficiency Video Coding (HEVC) is planned to be the successor video standard to the popular Advanced Video Coding (H.264/AVC), with a targeted 2x improvement in compression at the same quality. This improvement comes at the cost of increased complexity through the addition of new coding tools and increased computation in existing tools. The ever-increasing demand for higher-resolution video further adds to the computation cost. In this work, digital circuits for two HEVC tools, the inverse transform and the deblocking filter, are implemented to support Quad-Full HD (4K x 2K) video decoding at 30 fps. Techniques to reduce power and area cost are investigated, and synthesis results in 40nm CMOS technology and on a Virtex-6 FPGA platform are presented. By Mehul Tikekar. S.M.
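    As a rough illustration of the throughput such circuits must sustain, the arithmetic below converts the Quad-Full HD target into pixels per clock cycle; the 3840x2160 frame size and the 200 MHz clock are assumptions for the example, not figures from the thesis.

```python
# Back-of-the-envelope throughput check for Quad-Full HD decoding at 30 fps
# (illustrative numbers; the 200 MHz core clock is an assumption).
width, height, fps = 3840, 2160, 30
pixels_per_second = width * height * fps           # ~248.8 Mpixel/s
clock_hz = 200e6                                    # assumed core clock
pixels_per_cycle = pixels_per_second / clock_hz
print(f"{pixels_per_second / 1e6:.1f} Mpixel/s -> "
      f"{pixels_per_cycle:.2f} pixel/cycle at 200 MHz")
```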

    Challenges and solutions in H.265/HEVC for integrating consumer electronics in professional video systems


    An Analysis of VP8, a new video codec for the web

    Video is an increasingly ubiquitous part of our lives. Fast and efficient video codecs are necessary to satisfy the increasing demand for video on the web and mobile devices. However, open standards and patent grants are paramount to the adoption of video codecs across different platforms and browsers. Google's On2 released VP8 in May 2010 to compete with H.264, the current standard among video codecs, complete with source code, a specification and a perpetual patent grant. As the amount of video being created every day is growing rapidly, the decision of which codec to encode this video with is paramount; if a low-quality codec or a restrictively licensed codec is used, the video recorded might be of little to no use. We sought to study VP8 and its quality versus its resource consumption compared to H.264 -- the most popular current video codec -- so that readers may make an informed decision, for themselves or for their organizations, about whether to use H.264, VP8, or something else entirely. We examined VP8 in detail, compared its theoretical complexity to H.264 and measured the efficiency of its current implementation. VP8 shares many facets of its design with H.264 and other Discrete Cosine Transform (DCT) based video codecs. However, VP8 is both simpler and less feature-rich than H.264, which may allow for rapid hardware and software implementations. As it was designed for the Internet and newer mobile devices, it contains fewer legacy features, such as interlacing, than H.264 supports. To perform quality measurements, the open source VP8 implementation libvpx was used; this is the reference implementation. For H.264, the open source encoder x264 was used. This encoder has very high performance, and is often rated at the top of its field in efficiency. The JM reference encoder was used to establish a baseline quality for H.264. Our findings indicate that VP8 performs very well at low bitrates, at resolutions at and below CIF. VP8 may be able to successfully displace H.264 Baseline in the mobile streaming video domain. It offers higher quality at a lower bitrate for low-resolution images due to its high-performing entropy coder and non-contiguous macroblock segmentation. At higher resolutions, VP8 still outperforms H.264 Baseline, but H.264 High profile leads. At HD resolution (720p and above), H.264 is significantly better than VP8 due to its superior motion estimation and adaptive coding. There is little significant difference between the intra-coding performance of H.264 and VP8. VP8's in-loop deblocking filter outperforms H.264's version. H.264's inter-coding, with full support for B frames and weighting, outperforms VP8's alternate reference scheme, although this may improve in the future. On average, VP8's feature set is less complex than H.264's equivalents, which, along with its open source implementation, may spur development in the future. These findings indicate that VP8 has strong fundamentals when compared with H.264, but that it lacks optimization and maturity. It will likely improve as engineers optimize VP8's reference implementation, or when a competing implementation is developed. We recommend several areas that the VP8 developers should focus on in the future.
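    Codec comparisons of this kind typically report an objective quality measure such as PSNR between the original and decoded frames. The snippet below is a minimal, illustrative PSNR helper of that sort, not the study's actual measurement pipeline; the example frames are synthetic.

```python
# Minimal PSNR helper of the kind used to compare decoded frames against the
# original source when benchmarking encoders such as libvpx and x264.
import numpy as np

def psnr(reference: np.ndarray, decoded: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-sized frames."""
    mse = np.mean((reference.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")            # identical frames
    return 10.0 * np.log10(max_value ** 2 / mse)

# Example: an 8-bit luma frame at CIF resolution (352x288) plus mild noise
ref = np.random.randint(0, 256, (288, 352), dtype=np.uint8)
dec = np.clip(ref.astype(np.int16) + np.random.randint(-3, 4, ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ref, dec):.2f} dB")
```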

    Improved intra-prediction for video coding

    This thesis focuses on improving the HEVC (High Efficiency Video Coding) standard. HEVC is the newest video coding standard developed by the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG), as a successor to the popular state-of-the-art H.264/MPEG-4 AVC (Advanced Video Coding) standard. HEVC makes use of prediction to exploit redundancies in the signal and therefore achieve high compression efficiency. In particular, Intra-Picture prediction consists of predicting a block in the current frame using reference information from neighbouring blocks in the same frame. It supports three kinds of modes: the angular modes with 33 different directions, the planar mode and the DC mode. HEVC is reportedly able to achieve on average more than 50% higher efficiency than H.264/MPEG-4 AVC, but this comes at the cost of very high computational complexity. The contributions of this thesis mainly consist of improvements to the Intra-Picture prediction block, with the goal of drastically reducing computational complexity while achieving compression efficiency comparable to conventional HEVC. On average, 16.5% of encoding operations can be saved using the proposed approach, at the cost of relatively small compression efficiency losses.
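    As a concrete illustration of the Intra-Picture prediction discussed above, the sketch below implements only the DC mode for an NxN block, which fills the block with the rounded average of the reconstructed samples above and to the left; the 33 angular modes, the planar mode, reference-sample filtering and the thesis's complexity reductions are all omitted.

```python
# Sketch of HEVC-style DC intra prediction for an NxN block (DC mode only;
# boundary smoothing and the other prediction modes are left out).
import numpy as np

def intra_dc_predict(top: np.ndarray, left: np.ndarray) -> np.ndarray:
    """top, left: N reference samples each; returns the NxN DC prediction."""
    n = top.size
    dc = (int(top.sum()) + int(left.sum()) + n) // (2 * n)   # rounded average
    return np.full((n, n), dc, dtype=top.dtype)

top = np.array([100, 102, 104, 106], dtype=np.uint8)
left = np.array([98, 99, 101, 103], dtype=np.uint8)
print(intra_dc_predict(top, left))
```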

    A reduced reference video quality assessment method for provision as a service over SDN/NFV-enabled networks

    139 p. The proliferation of multimedia applications and services has generated a noteworthy upsurge in network traffic related to video content and has created the need for trustworthy service quality assessment methods. Currently, the predominant technological trends in telecommunication networks are Network Function Virtualization (NFV), Software Defined Networking (SDN) and 5G mobile networks equipped with small cells. Additionally, Video Quality Assessment (VQA) methods are a very useful tool for both content providers and network operators to understand how users perceive quality, and thereby to study the feasibility of potential services and adapt the available network resources to satisfy user requirements.

    Algorithms for compression of high dynamic range images and video

    The recent advances in sensor and display technologies have brought about High Dynamic Range (HDR) imaging capability. Modern multiple-exposure HDR sensors can achieve a dynamic range of 100-120 dB, and LED and OLED display devices have contrast ratios of 10^5:1 to 10^6:1. Despite the above advances in technology, image/video compression algorithms and the associated hardware are still based on Standard Dynamic Range (SDR) technology, i.e. they operate within an effective dynamic range of up to 70 dB for 8-bit gamma-corrected images. Further, the existing infrastructure for content distribution is also designed for SDR, which creates interoperability problems with true HDR capture and display equipment. The current solutions to this problem include tone mapping the HDR content to fit SDR. However, this approach leads to image quality problems when strong dynamic range compression is applied. Even though some HDR-only solutions have been proposed in the literature, they are not interoperable with the current SDR infrastructure and are thus typically used in closed systems. Given the above observations, a research gap was identified in the need for efficient algorithms for the compression of still images and video that are capable of storing the full dynamic range and colour gamut of HDR images and are at the same time backward compatible with the existing SDR infrastructure. To improve the usability of the SDR content, it is vital that any such algorithms accommodate different tone mapping operators, including those that are spatially non-uniform. In the course of the research presented in this thesis, a novel two-layer CODEC architecture is introduced for both HDR image and video coding. Further, a universal and computationally efficient approximation of the tone mapping operator is developed and presented. It is shown that the use of perceptually uniform colourspaces for the internal representation of pixel data enables improved compression efficiency. Furthermore, the proposed novel approaches to the compression of metadata for the tone mapping operator are shown to improve compression performance for low-bitrate video content. Multiple compression algorithms are designed, implemented and compared, and quality-complexity trade-offs are identified. Finally, practical aspects of implementing the developed algorithms are explored by automating the design space exploration flow and integrating the high-level systems design framework with domain-specific tools for synthesis and simulation of multiprocessor systems. Directions for further work are also presented.
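    A minimal sketch of the backward-compatible two-layer idea follows: a tone-mapped 8-bit base layer that legacy SDR decoders can use, plus a residual from which the HDR signal can be reconstructed. The global Reinhard-style operator used here is only an illustrative stand-in for the computationally efficient tone-mapping approximation developed in the thesis.

```python
# Two-layer backward-compatible HDR coding sketch: SDR base layer + residual.
# The tone mapping operator below is an illustrative global Reinhard-style
# curve, not the thesis's approximation scheme.
import numpy as np

def tone_map(hdr_luma: np.ndarray) -> np.ndarray:
    """Global operator L/(1+L), scaled to an 8-bit SDR base layer."""
    sdr = hdr_luma / (1.0 + hdr_luma)
    return np.round(sdr * 255.0).astype(np.uint8)

def inverse_tone_map(sdr: np.ndarray) -> np.ndarray:
    """Approximate inverse used when forming the enhancement-layer residual."""
    s = sdr.astype(np.float64) / 255.0
    s = np.clip(s, 0.0, 0.999)               # avoid division by zero at s = 1
    return s / (1.0 - s)

hdr = np.random.uniform(0.0, 1000.0, (4, 4))  # linear HDR luminance samples
base = tone_map(hdr)                          # SDR base layer (legacy decoders)
residual = hdr - inverse_tone_map(base)       # enhancement-layer residual
reconstructed = inverse_tone_map(base) + residual
print(np.allclose(reconstructed, hdr))        # exact before residual coding
```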