262 research outputs found

    Systematic analysis of the decoding delay in multiview video

    We present a framework for the analysis of the decoding delay in multiview video coding (MVC). We show that in real-time applications, an accurate estimation of the decoding delay is essential to achieve minimum communication latency. As opposed to single-view codecs, the complexity of the multiview prediction structure and the parallel decoding of several views require a systematic analysis of this decoding delay, which we solve using graph theory and a model of the decoder hardware architecture. Our framework assumes a decoder implementation on general-purpose multi-core processors with multi-threading capabilities. For this hardware model, we show that frame processing times depend on the computational load of the decoder, and we provide an iterative algorithm to compute frame processing times and decoding delay jointly. Finally, we show that decoding delay analysis can be applied to design decoders with the objective of minimizing the communication latency of the MVC system.
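The graph-theoretic core of such an analysis can be sketched as a longest-path computation over the frame-dependency DAG: a frame can only finish decoding after all of its reference frames have finished. The frame names, dependency sets, and processing times below are illustrative assumptions, not values from the paper, and the sketch ignores the core-scheduling model that the full framework accounts for.

```python
# Minimal sketch: decoding delay as a longest-path computation over the
# MVC frame-dependency DAG. Frame labels, dependencies, and processing
# times are hypothetical; real MVC structures are far larger.
from graphlib import TopologicalSorter

def decoding_delays(deps, proc_time):
    """deps: frame -> set of frames it references.
    proc_time: frame -> processing time on the modelled decoder.
    Returns frame -> earliest completion time (its decoding delay)."""
    done = {}
    for f in TopologicalSorter(deps).static_order():
        # A frame may start only once its last reference has completed.
        start = max((done[d] for d in deps.get(f, ())), default=0.0)
        done[f] = start + proc_time[f]
    return done

# Two views: both P-frames predict from I0 (inter-view prediction),
# and B1 references both P-frames.
deps = {"I0": set(), "P0": {"I0"}, "P1": {"I0"}, "B1": {"P0", "P1"}}
proc = {"I0": 4.0, "P0": 2.0, "P1": 2.0, "B1": 1.0}
print(decoding_delays(deps, proc))  # B1 completes at 7.0
```

On a multi-core model with per-frame processing times that depend on load, the paper's iterative algorithm would alternate between this delay computation and re-estimating `proc_time`; the sketch shows only one such pass.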

    Systematic analysis of the decoding delay on MVC decoders

    We present a framework for the analysis of the decoding delay and communication latency in Multiview Video Coding (MVC). Applying this framework to MVC decoders allows the overall delay in immersive videoconference systems to be minimized.

    Inter-layer turbo coded unequal error protection for multi-layer video transmission

    In layered video streaming, the enhancement layers (ELs) must be discarded by the video decoder when the base layer (BL) is corrupted or lost due to channel impairments. This implies that the transmit power assigned to the ELs is wasted whenever the BL is corrupted. To combat this effect, in this treatise we investigate the inter-layer turbo (IL-turbo) code, in which the systematic bits of the BL are implanted into the systematic bits of the ELs at the transmitter. At the receiver, when the BL cannot be successfully decoded, the information in the ELs may be exploited by the IL-turbo decoder to assist in decoding the BL. Moreover, to provide further insight into the IL technique, the benefits of the IL-turbo scheme are analysed using extrinsic information transfer (EXIT) charts in the scenario of unequal error protection (UEP) coded layered video transmission. Finally, our data-partitioning-based experiments show that the proposed scheme outperforms the traditional turbo-code-based UEP scheme by about 1.1 dB of Eb/N0 at a peak signal-to-noise ratio (PSNR) of 36 dB, or by 3 dB of PSNR at an Eb/N0 of -5.5 dB, at the cost of a 13% complexity increase.

    Research and developments of distributed video coding

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The recently developed Distributed Video Coding (DVC) paradigm is typically suited to applications such as wireless/wired video sensor networks and mobile cameras, where traditional video coding standards are not feasible due to the constrained computation available at the encoder. With DVC, the computational burden is moved from the encoder to the decoder, and compression efficiency is achieved via joint decoding at the decoder. The practical realisation of DVC is referred to as Wyner-Ziv (WZ) video coding, where side information available at the decoder is used to perform joint decoding. This joint decoding inevitably results in a very complex decoder. Much current work on WZ video coding emphasises improving coding performance but neglects the huge complexity incurred at the decoder, even though decoder complexity directly influences the system output. The first stage of this research targets optimising the decoder in pixel-domain WZ video coding (PDWZ) while still achieving similar compression performance. More specifically, four issues are addressed: the input block size, the side information generation, the side information refinement process, and the feedback channel. Transform-domain WZ video coding (TDWZ) clearly outperforms PDWZ because spatial redundancy is exploited during encoding. However, since there is no motion estimation at the encoder in WZ video coding, the temporal correlation is not exploited at the encoder in any current WZ scheme. In the middle stage of this research, the 3D DCT is adopted in TDWZ to remove redundancy in both the spatial and temporal directions and thus provide even higher coding performance. The next stage of this research investigates the performance of transform-domain Distributed Multiview Video Coding (DMVC). In particular, three types of transform-domain DMVC frameworks are investigated: transform-domain DMVC using TDWZ based on the 2D DCT, transform-domain DMVC using TDWZ based on the 3D DCT, and transform-domain residual DMVC using TDWZ based on the 3D DCT. One important application of the WZ coding principle is error resilience, and there have been several attempts to apply WZ error-resilient coding to current video coding standards such as H.264/AVC and MPEG-2. The final stage of this research is the design of a WZ error-resilient scheme for a wavelet-based video codec. To balance the trade-off between error-resilience ability and bandwidth consumption, the proposed scheme emphasises protecting the Region of Interest (ROI). Efficient bandwidth utilisation is achieved through the combined effort of WZ coding and sacrificing the quality of unimportant areas. In summary, this research contributes several advances in WZ video coding. First, it builds an efficient PDWZ codec with an optimised decoder. Secondly, it builds an advanced TDWZ codec based on the 3D DCT, which is then applied to multiview video coding to realise advanced transform-domain DMVC. Finally, it designs an efficient error-resilient scheme for a wavelet video codec, with which the trade-off between bandwidth consumption and error resilience can be better balanced.

    Distributed Video Coding: Selecting the Most Promising Application Scenarios

    Distributed Video Coding (DVC) is a new video coding paradigm based on two major Information Theory results: the Slepian–Wolf and Wyner–Ziv theorems. Recently, practical DVC solutions have been proposed with promising results; however, there is still a need to study more systematically the set of application scenarios for which DVC may bring major advantages. This paper contributes to the identification of the most DVC-friendly application scenarios, highlighting the expected benefits and drawbacks of each studied scenario. The selection is based on a proposed methodology that involves characterizing and clustering the applications according to their most relevant characteristics, and matching them with the main potential DVC benefits.
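The matching idea can be sketched as scoring each application scenario by how many of DVC's potential benefits its characteristics line up with. The scenario names, characteristic labels, and weights below are illustrative assumptions, not the paper's actual methodology or data.

```python
# Hypothetical sketch of scenario-to-benefit matching. Weights reflect
# an assumed relative importance of each DVC benefit; they are not
# taken from the paper.
DVC_BENEFIT_WEIGHTS = {
    "low_complexity_encoder": 3,   # main DVC selling point
    "error_resilience": 2,
    "codec_independent_scalability": 2,
}

scenarios = {
    "wireless video sensor network": {"low_complexity_encoder", "error_resilience"},
    "studio archival": {"codec_independent_scalability"},
}

def dvc_score(characteristics):
    """Sum the weights of the DVC benefits a scenario can exploit."""
    return sum(DVC_BENEFIT_WEIGHTS.get(c, 0) for c in characteristics)

ranked = sorted(scenarios, key=lambda s: dvc_score(scenarios[s]), reverse=True)
print(ranked[0])  # wireless video sensor network
```

A clustering of many such scenario vectors, as the paper proposes, would group applications with similar characteristic sets before the matching step.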

    Distributed Video Coding for Multiview and Video-plus-depth Coding


    FVV Live: A real-time free-viewpoint video system with consumer electronics hardware

    FVV Live is a novel end-to-end free-viewpoint video system, designed for low-cost, real-time operation and based on off-the-shelf components. The system has been designed to yield high-quality free-viewpoint video using consumer-grade cameras and hardware, which enables low deployment costs and easy installation for immersive event broadcasting or videoconferencing. The paper describes the architecture of the system, including the acquisition and encoding of multiview-plus-depth data in several capture servers and virtual view synthesis on an edge server. All the blocks of the system have been designed to overcome the limitations imposed by the hardware and network, which directly affect the accuracy of the depth data and thus the quality of virtual view synthesis. The design of FVV Live allows for an arbitrary number of cameras and capture servers, and the results presented in this paper correspond to an implementation with nine stereo-based depth cameras. FVV Live achieves low motion-to-photon and end-to-end delays, which enable seamless free-viewpoint navigation and bilateral immersive communication. Moreover, the visual quality of FVV Live has been evaluated through subjective assessment with satisfactory results, and additional comparative tests show that it is preferred over state-of-the-art DIBR alternatives.
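The core operation behind DIBR-style virtual view synthesis can be sketched as a depth-dependent pixel shift: each pixel is moved horizontally by a disparity inversely proportional to its depth. A real system such as FVV Live performs full 3D warping with occlusion handling and hole filling; the toy version below, with an assumed baseline-times-focal-length constant, only illustrates the depth-dependent shift itself.

```python
# Toy sketch of the disparity shift at the heart of depth-image-based
# rendering (DIBR). Occlusion handling, hole filling, and true 3D
# warping are omitted; baseline_focal is a hypothetical constant.
import numpy as np

def warp_view(image, depth, baseline_focal=64.0):
    """Shift each pixel right by disparity = baseline_focal / depth.
    Near pixels (small depth) move further than far ones."""
    h, w = image.shape
    out = np.zeros_like(image)  # unfilled pixels stay 0 (holes)
    for y in range(h):
        for x in range(w):
            d = int(round(baseline_focal / depth[y, x]))
            xn = x + d
            if 0 <= xn < w:
                out[y, xn] = image[y, x]
    return out

# A 1x4 scanline at constant depth 64 -> disparity 1 pixel everywhere.
img = np.array([[1, 2, 3, 4]], dtype=np.uint8)
dep = np.full((1, 4), 64.0)
print(warp_view(img, dep))  # [[0 1 2 3]]: shifted right, hole at x=0
```

The holes left by disoccluded regions (the zeros here) are what real view-synthesis pipelines spend most of their effort filling plausibly.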