202 research outputs found

    Distributed video coding for wireless video sensor networks: a review of the state-of-the-art architectures

    Distributed video coding (DVC) is a relatively new video coding architecture that originates from two fundamental theorems, namely the Slepian–Wolf and Wyner–Ziv theorems. Recent research developments have made DVC attractive for applications in the emerging domain of wireless video sensor networks (WVSNs). This paper reviews the state-of-the-art DVC architectures with a focus on understanding their opportunities and gaps in addressing the operational requirements and application needs of WVSNs.
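
    For reference, the two theorems cited above can be stated compactly; the following is a standard textbook formulation in notation chosen here, not reproduced from the review itself.

```latex
% Slepian--Wolf: achievable rate region for separate encoding and joint
% decoding of two correlated sources X and Y (lossless case):
\begin{align}
  R_X &\ge H(X \mid Y), \\
  R_Y &\ge H(Y \mid X), \\
  R_X + R_Y &\ge H(X, Y).
\end{align}

% Wyner--Ziv: lossy coding of X with side information Y available only at
% the decoder. The minimisation runs over test channels p(u|x) (so that
% U -- X -- Y forms a Markov chain) and reconstruction functions
% \hat{x}(u, y) meeting the distortion target D:
\begin{equation}
  R^{\mathrm{WZ}}_{X \mid Y}(D) =
  \min_{\substack{p(u \mid x),\ \hat{x}(u, y):\\ \mathbb{E}[d(X, \hat{X})] \le D}}
  \bigl( I(X; U) - I(Y; U) \bigr).
\end{equation}
```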

    Decoder-driven mode decision in a block-based distributed video codec

    Distributed Video Coding (DVC) is a video coding paradigm in which the computational complexity is shifted from the encoder to the decoder. DVC is based on information-theoretic results suggesting that, under ideal conditions, the same rate-distortion performance can be achieved as with traditional video codecs. In practice, however, there is still a significant performance gap between the two coding architectures. One of the main reasons for this gap is the lack of multiple coding modes in current DVC solutions. In this paper, we propose a block-based distributed video codec that supports three coding modes: Wyner-Ziv, skip, and intra. The mode decision process is entirely decoder-driven. Skip blocks are selected based on the estimated accuracy of the side information. The choice between intra and Wyner-Ziv coding modes is made on a rate-distortion basis, by selecting the coding mode with the lowest rate while assuring equal distortion for both modes. Experimental results illustrate that the proposed block-based architecture has advantages over classical bitplane-based approaches. Introducing skip and intra coded blocks yields average bitrate gains of up to 33.7% over our basic configuration supporting Wyner-Ziv mode only, and up to 29.7% over the reference bitplane-based DISCOVER codec.
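
    A minimal sketch of what such a decoder-driven, per-block mode decision could look like is given below; the function names, inputs, and the skip threshold are illustrative assumptions, not the codec's actual algorithm or parameters.

```python
# Sketch of a decoder-driven mode decision of the kind described above.
# All names and thresholds are illustrative assumptions.

def choose_block_mode(side_info_error_estimate,
                      wz_rate_at_target_distortion,
                      intra_rate_at_target_distortion,
                      skip_threshold=1.0):
    """Return 'skip', 'intra', or 'wyner-ziv' for one block.

    side_info_error_estimate: decoder's estimate of how far the side
        information deviates from the original block (e.g. an expected
        MSE derived from a correlation-noise model).
    wz_rate_at_target_distortion / intra_rate_at_target_distortion:
        estimated bits each mode needs to reach the same target
        distortion, so the comparison reduces to rate only.
    """
    # Skip: the side information is already accurate enough, send nothing.
    if side_info_error_estimate <= skip_threshold:
        return "skip"

    # Otherwise pick the cheaper of intra and Wyner-Ziv at equal
    # distortion, i.e. a rate comparison at fixed D.
    if intra_rate_at_target_distortion < wz_rate_at_target_distortion:
        return "intra"
    return "wyner-ziv"


# Example: noisy side information, Wyner-Ziv cheaper than intra.
print(choose_block_mode(4.2,
                        wz_rate_at_target_distortion=96,
                        intra_rate_at_target_distortion=130))  # wyner-ziv
```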

    Flexible distribution of complexity by hybrid predictive-distributed video coding

    There is currently limited flexibility for distributing complexity in a video coding system. While rate-distortion-complexity (RDC) optimization techniques have been proposed for conventional predictive video coding with encoder-side motion estimation, they fail to offer truly flexible distribution of complexity between encoder and decoder, since the encoder is assumed to always have more computational resources available than the decoder. On the other hand, distributed video coding solutions with decoder-side motion estimation have been proposed, but hardly any RDC-optimized systems have been developed. To offer more flexibility for video applications involving multi-tasking or battery-constrained devices, in this paper we propose a codec combining predictive video coding concepts with techniques from distributed video coding and show the flexibility of this method in distributing complexity. We propose several modes to code frames, and provide a complexity analysis illustrating encoder and decoder computational complexity for each mode. Rate-distortion results for each mode indicate that the coding efficiency is similar. We describe a method to choose which mode to use for coding each inter frame, taking into account encoder and decoder complexity constraints, and illustrate how complexity is distributed more flexibly.
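
    The following toy sketch illustrates one way a per-frame mode choice under separate encoder and decoder complexity budgets could be organised; the mode set, cost figures, and greedy rule are assumptions made here for illustration and are not taken from the paper.

```python
# Illustrative per-inter-frame mode assignment under encoder- and
# decoder-side complexity budgets. Mode names, cost numbers and the
# greedy budget rule are assumptions, not the paper's actual method.

# Per-frame complexity cost of each mode (arbitrary units): a predictive
# mode loads the encoder (motion estimation at the encoder), while a
# distributed mode loads the decoder (motion estimation at the decoder).
MODE_COST = {
    "predictive":  {"encoder": 10.0, "decoder": 1.0},
    "distributed": {"encoder": 1.0,  "decoder": 10.0},
}


def assign_modes(num_inter_frames, encoder_budget, decoder_budget):
    """Greedily assign a mode to each inter frame so that cumulative
    encoder and decoder complexity stay within their budgets."""
    used = {"encoder": 0.0, "decoder": 0.0}
    modes = []
    for _ in range(num_inter_frames):
        # Prefer whichever side currently has the most headroom left.
        enc_headroom = encoder_budget - used["encoder"]
        dec_headroom = decoder_budget - used["decoder"]
        mode = "predictive" if enc_headroom >= dec_headroom else "distributed"
        for side in ("encoder", "decoder"):
            used[side] += MODE_COST[mode][side]
        modes.append(mode)
    return modes, used


# Example: a decoder-constrained device ends up with mostly predictive frames.
modes, used = assign_modes(8, encoder_budget=80.0, decoder_budget=20.0)
print(modes)
print(used)
```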

    Distributed Video Coding: Iterative Improvements

    • …