
    Distributed video coding for wireless video sensor networks: a review of the state-of-the-art architectures

    Distributed video coding (DVC) is a relatively new video coding architecture that originates from two fundamental theorems, namely the Slepian–Wolf and Wyner–Ziv theorems. Recent research developments have made DVC attractive for applications in the emerging domain of wireless video sensor networks (WVSNs). This paper reviews the state-of-the-art DVC architectures, with a focus on understanding their opportunities and gaps in addressing the operational requirements and application needs of WVSNs.
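    For reference, the two theorems named in this abstract can be stated compactly. The formulations below are the standard textbook ones, not results specific to this review.

```latex
% Slepian–Wolf: admissible rates for separately encoding correlated
% sources X and Y with joint lossless decoding
\begin{aligned}
R_X &\ge H(X \mid Y), \\
R_Y &\ge H(Y \mid X), \\
R_X + R_Y &\ge H(X, Y).
\end{aligned}

% Wyner–Ziv: rate-distortion function for lossy coding of X when the
% side information Y is available only at the decoder
R^{\mathrm{WZ}}_{X \mid Y}(D) \;=\;
  \min_{\substack{p(u \mid x),\, f \,:\\ \mathbb{E}\, d\!\left(X,\, f(U, Y)\right) \le D}}
  I(X; U \mid Y)
```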

    Compensating for motion estimation inaccuracies in DVC

    Distributed video coding is a relatively new video coding approach, where compression is achieved by performing motion estimation at the decoder. Current techniques for decoder-side motion estimation rely on assumptions such as linear motion between the reference frames, and errors are corrected only after the frame has been partially decoded. In this paper, we propose a new approach with multiple predictors that accounts for inaccuracies in the decoder-side motion estimation process during decoding. Each predictor is assigned a weight, and the correlation between the original frame at the encoder and the set of predictors at the decoder is modeled at the decoder. This correlation information is then used during the decoding process. Results indicate average quality gains of up to 0.4 dB.
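    As a rough illustration of the multi-predictor idea, the sketch below fuses several decoder-side predictors with normalised weights and derives a Laplacian correlation parameter from their spread around the fused estimate. The function names, the averaging rule, and the variance-based parameter estimate are assumptions for illustration, not the paper's actual model.

```python
import numpy as np

def fuse_predictors(predictors, weights):
    """Combine several decoder-side predictors into one soft side-information
    frame as a weighted average (hypothetical fusion rule)."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()              # normalise the weights
    stack = np.stack(predictors).astype(float)     # shape (K, H, W)
    return np.tensordot(weights, stack, axes=1)    # weighted sum -> (H, W)

def laplacian_alpha(predictors, fused):
    """Estimate a Laplacian correlation parameter from the spread of the
    predictors around the fused estimate (an assumed estimator; the paper's
    exact correlation model is not reproduced here)."""
    residual = np.stack(predictors).astype(float) - fused
    sigma2 = residual.var()
    # for a zero-mean Laplacian, variance = 2 / alpha^2
    return np.sqrt(2.0 / max(sigma2, 1e-9))
```

    The resulting soft estimate and correlation parameter would then feed the channel (e.g. LDPC or turbo) decoder as side information.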

    Distributed Video Coding for Resource Critical Applications


    Hierarchical motion estimation for side information creation in Wyner-Ziv video coding

    Recently, several video coding solutions based on the distributed source coding paradigm have appeared in the literature. Among them, Wyner-Ziv video coding schemes allow a flexible distribution of the computational complexity between the encoder and the decoder, promising to fulfill the requirements of emerging applications such as visual sensor networks and wireless surveillance. To achieve a performance comparable to predictive video coding solutions, it is necessary to increase the quality of the side information, that is, the estimate of the original frame created at the decoder. In this paper, a hierarchical motion estimation (HME) technique using different scales and increasingly smaller block sizes is proposed to generate a more reliable estimation of the motion field. The HME technique is integrated into a well-known motion-compensated frame interpolation framework responsible for the creation of the side information in a Wyner-Ziv video decoder. The proposed technique achieves rate-distortion (RD) performance improvements of up to 7 dB compared to H.263+ Intra and 3 dB compared to H.264/AVC Intra.
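    A minimal coarse-to-fine block-matching sketch in the spirit of HME (not the paper's implementation): each level halves the block size, inherits the motion vector found for the enclosing coarser block, and refines it within a small search window. The block sizes, SAD cost, and search range are illustrative assumptions.

```python
import numpy as np

def block_sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(a.astype(float) - b.astype(float)).sum()

def hierarchical_me(ref, cur, block_sizes=(32, 16, 8), search=4):
    """Coarse-to-fine block matching between two key frames.
    Assumes each level halves the block size of the previous one."""
    h, w = cur.shape
    mv = {}                                  # (y, x) block origin -> (dy, dx)
    for bs in block_sizes:
        new_mv = {}
        for y in range(0, h - bs + 1, bs):
            for x in range(0, w - bs + 1, bs):
                # inherit the vector of the enclosing coarser block, if any
                py, px = mv.get((y - y % (bs * 2), x - x % (bs * 2)), (0, 0))
                best_cost, best_v = np.inf, (py, px)
                # refine within a small window around the inherited vector
                for dy in range(py - search, py + search + 1):
                    for dx in range(px - search, px + search + 1):
                        ry, rx = y + dy, x + dx
                        if 0 <= ry <= h - bs and 0 <= rx <= w - bs:
                            cost = block_sad(cur[y:y+bs, x:x+bs],
                                             ref[ry:ry+bs, rx:rx+bs])
                            if cost < best_cost:
                                best_cost, best_v = cost, (dy, dx)
                new_mv[(y, x)] = best_v
        mv = new_mv
    return mv
```

    In a motion-compensated frame interpolation setting, the resulting motion field would be split between the two key frames to interpolate the side-information frame halfway between them.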

    Side Information Generation in Distributed Video Coding

    The Distributed Video Coding (DVC) paradigm is based largely on two theorems of information theory and coding, the Slepian-Wolf theorem and the Wyner-Ziv theorem, introduced in 1973 and 1976 respectively. DVC bypasses the need to perform Motion Compensation (MC) and Motion Estimation (ME), which are largely responsible for the complexity of the encoder in conventional devices. Instead, DVC relies on exploiting the source statistics, totally or partially, at the decoder only. Wyner-Ziv coding, a particular case of DVC, is explored in detail in this thesis. In this scenario, two correlated sources are independently encoded, while the encoded streams are decoded jointly at a single decoder that exploits the correlation between them. Although the study of distributed coding dates back to the 1970s, practical efforts and developments in the field began only in the last decade. Upcoming applications (such as video surveillance, mobile cameras, and wireless sensor networks) can rely on DVC, as they do not have high computational capabilities and/or high storage capacity. Current coding paradigms, the MPEG-x and H.26x standards, predict frames by means of Motion Compensation and Motion Estimation, which leads to a highly complex encoder. In WZ coding, by contrast, the correlation between temporally adjacent frames is exploited only at the decoder, which results in a fairly low-complexity encoder. The main objective of this thesis is to investigate an improved scheme for Side Information (SI) generation in the DVC framework. SI frames, available at the decoder, are generated by means of a Radial Basis Function Network (RBFN). Frames are estimated from decoded key frames block by block. The RBFN is trained offline using training patterns collected from frames of standard video sequences.
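    A generic RBFN regressor of the kind the abstract describes is sketched below (fixed Gaussian centres, linear readout fitted offline by least squares). The 8x8 block size, feature layout, and training setup mentioned in the comments are assumptions for illustration, not the thesis's actual configuration.

```python
import numpy as np

class RBFN:
    """Minimal radial-basis-function network: fixed Gaussian centres and
    linear output weights fitted by least squares (a generic sketch)."""

    def __init__(self, centres, sigma):
        self.centres = np.asarray(centres, dtype=float)   # shape (C, D)
        self.sigma = float(sigma)
        self.w = None

    def _phi(self, X):
        # Gaussian activations of every input against every centre -> (N, C)
        d2 = ((X[:, None, :] - self.centres[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def fit(self, X, Y):
        # offline training on block patterns gathered from standard sequences
        self.w, *_ = np.linalg.lstsq(self._phi(X), Y, rcond=None)
        return self

    def predict(self, X):
        return self._phi(X) @ self.w

# Hypothetical usage: each row of X holds two co-located 8x8 blocks from the
# decoded key frames (flattened and concatenated), each row of Y the
# corresponding block of the frame to be estimated; after offline training,
# the network predicts side-information blocks one block at a time.
```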