
    Joint Reconstruction of Multi-view Compressed Images

    The distributed representation of correlated multi-view images is an important problem that arises in vision sensor networks. This paper concentrates on the joint reconstruction problem, where the distributively compressed correlated images are jointly decoded in order to improve the reconstruction quality of all the compressed images. We consider a scenario where the images captured at different viewpoints are encoded independently using common coding solutions (e.g., JPEG, H.264 intra) with a balanced rate distribution among the cameras. A central decoder first estimates the underlying correlation model from the independently compressed images, which is then used for the joint signal recovery. The joint reconstruction is cast as a constrained convex optimization problem that reconstructs total-variation (TV) smooth images complying with the estimated correlation model. At the same time, constraints force the reconstructed images to be consistent with their compressed versions. Experiments show that the proposed joint reconstruction scheme outperforms independent reconstruction in terms of image quality for a given target bit rate. In addition, the decoding performance of the proposed algorithm compares favorably to state-of-the-art distributed coding schemes based on disparity learning and on the DISCOVER codec.
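
    The joint recovery step described above can be phrased as a small convex program. The following sketch is an illustration of the general idea, not the authors' code: it uses CVXPY to reconstruct two views under a TV prior, quantization-interval consistency, and an assumed linear disparity-warping operator. The bounds lower1/upper1/lower2/upper2, the warp matrix, and the tolerance tol are hypothetical inputs standing in for what a real decoder would derive from the bit streams and the estimated correlation model.

    # Illustrative sketch, not the authors' implementation: joint reconstruction
    # of two distributively compressed views as a TV-regularized convex program.
    import cvxpy as cp

    def joint_reconstruct(lower1, upper1, lower2, upper2, warp, tol):
        # lower*/upper*: per-pixel bounds implied by the quantized, compressed
        # versions of each view (hypothetical inputs). warp: assumed sparse
        # (h*w x h*w) matrix modelling the disparity-based correlation.
        h, w = lower1.shape
        x1 = cp.Variable((h, w))
        x2 = cp.Variable((h, w))

        # TV smoothness prior on both reconstructed views.
        objective = cp.Minimize(cp.tv(x1) + cp.tv(x2))

        constraints = [
            # Consistency with the compressed versions (quantization intervals).
            x1 >= lower1, x1 <= upper1,
            x2 >= lower2, x2 <= upper2,
            # Estimated inter-view correlation: view 2 stays close to the
            # disparity-warped view 1.
            cp.norm(cp.vec(x2) - warp @ cp.vec(x1), 1) <= tol,
        ]
        cp.Problem(objective, constraints).solve()
        return x1.value, x2.value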

    A Distributed Video Coding System for Multi View Video Plus Depth

    Multi-view video plus depth (MVD) is gathering considerable attention, as witnessed by recent standardization activity, since its rich information about the geometry of the scene allows high-quality synthesis of virtual viewpoints. Distributed video coding of this kind of content is a challenging problem whose solution could enable new services such as interactive multi-view streaming. In this work we propose to exploit the geometrical information of the MVD format in order to estimate inter-view occlusions without communication among cameras. Experimental results show a bit-rate reduction of up to 77% at low bit rates with respect to state-of-the-art architectures.
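
    A minimal sketch of how depth alone can expose inter-view occlusions at the decoder. This is an assumed illustration of the general principle, not the paper's actual algorithm: for rectified cameras, each reference-view pixel is projected into the target view using the disparity d = f * B / Z, and any target pixel that receives no projection is flagged as occluded. The function name and arguments are hypothetical.

    # Assumed sketch: flag target-view pixels that no reference-view pixel
    # projects onto, using only the reference depth map (no camera communication).
    import numpy as np

    def occlusion_mask(depth_ref, focal, baseline):
        h, w = depth_ref.shape
        covered = np.zeros((h, w), dtype=bool)
        # Horizontal disparity for rectified cameras: d = f * B / Z.
        disparity = focal * baseline / np.maximum(depth_ref, 1e-6)
        target_cols = np.arange(w)[None, :] - np.round(disparity).astype(int)
        rows = np.repeat(np.arange(h)[:, None], w, axis=1)
        valid = (target_cols >= 0) & (target_cols < w)
        covered[rows[valid], target_cols[valid]] = True
        return ~covered  # True where the target view is occluded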

    Fusion of Global and Local Motion Estimation Using Foreground Objects for Distributed Video Coding

    The side information in distributed video coding is estimated using the available decoded frames and exploited for the decoding and reconstruction of other frames. The quality of the side information has a strong impact on the performance of distributed video coding. Here we propose a new approach that combines both global and local side information to improve coding performance. Since the background pixels in a frame are assigned to global estimation and the foreground objects to local estimation, the foreground objects in the side information need to be estimated from the backward and forward foreground objects, while the background pixels are taken directly from the global side information. Specifically, elastic curves and local motion compensation are used to generate the foreground object masks in the side information. Experimental results show that, as far as rate-distortion performance is concerned, the proposed approach achieves a PSNR improvement of up to 1.39 dB for a GOP size of 2, and up to 4.73 dB for larger GOP sizes, with respect to the reference DISCOVER codec.
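
    The pixel-assignment step described in the abstract can be summarized very compactly. The sketch below is an assumption about that final fusion step only (the mask generation via elastic curves and local motion compensation is not reproduced): foreground pixels are copied from the local estimation and background pixels from the global estimation.

    # Assumed illustration of mask-based fusion of global and local side information.
    import numpy as np

    def fuse_side_information(si_global, si_local, fg_mask):
        # si_global, si_local: HxW side-information frames; fg_mask: boolean HxW,
        # True where the pixel belongs to a foreground object.
        return np.where(fg_mask, si_local, si_global)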

    Distributed Video Coding for Multiview and Video-plus-depth Coding


    Fusion schemes for multiview distributed video coding

    The performance of distributed video coding strongly depends on the quality of the side information built at the decoder. In multi-view schemes, correlations in both the temporal and the inter-view directions are exploited, generally yielding two estimations that need to be merged. This step, called fusion, greatly affects the performance of the coding scheme; however, existing methods do not achieve acceptable performance in all cases, especially when one of the estimations is of poor quality, since they are then unable to discard it. This paper provides a detailed review of existing fusion methods between temporal and inter-view side information and proposes new promising techniques. Experimental results show that these methods perform well in a variety of configurations. © EURASIP, 2009
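
    One simple fusion rule of the kind surveyed in the paper, shown here as a generic sketch under assumed inputs and not necessarily one of the proposed techniques: block by block, keep whichever estimation is closer to an already decoded reference frame, so that a locally poor estimation can be discarded.

    # Generic block-wise fusion sketch (assumed, not taken from the paper).
    import numpy as np

    def fuse_si(si_temporal, si_interview, reference, block=8):
        h, w = reference.shape
        fused = np.empty_like(reference)
        for y in range(0, h, block):
            for x in range(0, w, block):
                sl = (slice(y, min(y + block, h)), slice(x, min(x + block, w)))
                # Mean absolute error of each estimation against the reference block.
                err_t = np.abs(si_temporal[sl].astype(float) - reference[sl]).mean()
                err_v = np.abs(si_interview[sl].astype(float) - reference[sl]).mean()
                fused[sl] = si_temporal[sl] if err_t <= err_v else si_interview[sl]
        return fused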
