
    Wavelet-Based Multi-View Video Coding with Spatial Scalability

    Abstract — In this paper, we propose two wavelet-based frameworks that allow fully scalable multi-view video coding. Using a 4-D wavelet transform, both schemes generate a bitstream that can be truncated to obtain a temporally, view-directionally, and/or spatially downscaled representation of the coded multi-view video sequence. Well-known wavelet-based scalable coding schemes for single-view video have been adopted and extended to meet the specific needs of scalable multi-view video coding. Motion-compensated temporal filtering (MCTF) is applied to each camera's video sequence to exploit temporal correlation, and inter-view dependencies are exploited with disparity-compensated view filtering (DCVF). For spatial decorrelation, a spatial wavelet transform is applied either both before and after the temporal-view-directional decomposition (2D+T+V+2D scheme) or only after it (T+V+2D scheme). This paper shows the influence of these two approaches on spatial scalability, as well as the superior coding efficiency of both codecs compared with simulcast coding.
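    The lifting structure underlying MCTF can be illustrated in a heavily simplified form. The sketch below applies a single level of a Haar lifting transform along the frame (temporal) axis of a video volume, producing low-pass (temporal average) and high-pass (temporal residual) subbands with perfect reconstruction. This is only a conceptual illustration: the actual MCTF and DCVF in the paper align pixels along motion and disparity trajectories before filtering, which is omitted here, and the function names are hypothetical.

    ```python
    import numpy as np

    def haar_lift_temporal(frames):
        """One level of a Haar lifting transform along the frame axis.
        A motion-free simplification of MCTF: predict (high-pass),
        then update (low-pass). Assumes an even number of frames."""
        even, odd = frames[0::2], frames[1::2]
        high = odd - even          # predict: temporal residual
        low = even + high / 2      # update: temporal average
        return low, high

    def haar_unlift_temporal(low, high):
        """Invert the lifting steps to recover the original frames."""
        even = low - high / 2
        odd = high + even
        frames = np.empty((2 * even.shape[0],) + even.shape[1:], dtype=low.dtype)
        frames[0::2], frames[1::2] = even, odd
        return frames

    # Tiny example: 4 "frames" of 2x2 pixels.
    frames = np.arange(16, dtype=float).reshape(4, 2, 2)
    low, high = haar_lift_temporal(frames)     # temporal decomposition
    rec = haar_unlift_temporal(low, high)      # perfect reconstruction
    ```

    Temporal scalability follows from keeping only the low-pass subband (half the frame rate); applying the same lifting idea across the view axis, and a 2-D spatial wavelet per frame, yields the 4-D decomposition described in the abstract.
    
    
    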