    Motion prediction of depth video for depth-image-based rendering using don't care regions

    To enable synthesis of any desired intermediate view between two captured views at the decoder via depth-image-based rendering (DIBR), both texture and depth maps from the captured viewpoints must be encoded and transmitted in a format known as texture-plus-depth. In this paper, we focus on the compression of depth maps across time to lower the overall bitrate of the texture-plus-depth format. We observe that depth maps are not directly viewed, but only provide geometric information of the captured scene for view synthesis at the decoder. Thus, as long as the resulting geometric error does not lead to unacceptable synthesized view quality, each depth pixel only needs to be reconstructed at the decoder coarsely, within a tolerable range. We first formalize the notion of a tolerable range per depth pixel as a don't care region (DCR) by studying the sensitivity of the synthesized view distortion to that pixel's value: a sensitive depth pixel will have a narrow DCR, and vice versa. Given per-pixel DCRs, we then modify inter-prediction modes during motion prediction to search for a predictor block matching the per-pixel DCRs of a target block (rather than the fixed ground-truth depth signal in the target block), in order to lower the energy of the prediction residual for the block. We implemented our DCR-based motion prediction scheme inside H.264; our encoded bitstreams remain 100% standard compliant. We show experimentally that our proposed encoding scheme can reduce the bitrate of depth maps coded with baseline H.264 by over 28%. © 2012 IEEE
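
    To make the DCR-based matching idea concrete, the following Python sketch illustrates one plausible reading of it: each target depth pixel carries a [low, high] don't care interval, a candidate predictor pixel that falls inside that interval contributes zero residual, and a pixel outside it contributes its distance to the nearest interval bound. This is only an assumed illustration, not the authors' H.264 implementation; all function names, the SAD-like cost, and the full-search window are hypothetical choices.

    import numpy as np

    def dcr_matching_cost(predictor, dcr_low, dcr_high):
        # Residual energy of a candidate predictor block measured against
        # per-pixel don't care regions [dcr_low, dcr_high] instead of the
        # fixed ground-truth depth block. (Hypothetical sketch.)
        below = np.clip(dcr_low - predictor, 0, None)   # predictor under the interval
        above = np.clip(predictor - dcr_high, 0, None)  # predictor over the interval
        return np.sum(below + above)                    # zero cost inside the DCR

    def dcr_motion_search(ref_frame, dcr_low, dcr_high, block_xy, block=16, radius=8):
        # Exhaustive search over a small window, keeping the motion vector
        # whose predictor block best fits the target block's DCRs.
        y0, x0 = block_xy
        best_mv, best_cost = (0, 0), np.inf
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                y, x = y0 + dy, x0 + dx
                if y < 0 or x < 0 or y + block > ref_frame.shape[0] or x + block > ref_frame.shape[1]:
                    continue
                cand = ref_frame[y:y + block, x:x + block]
                cost = dcr_matching_cost(cand, dcr_low, dcr_high)
                if cost < best_cost:
                    best_mv, best_cost = (dx, dy), cost
        return best_mv, best_cost

    Because any predictor value inside a pixel's DCR is "free", wide DCRs (insensitive pixels) give the search more freedom to find low-energy residuals, which is the mechanism the abstract credits for the bitrate reduction.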