
    Depth map compression via 3D region-based representation

    In 3D video, view synthesis is used to create new virtual views between encoded camera views. Errors in the coding of the depth maps introduce geometry inconsistencies in the synthesized views. In this paper, a new 3D plane representation of the scene is presented that improves the performance of current standard video codecs in the view synthesis domain. Two image segmentation algorithms are proposed for generating a color segmentation and a depth segmentation. Using both partitions, depth maps are segmented into regions free of sharp discontinuities without having to explicitly signal all depth edges. The resulting regions are represented with a planar model in the 3D world scene. This 3D representation allows efficient encoding while preserving the 3D characteristics of the scene. The 3D planes also open up the possibility of coding multiview images with a single representation.
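    A minimal sketch of the piecewise-planar idea described above, not the authors' code: each segmented depth region is approximated by a plane depth = a*x + b*y + c fitted by least squares, and an approximated depth map is rebuilt from the fitted planes. The region label map and the function names are assumptions for illustration.

    # Sketch only: per-region plane fitting for a segmented depth map.
    import numpy as np

    def fit_region_planes(depth, labels):
        """Return {region_id: (a, b, c)} with depth ~ a*x + b*y + c per region."""
        planes = {}
        for region_id in np.unique(labels):
            ys, xs = np.nonzero(labels == region_id)
            A = np.column_stack([xs, ys, np.ones_like(xs)])
            coeffs, *_ = np.linalg.lstsq(A, depth[ys, xs], rcond=None)
            planes[region_id] = tuple(coeffs)
        return planes

    def reconstruct_depth(shape, labels, planes):
        """Rebuild a piecewise-planar depth map from the per-region plane models."""
        ys, xs = np.indices(shape)
        out = np.zeros(shape, dtype=np.float64)
        for region_id, (a, b, c) in planes.items():
            mask = labels == region_id
            out[mask] = a * xs[mask] + b * ys[mask] + c
        return out

    Only the three plane coefficients per region (plus the segmentation) need to be signaled, which is what makes the representation compact compared to coding every depth sample.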

    Optimized Data Representation for Interactive Multiview Navigation

    In contrast to traditional media streaming services, where a single media content is delivered to all users, interactive multiview navigation applications enable users to choose their own viewpoints and freely navigate in a 3-D scene. This interactivity brings new challenges beyond the classical rate-distortion trade-off, which considers only compression performance and viewing quality. On the one hand, interactivity requires enough viewpoints for rich navigation; on the other hand, it requires low bandwidth and delay costs for smooth transitions between views. In this paper, we formally describe the novel trade-off between navigation interactivity and the classical rate-distortion criterion. Based on this formulation, we seek the optimal design of the data representation by introducing novel rate and distortion models together with practical solving algorithms. Experiments show that the proposed data representation outperforms the baseline solution by providing lower resource consumption and higher visual quality in all navigation configurations, which confirms its potential for practical interactive navigation systems.
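    An illustrative sketch of the kind of trade-off the abstract describes, not the paper's actual algorithm or models: a subset of camera views is chosen to minimize a Lagrangian cost D + lambda*R over a navigation range. The rate and distortion models below are placeholders invented for the example.

    # Sketch only: exhaustive search over stored-view subsets under a D + lambda*R cost.
    from itertools import combinations

    def synthesis_distortion(viewpoint, stored_views):
        """Placeholder distortion model: grows with distance to the nearest stored view."""
        return min(abs(viewpoint - v) for v in stored_views) ** 2

    def select_views(candidate_views, navigation_range, rate_per_view, lam):
        """Pick the stored-view subset minimizing total distortion plus lam * rate."""
        best_cost, best_subset = float("inf"), None
        for k in range(1, len(candidate_views) + 1):
            for subset in combinations(candidate_views, k):
                rate = k * rate_per_view
                dist = sum(synthesis_distortion(p, subset) for p in navigation_range)
                cost = dist + lam * rate
                if cost < best_cost:
                    best_cost, best_subset = cost, subset
        return best_subset, best_cost

    # Example: 8 candidate cameras, navigation allowed to any integer viewpoint 0..7.
    views, cost = select_views(list(range(8)), list(range(8)), rate_per_view=100.0, lam=0.5)

    Raising lambda biases the choice toward fewer stored views (lower rate, coarser navigation); lowering it favors denser viewpoints and smoother transitions, which is exactly the interactivity/rate tension formalized in the paper.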

    Segmentation-based Method of Increasing The Depth Maps Temporal Consistency

    In this paper, a modification of graph-based depth estimation is presented. The purpose of the proposed modification is to increase the quality of the estimated depth maps, reduce the estimation time, and increase the temporal consistency of the depth maps. The modification is based on image segmentation using superpixels: in the first step, the segmentation of previous frames is reused for the currently processed frame in order to reduce the overall depth estimation time. In the next step, the depth map of the previous frame is used as the initial values in the optimization of the depth map estimated for the current frame. This results in a better representation of object silhouettes in the depth maps and in a reduced computational complexity of the depth estimation process. To evaluate the proposed modification, the authors performed experiments on a set of multiview test sequences that varied in content and camera arrangement. The results confirm the increase in depth map quality: depth maps calculated with the proposed modification are of higher quality than those from the unmodified estimation method, regardless of the number of optimization cycles performed. The proposed modification therefore allows depth of better quality to be estimated with an almost 40% reduction in estimation time. Moreover, the temporal consistency, measured through the reduction of the bitrate of encoded virtual views, is also considerably increased.
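    A minimal sketch of the two reuse steps described above, under stated assumptions and not the authors' estimator: the previous frame's superpixel segmentation is reused when the image has barely changed, and the previous depth map initializes the per-superpixel depth for the current frame. SLIC from scikit-image stands in for the segmentation step; the thresholds and function names are assumptions.

    # Sketch only: temporal reuse of segmentation and depth initialization.
    import numpy as np
    from skimage.segmentation import slic

    def segment_or_reuse(rgb_frame, prev_rgb_frame, prev_labels, change_threshold=8.0):
        """Reuse the previous superpixel labels when the frame barely changed, else re-segment."""
        if prev_labels is not None and prev_rgb_frame is not None:
            mean_change = np.mean(np.abs(rgb_frame.astype(np.float32) - prev_rgb_frame.astype(np.float32)))
            if mean_change < change_threshold:
                return prev_labels
        return slic(rgb_frame, n_segments=2000, compactness=10.0, start_label=0)

    def init_depth_from_previous(labels, prev_depth):
        """Initialize each superpixel with the median depth it had in the previous frame."""
        init = np.zeros(prev_depth.shape, dtype=prev_depth.dtype)
        for seg_id in np.unique(labels):
            mask = labels == seg_id
            init[mask] = np.median(prev_depth[mask])
        return init  # starting point for the per-frame depth optimization

    Starting the optimization from the previous frame's depth is what drives both the reported runtime reduction and the improved temporal consistency, since neighboring frames then converge to similar depth values.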

    3D video compression based on high efficiency video coding

    With the advent of autostereoscopic displays, the question arises of how to efficiently compress the video information needed by such displays. Additionally, for gradual market acceptance of this new technology, it is valuable to have a solution that offers forward compatibility with the stereo 3D video used today. In this paper, a multiview compression scheme is provided that makes use of the efficient single-view coding tools of High Efficiency Video Coding (HEVC). Although efficient single-view compression can be obtained with HEVC alone, a multiview adaptation of this standard, currently under development, is proposed, offering additional coding gains. On average, the total bitrate for the texture information can be reduced by 37.2% compared to simulcast HEVC. For depth map compression, the gains largely depend on the quality of the captured content. Additionally, a forward-compatible solution is proposed that offers a gradual upgrade path from H.264/AVC-based stereoscopic 3D systems to an HEVC-based autostereoscopic environment. With the proposed system, significant rate savings compared to Multiview Video Coding (MVC) are demonstrated.
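    An illustrative arithmetic sketch, not the paper's evaluation: a "bitrate reduction versus simulcast" figure such as the 37.2% quoted above is computed from total bitrates measured at a matched quality point. All numbers below are made up for the example.

    # Sketch only: rate-saving arithmetic for multiview versus simulcast coding.
    def bitrate_reduction(simulcast_kbps, multiview_kbps):
        """Percentage of rate saved by the multiview codec relative to simulcast."""
        return 100.0 * (simulcast_kbps - multiview_kbps) / simulcast_kbps

    # Hypothetical per-view rates at the same quality operating point.
    simulcast_total = 3 * 4200.0           # three independently coded HEVC views
    multiview_total = 4200.0 + 2 * 1350.0  # base view plus two cheaper dependent views
    print(f"rate saving: {bitrate_reduction(simulcast_total, multiview_total):.1f}%")

    The saving comes from coding the non-base views predictively against the base view, so only the base view carries the full single-view rate.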