Optimization of Occlusion-Inducing Depth Pixels in 3-D Video Coding
The optimization of occlusion-inducing depth pixels in depth map coding has
received little attention in the literature, since their associated texture
pixels are occluded in the synthesized view and their effect on the synthesized
view is considered negligible. However, occlusion-inducing depth pixels still
consume bits to be transmitted, and they induce geometry distortion that
inherently exists in the synthesized view. In this paper, we propose an
efficient depth map coding scheme specifically for the occlusion-inducing depth
pixels by using allowable depth distortions. First, we formulate the problem of
minimizing the overall geometry distortion in the occlusion subject to a bit
rate constraint, for which the depth distortion is properly adjusted within the
set of allowable depth distortions that introduce the same disparity error as
the initial depth distortion. Then, we propose a dynamic programming solution
to find the optimal depth distortion vector for the occlusion. The proposed
algorithm improves coding efficiency without altering the occlusion order.
Simulation results confirm the performance improvement compared to other
existing algorithms.
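The abstract's constrained minimization admits a classic dynamic-programming shape: each occluded pixel has a small set of allowable depth distortions (same disparity error, different bit cost and geometry distortion), and one option must be chosen per pixel so that total distortion is minimal under a bit budget. The following is a minimal sketch of that generic rate-constrained DP, not the paper's exact algorithm; the function name, the cost/distortion tuples, and the integer bit budget are all illustrative assumptions.

```python
from math import inf

def optimal_distortion_vector(candidates, bit_budget):
    """candidates[i]: list of (bit_cost, geometry_distortion) options for
    occluded pixel i. Returns (min_total_distortion, chosen option index
    per pixel), or (inf, None) if no assignment fits the bit budget."""
    n = len(candidates)
    # dp[b] = minimal total distortion with exactly b bits spent so far
    dp = [0.0] + [inf] * bit_budget
    # parent[i][b] = (previous bit count, option index) used to reach state b
    parent = [[None] * (bit_budget + 1) for _ in range(n)]
    for i, options in enumerate(candidates):
        ndp = [inf] * (bit_budget + 1)
        for b in range(bit_budget + 1):
            if dp[b] == inf:
                continue
            for k, (cost, dist) in enumerate(options):
                nb = b + cost
                if nb <= bit_budget and dp[b] + dist < ndp[nb]:
                    ndp[nb] = dp[b] + dist
                    parent[i][nb] = (b, k)
        dp = ndp
    best_b = min(range(bit_budget + 1), key=lambda b: dp[b])
    if dp[best_b] == inf:
        return inf, None
    # Backtrack the optimal depth distortion vector
    choices = [0] * n
    b = best_b
    for i in range(n - 1, -1, -1):
        b, choices[i] = parent[i][b]
    return dp[best_b], choices
```

For two pixels with options [(1 bit, 5.0), (2 bits, 1.0)] and [(1 bit, 4.0), (3 bits, 0.5)] under a 4-bit budget, the sketch picks the 2-bit and 1-bit options (total distortion 5.0), illustrating how spending bits where they reduce distortion most yields the optimal vector.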
3D video coding and transmission
The capture, transmission, and display of 3D content has gained a lot of
attention in the last few years. 3D multimedia content is no longer confined
to cinema theatres but is being transmitted using stereoscopic video over
satellite, shared on Blu-Ray™ disks, or sent over Internet technologies.
Stereoscopic displays are needed at the receiving end, and the viewer needs to
wear special glasses to present the two versions of the video to the human
vision system, which then generates the 3D illusion. To be more effective and
improve the immersive experience, more views are acquired from a larger number
of cameras and presented on different displays, such as autostereoscopic and
light-field displays. These multiple views, combined with depth data, also
allow enhanced user experiences and new forms of interaction with the 3D
content from virtual viewpoints. This type of audiovisual information is
represented by a huge amount of data that needs to be compressed and
transmitted over bandwidth-limited channels. Part of the COST Action IC1105
"3D Content Creation, Coding and Transmission over Future Media Networks"
(3DConTourNet) focuses on this research challenge.
Depth map compression via 3D region-based representation
In 3D video, view synthesis is used to create new virtual views between
encoded camera views. Errors in the coding of the depth maps introduce
geometry inconsistencies in synthesized views. In this paper, a new 3D plane
representation of the scene is presented which improves the performance of
current standard video codecs in the view synthesis domain. Two image
segmentation algorithms are proposed for generating a color and depth
segmentation. Using both partitions, depth maps are segmented into regions
without sharp discontinuities, without having to explicitly signal all depth
edges. The resulting regions are represented using a planar model in the 3D
world scene. This 3D representation allows an efficient encoding while
preserving the 3D characteristics of the scene. The 3D planes open up the
possibility to code multiview images with a unique representation.
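The core idea of the planar model is that a smooth depth region can be summarized by just three plane parameters instead of per-pixel depth values. Below is a minimal sketch of that representation, assuming a simple least-squares fit of z(x, y) = a·x + b·y + c to a region's depth samples; the function names and the use of NumPy's lstsq are illustrative assumptions, not the paper's actual encoding.

```python
import numpy as np

def fit_plane(xs, ys, zs):
    """Least-squares plane z = a*x + b*y + c through depth samples
    of one segmented region; returns the parameters (a, b, c)."""
    A = np.column_stack([xs, ys, np.ones(len(xs))])
    (a, b, c), *_ = np.linalg.lstsq(A, zs, rcond=None)
    return a, b, c

def predict_depth(params, xs, ys):
    """Reconstruct the region's depth from the three plane parameters."""
    a, b, c = params
    return a * np.asarray(xs) + b * np.asarray(ys) + c

# Usage: a region whose depth is exactly planar is recovered losslessly
# from only (a, b, c), which is what makes the representation compact.
xs = np.array([0.0, 1.0, 0.0, 1.0, 2.0])
ys = np.array([0.0, 0.0, 1.0, 1.0, 2.0])
zs = 0.5 * xs - 0.25 * ys + 10.0
params = fit_plane(xs, ys, zs)
```

For real depth regions the fit is approximate, and the residual between the planar prediction and the true depth is what a codec would need to bound or signal.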