
    Loss-resilient Coding of Texture and Depth for Free-viewpoint Video Conferencing

    Free-viewpoint video conferencing allows a participant to observe the remote 3D scene from any freely chosen viewpoint. An intermediate virtual viewpoint image is commonly synthesized via depth-image-based rendering (DIBR) from two pairs of transmitted texture and depth maps captured at two neighboring viewpoints. To maintain high quality in the synthesized images, it is imperative to contain the adverse effects of network packet losses that may arise during texture and depth video transmission. Towards this end, we develop an integrated approach that exploits the representation redundancy inherent in the multiple streamed videos: a voxel in the 3D scene that is visible to two captured views is sampled and coded twice, once in each view. In particular, at the receiver we first develop an error concealment strategy that adaptively blends corresponding pixels in the two captured views during DIBR, so that pixels from the more reliably transmitted view are weighted more heavily. We then couple it with a sender-side optimization of reference picture selection (RPS) during real-time video coding, so that blocks containing samples of voxels that are visible in both views are more error-resiliently coded in one view only, given that adaptive blending will erase errors in the other view. Further, the sensitivity of synthesized view distortion to texture versus depth errors is analyzed, so that the relative importance of texture and depth code blocks can be computed for system-wide RPS optimization. Experimental results show that the proposed scheme can outperform the use of a traditional feedback channel by up to 0.82 dB on average at an 8% packet loss rate, and by as much as 3 dB for particular frames.
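    As a rough illustration of the receiver-side adaptive blending described above, the sketch below blends two pixels warped from the left and right captured views into the virtual view using a simple reliability-weighted average; the function and parameter names are hypothetical, and the paper derives its actual weights differently.

```python
import numpy as np

def blend_virtual_pixel(pix_left, pix_right, rel_left, rel_right):
    """Blend corresponding pixels warped from the two captured views.

    pix_left / pix_right : color samples projected into the virtual view.
    rel_left / rel_right : per-pixel reliability in [0, 1], e.g. 1.0 for a
    correctly received block and a lower value for a concealed one.
    (Illustrative sketch only; not the paper's exact weighting.)
    """
    w_left = rel_left / (rel_left + rel_right + 1e-9)
    return w_left * np.asarray(pix_left) + (1.0 - w_left) * np.asarray(pix_right)
```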

    Computer Vision and Image Understanding

    A compact visual representation, called the 3D layered, adaptive-resolution, and multi-perspective panorama (LAMP), is proposed for representing large-scale 3D scenes with large variations of depths and obvious occlusions. Two kinds of 3D LAMP representations are proposed: the relief-like LAMP and the image-based LAMP. Both types of LAMPs concisely represent almost all the information from a long image sequence. Methods to construct LAMP representations from video sequences with dominant translation are provided. The relief-like LAMP is basically a single extended multi-perspective panoramic view image. Each pixel has a pair of texture and depth values, but each pixel may also have multiple pairs of texture-depth values to represent occlusion in layers, in addition to adaptive resolution changing with depth. The image-based LAMP, on the other hand, consists of a set of multi-perspective layers, each of which has a pair of 2D texture and depth maps, but with adaptive time-sampling scales depending on the depths of scene points. Several examples of 3D LAMP construction for real image sequences are given. The 3D LAMP is a concise and powerful representation for image-based rendering.
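    To make the relief-like LAMP concrete, here is a minimal sketch of a per-pixel record that keeps one or more texture-depth pairs so occluded layers can be stored behind the front surface; the class and field names are illustrative and not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LampPixel:
    """One pixel of a relief-like LAMP: ordinarily a single (color, depth)
    pair, but possibly several pairs when occluded layers are retained."""
    samples: List[Tuple[Tuple[int, int, int], float]] = field(default_factory=list)

    def add_sample(self, color, depth):
        # Keep layers ordered front to back so the nearest surface comes first.
        self.samples.append((color, depth))
        self.samples.sort(key=lambda s: s[1])
```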

    Rendering Geometry with Relief Textures

    We propose to render geometry using an image-based representation. Geometric information is encoded by a texture with depth and rendered by rasterizing the bounding box geometry. For each resulting fragment, a shader computes the intersection of the corresponding ray with the geometry, using pre-computed information to accelerate the computation. Great care is taken to be artifact-free even when zoomed in or at grazing angles. We integrate our algorithm with reverse perspective projection to represent a larger class of shapes. The extra texture requirement is small and the rendering cost is output-sensitive, so our representation can be used to model many parts of a 3D scene.
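    The per-fragment intersection step can be pictured with the small CPU-side sketch below; it uses a fixed-step linear search along the ray through the depth texture, whereas the paper relies on pre-computed information to accelerate and robustify this search, so the function and its parameters are illustrative only.

```python
import numpy as np

def intersect_ray_with_relief(entry_point, ray_dir, sample_depth, num_steps=64):
    """March a ray through texture space (u, v, depth) and return the first
    sample whose ray depth falls below the stored height field.
    sample_depth(u, v) is an assumed accessor for the depth texture."""
    p = np.asarray(entry_point, dtype=float)
    step = np.asarray(ray_dir, dtype=float) / num_steps
    for _ in range(num_steps):
        u, v, ray_depth = p
        if ray_depth >= sample_depth(u, v):  # ray has passed below the surface
            return p
        p = p + step
    return None  # no hit inside the bounding box
```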

    5D Covariance Tracing for Efficient Defocus and Motion Blur

    The rendering of effects such as motion blur and depth-of-field requires costly 5D integrals. We dramatically accelerate their computation through adaptive sampling and reconstruction based on the prediction of the anisotropy and bandwidth of the integrand. For this, we develop a new frequency analysis of the 5D temporal light-field, and show that first-order motion can be handled through simple changes of coordinates in 5D. We further introduce a compact representation of the spectrum using the covariance matrix and Gaussian approximations. We derive update equations for the 5 × 5 covariance matrices for each atomic light transport event, such as transport, occlusion, BRDF, texture, lens, and motion. The focus on atomic operations makes our work general, and removes the need for special-case formulas. We present a new rendering algorithm that computes 5D covariance matrices on the image plane by tracing paths through the scene, focusing on the single-bounce case. This allows us to reduce sampling rates when appropriate and perform reconstruction of images with complex depth-of-field and motion blur effects.
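    As one example of an atomic covariance update, free-space transport acts as a shear between the spatial and angular coordinates of the light field, and a 5 × 5 covariance matrix then transforms congruently; the sketch below assumes a (x, y, theta, phi, t) dimension ordering and sign convention, which may differ from the paper's.

```python
import numpy as np

def transport_covariance(cov, distance):
    """Apply free-space transport to a 5x5 light-field covariance matrix.

    Transport by `distance` shears space by angle (x' = x + d*theta,
    y' = y + d*phi in the assumed parameterization), so the covariance
    transforms congruently as T @ cov @ T.T.  The dimension ordering
    (x, y, theta, phi, t) and the sign convention are assumptions."""
    T = np.eye(5)
    T[0, 2] = distance  # x picks up a contribution from theta
    T[1, 3] = distance  # y picks up a contribution from phi
    return T @ cov @ T.T
```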

    Interactive Vegetation Rendering with Slicing and Blending

    Detailed and interactive 3D rendering of vegetation is one of the challenges of traditional polygon-oriented computer graphics, due to the large geometric complexity of even simple plants. In this paper we introduce a simplified image-based rendering approach based solely on alpha-blended textured polygons. The simplification is based on the limitations of human perception of complex geometry. Our approach renders dozens of detailed trees in real time with off-the-shelf hardware, while providing significantly improved image quality over existing real-time techniques. The method uses ordinary mesh-based rendering for the solid parts of a tree, its trunk and limbs. The sparse parts of a tree, its twigs and leaves, are instead represented with a set of slices, an image-based representation. A slice is a planar layer, represented with an ordinary alpha or color-keyed texture; a set of parallel slices is a slicing. Rendering from an arbitrary viewpoint in a 360-degree circle around the center of a tree is achieved by blending between the nearest two slicings. In our implementation, only 6 slicings with 5 slices each are sufficient to visualize a tree for a moving or stationary observer with perceptual quality similar to that of the original model.
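    A minimal sketch of the view-dependent blending between the two nearest slicings is given below, assuming the slicings are evenly spaced around the tree; the function name and the linear weighting are illustrative, and the paper's exact weights may differ.

```python
import math

def nearest_slicings(view_azimuth_deg, num_slicings=6):
    """Return the indices of the two slicings nearest to the camera azimuth
    together with their blend weights (weights sum to 1)."""
    spacing = 360.0 / num_slicings          # 60 degrees for 6 slicings
    t = (view_azimuth_deg % 360.0) / spacing
    i = int(t) % num_slicings               # slicing just "behind" the view direction
    j = (i + 1) % num_slicings              # next slicing around the circle
    w_j = t - math.floor(t)                 # fractional progress toward slicing j
    return (i, 1.0 - w_j), (j, w_j)
```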