    On the rendering and post-processing of simplified dynamic light fields with depth information

    This paper studies the rendering and post-processing of a dynamic image-based representation called simplified dynamic light fields (SDLFs), or plenoptic videos, with depth information. The user viewpoints are limited to a camera line to simplify the capturing and compression processes. By associating each image pixel with its depth value, methods for improving the rendering quality and detecting occlusions are proposed. Because sampling is limited at depth discontinuities, adaptive lowpass filtering is applied to the detected occluded regions near object boundaries in order to suppress aliasing artifacts. Rendering results using computer-generated images show considerable improvement in rendering quality, even for dynamic scenes with large depth variations.
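    The boundary post-processing step described above can be sketched in a few lines: detect depth discontinuities from the depth-map gradient, then low-pass filter only the pixels near them. This is a minimal illustration, not the paper's adaptive filter design; the threshold and kernel size are hypothetical.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple separable box filter (stand-in for a lowpass filter)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    # Horizontal then vertical averaging.
    h = np.stack([padded[:, i:i + img.shape[1]] for i in range(k)]).mean(0)
    return np.stack([h[i:i + img.shape[0], :] for i in range(k)]).mean(0)

def smooth_occlusion_boundaries(image, depth, grad_thresh=0.5, k=3):
    """Low-pass filter rendered pixels near depth discontinuities.

    Illustrative sketch: discontinuities are taken to be pixels where the
    depth-gradient magnitude exceeds a threshold, and only those pixels
    are replaced by their blurred values.
    """
    gy, gx = np.gradient(depth.astype(float))
    near_edge = np.hypot(gx, gy) > grad_thresh
    out = image.astype(float).copy()
    blurred = box_blur(out, k)
    out[near_edge] = blurred[near_edge]
    return out
```

    Pixels far from any depth edge pass through untouched, so the filtering suppresses aliasing at object boundaries without softening the rest of the rendered view.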

    An object-based approach to plenoptic videos

    This paper proposes an object-based approach to plenoptic videos, in which the plenoptic video sequences are segmented into image-based rendering (IBR) objects, each with its own image sequence, depth map, and other relevant information such as shape. This allows desirable functionalities such as scalability of contents, error resilience, and interactivity with individual IBR objects to be supported. A portable capturing system consisting of two linear camera arrays, each hosting six JVC video cameras, was developed to verify the proposed approach. Rendering and compression results on real-world scenes demonstrate the usefulness and good quality of the proposed approach. © 2005 IEEE.

    Survey of image-based representations and compression techniques

    In this paper, we survey the techniques for image-based rendering (IBR) and for compressing image-based representations. Unlike traditional three-dimensional (3-D) computer graphics, in which the 3-D geometry of the scene is known, IBR techniques render novel views directly from input images. IBR techniques can be classified into three categories according to how much geometric information is used: rendering without geometry, rendering with implicit geometry (i.e., correspondence), and rendering with explicit geometry (either approximate or accurate). We discuss the characteristics of these categories and their representative techniques. IBR techniques demonstrate a surprisingly diverse range in their extent of use of images and geometry in representing 3-D scenes. We explore the issues in trading off the use of images and geometry by revisiting plenoptic-sampling analysis and the notions of view dependency and geometric proxies. Finally, we highlight compression techniques specifically designed for image-based representations. Such compression techniques are important in making IBR techniques practical.

    A spectral analysis for light field rendering

    Image-based rendering using the plenoptic function is an efficient technique for re-rendering at different viewpoints. In this paper, we study the sampling and reconstruction of the plenoptic function as a multidimensional sampling problem. The spectral support of the plenoptic function is found to be an important quantity in the efficient sampling and reconstruction of such functions. A spectral analysis for the light field, a 4D plenoptic function, is performed. Its spectrum, as a function of the depth function of the scene, is then derived. This result enables us to estimate the spectral support of the light field given some prior estimate of the depth function. Results using a piecewise constant depth model show significant improvement in the rendering of light field images. The design of the reconstruction filter is also discussed.
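    A practical consequence of this kind of spectral analysis (in the spirit of plenoptic sampling theory) is that the scene's depth range bounds the spectral support, and hence the camera spacing needed for alias-free reconstruction. A commonly quoted rule of thumb, sketched below as an assumption rather than the paper's exact derivation, keeps the disparity difference across the depth range within one pixel between adjacent cameras.

```python
def max_camera_spacing(focal_px, z_min, z_max):
    """Largest spacing between adjacent cameras on the line such that the
    disparity difference across the depth range [z_min, z_max] stays
    within one pixel (focal length given in pixels).

    A rule-of-thumb sketch of plenoptic-sampling-style reasoning, not the
    paper's spectral derivation.
    """
    # Disparity spread per unit baseline between nearest and farthest depth.
    disparity_spread = focal_px * (1.0 / z_min - 1.0 / z_max)
    return 1.0 / disparity_spread
```

    The deeper the scene (the larger 1/z_min - 1/z_max), the denser the required camera sampling, which is why a prior depth estimate tightens the spectral support and relaxes the sampling rate.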

    Multiple image view synthesis for free viewpoint video applications

    Interactive audio-visual (AV) applications such as free viewpoint video (FVV) aim to enable unrestricted spatio-temporal navigation within multiple-camera environments. Current virtual-viewpoint view synthesis solutions for FVV are either purely image-based, implying large information redundancy, or involve reconstructing complex 3D models of the scene. In this paper we present a new multiple-image view synthesis algorithm that requires only camera parameters and disparity maps. The multi-view synthesis (MVS) approach can be used in any multi-camera environment and is scalable, as virtual views can be created from 1 to N of the available video inputs, providing a means to gracefully handle scenarios where camera inputs decrease or increase over time. The algorithm identifies and selects only the best-quality surface areas from the available reference images, thereby reducing perceptual errors in virtual view reconstruction. Experimental results are presented and verified using both objective (PSNR) and subjective comparisons.
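    The core warping step of disparity-based view synthesis can be sketched as forward-warping each reference into the virtual viewpoint and resolving conflicts with a disparity z-buffer. This toy version assumes rectified cameras and purely horizontal disparities, and omits the paper's quality-based surface selection and hole filling; `baselines[i]` is a hypothetical scaling such that disparity times baseline gives a pixel shift.

```python
import numpy as np

def synthesize_view(refs, disps, baselines):
    """Forward-warp reference views into a virtual viewpoint, keeping the
    nearest (largest-disparity) surface at each target pixel.

    Sketch only: rectified setup, horizontal disparities, no hole filling.
    """
    h, w = refs[0].shape
    out = np.zeros((h, w))
    best = np.full((h, w), -np.inf)  # z-buffer keyed on disparity
    for img, disp, b in zip(refs, disps, baselines):
        for y in range(h):
            for x in range(w):
                # Shift each source pixel by its scaled disparity.
                tx = x + int(round(disp[y, x] * b))
                if 0 <= tx < w and disp[y, x] > best[y, tx]:
                    best[y, tx] = disp[y, x]
                    out[y, tx] = img[y, x]
    return out
```

    Because any subset of 1 to N references can be passed in, the same routine degrades gracefully as camera inputs appear or disappear, mirroring the scalability property described above.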

    On object-based compression for a class of dynamic image-based representations

    An object-based compression scheme for a class of dynamic image-based representations called "plenoptic videos" (PVs) is studied in this paper. PVs are simplified dynamic light fields in which the videos are taken at regularly spaced locations along a line segment instead of on a 2-D plane. To improve the rendering quality in scenes with large depth variations and to support object-level functionalities for rendering, an object-based compression scheme is employed for the coding of PVs. Besides texture and shape information, the compression of geometry information in the form of depth maps is also supported. The proposed compression scheme exploits both the temporal and spatial redundancy among video object streams in the PV to achieve higher compression efficiency. Experimental results show that considerable improvements in coding performance are obtained for both synthetic and real scenes. Moreover, object-based functionalities such as rendering individual image-based objects are also illustrated. © 2005 IEEE.
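    The dual redundancy mentioned above can be illustrated with a toy mode decision: each block is predicted either temporally (previous frame, same stream) or spatially (same frame, neighbouring stream), whichever gives the smaller residual. This is a sketch of the general idea only; the paper's codec additionally handles shape, depth, and object-level coding.

```python
import numpy as np

def choose_predictor(block, temporal_ref, spatial_ref):
    """Pick per-block between temporal prediction (previous frame, same
    stream) and spatial prediction (same frame, neighbouring stream) by
    sum of absolute differences (SAD).
    """
    sad_temporal = float(np.abs(block - temporal_ref).sum())
    sad_spatial = float(np.abs(block - spatial_ref).sum())
    if sad_temporal <= sad_spatial:
        return "temporal", block - temporal_ref
    return "spatial", block - spatial_ref
```

    In a static region the temporal predictor wins and the residual is near zero, while at a disocclusion the neighbouring stream often provides the better match, which is the source of the cross-stream coding gain.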

    Capturing the plenoptic function in a swipe
