
    A spectral analysis for light field rendering

    Image-based rendering using the plenoptic function is an efficient technique for re-rendering a scene from different viewpoints. In this paper, we study the sampling and reconstruction of the plenoptic function as a multidimensional sampling problem. The spectral support of the plenoptic function turns out to be the key quantity for efficient sampling and reconstruction. We perform a spectral analysis for the light field, a 4D plenoptic function, and derive its spectrum as a function of the scene's depth function. This result lets us estimate the spectral support of the light field given a prior estimate of the depth function. Results using a piecewise-constant depth model show significant improvement in the rendering of light field images. The design of the reconstruction filter is also discussed.
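The sampling bound this kind of spectral analysis leads to can be illustrated with a toy calculation. Under the standard two-plane parameterization, the spectrum of a scene with depths in [z_min, z_max] is confined between two shear lines, so the widest alias-free camera spacing is the one that keeps the disparity spread across that depth range under one pixel. A minimal sketch of that idea (the function name and the one-pixel criterion are illustrative assumptions, not the paper's exact filter design):

```python
def max_camera_spacing(focal_len, z_min, z_max, pixel_size):
    """Largest camera spacing on the camera plane such that the
    disparity spread over the depth range [z_min, z_max] stays below
    one pixel, i.e. the light field spectrum fits inside one
    replication interval and can be reconstructed without aliasing."""
    # Disparity of a point at depth z between two cameras a distance
    # dt apart (pinhole geometry): d(z) = focal_len * dt / z.
    # The spectral support is sheared between the lines for z_min and
    # z_max, so the spread is focal_len * dt * (1/z_min - 1/z_max).
    spread_per_unit_spacing = focal_len * (1.0 / z_min - 1.0 / z_max)
    return pixel_size / spread_per_unit_spacing
```

For example, with a unit focal length, depths between 2 and 4, and 0.01-unit pixels, cameras may be spaced up to 0.04 units apart.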


    Depth of field guided visualisation on light field displays

    Light field displays are capable of realistic visualization of arbitrary 3D content. However, because the display reproduces a finite number of light rays, its bandwidth is limited in both angular and spatial resolution. Consequently, 3D content that falls outside that bandwidth causes aliasing during visualization, so a light field must be properly preprocessed before display. In this thesis, we propose three methods that filter the parts of the input light field that would cause aliasing. The first method is based on a 2D FIR circular filter applied over the 4D light field. The second method exploits the structured nature of the epipolar plane images representing the light field. The third method adopts real-time multi-layer depth-of-field rendering using tiled splatting. We also establish a connection between the lens parameters in the proposed depth-of-field rendering and the display's bandwidth in order to determine the optimal amount of blur. Since we prepare light fields for light field displays, a stage that simultaneously renders adjacent views is added to the proposed real-time rendering pipeline. The rendering performance of the proposed methods is demonstrated on Holografika's Holovizio 722RC projection-based light field display.
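The connection between lens parameters and blur amount mentioned above is conventionally expressed through the thin-lens circle of confusion. A minimal sketch, assuming the standard thin-lens model (the function name and units are mine, not taken from the thesis):

```python
def circle_of_confusion(z, focus_z, focal_len, aperture):
    """Thin-lens circle-of-confusion diameter for a point at depth z
    when a lens of the given focal length and aperture diameter is
    focused at depth focus_z (all distances in the same unit)."""
    # A point off the focus plane images to a disc; its diameter grows
    # with the aperture and with the relative defocus |z - focus_z|.
    return aperture * focal_len * abs(z - focus_z) / (z * (focus_z - focal_len))
```

Matching this diameter against the display's angular resolution at each depth yields the optimal per-depth blur.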

    Light Field Super-Resolution Via Graph-Based Regularization

    Light field cameras capture the 3D information in a scene with a single exposure. This special feature makes light field cameras very appealing for a variety of applications, from post-capture refocusing to depth estimation and image-based rendering. However, light field cameras suffer by design from strong limitations in their spatial resolution, which should therefore be augmented by computational methods. On the one hand, off-the-shelf single-frame and multi-frame super-resolution algorithms are not ideal for light field data, as they do not consider its particular structure. On the other hand, the few super-resolution algorithms explicitly tailored to light field data exhibit significant limitations, such as the need to estimate an explicit disparity map at each view. In this work we propose a new light field super-resolution algorithm that addresses these limitations. We adopt a multi-frame-like super-resolution approach, where the complementary information in the different light field views is used to augment the spatial resolution of the whole light field. We show that coupling the multi-frame approach with a graph regularizer, which enforces the light field structure via nonlocal self-similarities, makes it possible to avoid the costly and challenging disparity estimation step for all the views. Extensive experiments show that the new algorithm compares favorably to other state-of-the-art methods for light field super-resolution, both in terms of PSNR and visual quality.
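The graph regularizer described above can be illustrated on a toy signal: penalizing squared differences along graph edges is exactly the quadratic form x^T L x with the graph Laplacian L. A minimal sketch using plain gradient descent (uniform edge weights, step size, and iteration count are illustrative choices, not the paper's solver):

```python
def graph_smooth(y, edges, lam=1.0, iters=500, step=0.1):
    """Minimize ||x - y||^2 + lam * sum over edges (i, j) of
    (x[i] - x[j])^2; the second term is the graph-Laplacian
    regularizer x^T L x that pulls connected samples together."""
    x = list(y)
    for _ in range(iters):
        # Gradient of the data-fidelity term ...
        grad = [2.0 * (x[i] - y[i]) for i in range(len(x))]
        # ... plus the gradient of the Laplacian term, edge by edge.
        for i, j in edges:
            g = 2.0 * lam * (x[i] - x[j])
            grad[i] += g
            grad[j] -= g
        x = [xi - step * gi for xi, gi in zip(x, grad)]
    return x
```

In the paper's setting the graph connects nonlocally similar patches across views, so the same kind of penalty propagates detail between views without an explicit disparity map.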

    Deep Eyes: Binocular Depth-from-Focus on Focal Stack Pairs

    The human visual system relies on both binocular stereo cues and monocular focusness cues to gain effective 3D perception. In computer vision, the two problems are traditionally solved on separate tracks. In this paper, we present a unified learning-based technique that simultaneously uses both types of cues for depth inference. Specifically, we use a pair of focal stacks as input to emulate human perception. We first construct a comprehensive focal stack training dataset synthesized by depth-guided light field rendering. We then construct three individual networks: a Focus-Net to extract depth from a single focal stack, an EDoF-Net to obtain the extended depth-of-field (EDoF) image from the focal stack, and a Stereo-Net to conduct stereo matching. We show how to integrate them into a unified BDfF-Net that produces high-quality depth maps. Comprehensive experiments show that our approach outperforms the state of the art in both accuracy and speed and effectively emulates the human vision system.
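The monocular focusness cue that Focus-Net learns has a classical hand-crafted counterpart: pick, per pixel, the focal slice with the highest local contrast. A minimal sketch of that baseline (the Laplacian focus measure is a common choice; this is not the network the paper trains):

```python
def depth_from_focus(stack):
    """Per-pixel depth index over a focal stack (list of 2D grids):
    the chosen slice is the one where the pixel is sharpest, as
    measured by the magnitude of a discrete Laplacian."""
    h, w = len(stack[0]), len(stack[0][0])
    depth = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            best, best_k = -1.0, 0
            for k, img in enumerate(stack):
                # 4-neighbour discrete Laplacian as a focus measure.
                lap = abs(4 * img[y][x] - img[y - 1][x] - img[y + 1][x]
                          - img[y][x - 1] - img[y][x + 1])
                if lap > best:
                    best, best_k = lap, k
            depth[y][x] = best_k
    return depth
```

The learned approach replaces this brittle per-pixel decision with features trained on the synthesized focal stack dataset.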

    5D Covariance Tracing for Efficient Defocus and Motion Blur

    The rendering of effects such as motion blur and depth of field requires costly 5D integrals. We dramatically accelerate their computation through adaptive sampling and reconstruction based on the prediction of the anisotropy and bandwidth of the integrand. For this, we develop a new frequency analysis of the 5D temporal light field and show that first-order motion can be handled through simple changes of coordinates in 5D. We further introduce a compact representation of the spectrum using the covariance matrix and Gaussian approximations. We derive update equations for the 5 × 5 covariance matrices for each atomic light transport event, such as transport, occlusion, BRDF, texture, lens, and motion. The focus on atomic operations makes our work general and removes the need for special-case formulas. We present a new rendering algorithm that computes 5D covariance matrices on the image plane by tracing paths through the scene, focusing on the single-bounce case. This allows us to reduce sampling rates when appropriate and to reconstruct images with complex depth-of-field and motion blur effects.
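The covariance update for the simplest atomic event, free-space transport, can be written down directly: transport shears the light field, so the covariance transforms congruently. A flatland sketch (one spatial and one angular dimension, so 2 × 2 instead of 5 × 5, and under my own sign convention rather than the paper's exact matrices):

```python
def matmul(A, B):
    """Plain dense matrix product for small lists-of-lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transport_covariance(C, d):
    """Covariance update for free-space transport over distance d.
    Transport is the shear x' = x + d * theta of the light field,
    so the covariance maps congruently: C' = T C T^T."""
    T = [[1.0, d], [0.0, 1.0]]
    T_t = [[1.0, 0.0], [d, 1.0]]
    return matmul(matmul(T, C), T_t)
```

Occlusion, BRDF, texture, lens, and motion each contribute an analogous atomic update in the full 5 × 5 case, which is what makes the per-event formulation general.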

    An interactive 3D medical visualization system based on a light field display

    This paper presents a prototype medical data visualization system that exploits a light field display and custom direct volume rendering techniques to enhance the understanding of massive volumetric data, such as CT, MRI, and PET scans. The system can be integrated with standard medical image archives and extends the capabilities of current radiology workstations by supporting real-time rendering of volumes of potentially unlimited size on light field displays that generate dynamic, observer-independent light fields. The system allows multiple untracked naked-eye users in a sufficiently large interaction area to coherently perceive rendered volumes as real objects, with stereo and motion parallax cues. In this way, an effective collaborative analysis of volumetric data can be achieved. Evaluation tests demonstrate the usefulness of the generated depth cues and the improved performance in understanding complex spatial structures with respect to standard techniques.