
    Self-Supervised Light Field Reconstruction Using Shearlet Transform and Cycle Consistency

    The image-based rendering approach using the Shearlet Transform (ST) is one of the state-of-the-art Densely-Sampled Light Field (DSLF) reconstruction methods. It reconstructs Epipolar-Plane Images (EPIs) in the image domain via an iterative regularization algorithm that restores their coefficients in the shearlet domain. Consequently, the ST method tends to be slow because of the time spent on domain transformations over dozens of iterations. To overcome this limitation, this letter proposes a novel self-supervised DSLF reconstruction method, CycleST, which applies ST and cycle consistency to DSLF reconstruction. Specifically, CycleST is composed of an encoder-decoder network with a residual learning strategy that restores the shearlet coefficients of densely-sampled EPIs using EPI reconstruction and cycle consistency losses. Moreover, CycleST is a self-supervised approach that can be trained solely on Sparsely-Sampled Light Fields (SSLFs) with small disparity ranges (≤ 8 pixels). Experimental results of DSLF reconstruction on SSLFs with large disparity ranges (16-32 pixels) from two challenging real-world light field datasets demonstrate the effectiveness and efficiency of the proposed CycleST method. Furthermore, CycleST achieves at least a ~9x speedup over ST.
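    As an illustration of the training objective described above, the following is a minimal sketch (not the authors' code) of a residual encoder-decoder trained with an EPI reconstruction loss plus a cycle-consistency loss. The shearlet transform, the view-subsampling operator, the single-channel coefficient layout and the loss weight are all placeholders.

```python
# Hypothetical sketch of CycleST-style training, not the published implementation.
import torch
import torch.nn as nn

class EPICoeffNet(nn.Module):
    """Encoder-decoder with a global residual (skip) connection."""
    def __init__(self, channels=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, coarse_coeffs):
        # Residual learning: the network only predicts a refinement of the
        # coarse shearlet coefficients.
        return coarse_coeffs + self.decoder(self.encoder(coarse_coeffs))

def training_step(net, sparse_epi, shearlet_fwd, shearlet_inv, subsample_views):
    """One self-supervised step; shearlet_fwd/inv and subsample_views are
    placeholder callables (e.g. wrappers around a shearlet library)."""
    coarse = shearlet_fwd(sparse_epi)            # coefficients of the sparse EPI
    refined = net(coarse)                        # predicted dense-EPI coefficients
    dense_epi = shearlet_inv(refined)            # back to the image domain

    # EPI reconstruction loss: re-sampling the dense EPI must match the input views.
    recon_loss = nn.functional.l1_loss(subsample_views(dense_epi), sparse_epi)

    # Cycle-consistency loss: pushing the reconstruction through the pipeline
    # again should reproduce the same dense EPI.
    cycle_epi = shearlet_inv(net(shearlet_fwd(subsample_views(dense_epi))))
    cycle_loss = nn.functional.l1_loss(cycle_epi, dense_epi)

    return recon_loss + 0.1 * cycle_loss         # loss weight is a placeholder
```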

    Antialiasing Filtering for Projection-Based Light Field Displays

    Projection-based light field displays can achieve realistic visualization of a 3D scene. However, these displays can reproduce only a finite number of light rays, so their bandwidth is limited in terms of angular and spatial resolution. Consequently, a display cannot show parts of the 3D scene that fall outside its bandwidth region without aliasing distortion. Therefore, light fields should be properly pre-processed before being visualized on a light field display. In this paper, we develop two methods for designing antialiasing filters that either remove or blur the parts of the scene in the input light field that cause aliasing. We illustrate the effectiveness of the proposed methods by comparing the visualized light fields on a projection-based light field display before and after applying the designed antialiasing filters.
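    The abstract does not detail the filter design, so the sketch below only illustrates the general idea of blurring scene content whose depth falls outside an assumed display passband; the passband limits, the blur model and all parameter values are chosen purely for illustration.

```python
# Illustrative depth-dependent pre-blur, not the paper's antialiasing filter.
import numpy as np
from scipy.ndimage import gaussian_filter

def antialias_prefilter(image, depth_map, z_min, z_max, max_sigma=5.0):
    """image: (H, W, 3) float array; depth_map: (H, W) scene depth per pixel.
    z_min/z_max delimit an assumed in-band depth range of the display."""
    # How far each pixel's depth lies outside the in-band range.
    out_of_band = np.maximum(depth_map - z_max, 0) + np.maximum(z_min - depth_map, 0)
    out_of_band = out_of_band / max(out_of_band.max(), 1e-6)   # normalize to [0, 1]

    # A single strong blur, blended in proportionally to the out-of-band amount.
    blurred = np.stack(
        [gaussian_filter(image[..., c], sigma=max_sigma) for c in range(3)], axis=-1
    )
    alpha = out_of_band[..., None]
    return (1 - alpha) * image + alpha * blurred
```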

    Optimized 3D Scene Rendering on Projection-Based 3D Displays

    We address the problem of 3D scene rendering on projection-based light field displays and of optimizing the input display images to obtain the best possible visual output. We discuss a display model comprising a set of projectors, an anisotropic diffuser and a viewing manifold. Based on this model, we render an initial set of projector images, which is then optimized for the best perception at a specified set of viewing positions. We propose a least squares method that minimizes the channel-wise color difference between the images generated for different viewer positions and their ground-truth counterparts. We formulate a constrained optimization problem and solve it iteratively by a descent method.
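    A hedged sketch of the general optimization idea, not the paper's exact formulation: projected gradient descent on a least-squares objective that matches an assumed linear display forward model A (mapping projector pixels to the images seen at the chosen viewing positions) against the ground-truth views b, with pixel values constrained to [0, 1]. All names and step sizes are placeholders.

```python
# Hypothetical per-channel least-squares refinement of projector images.
import numpy as np

def optimize_projector_images(A, b, x0, step=1e-2, n_iters=500):
    """A: (n_view_pixels, n_proj_pixels) forward model; b: stacked target views;
    x0: initial projector-image rendering, flattened to a vector."""
    x = x0.copy()
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)        # gradient of 0.5 * ||A x - b||^2
        x -= step * grad                # descent step
        np.clip(x, 0.0, 1.0, out=x)     # project onto the valid pixel-value range
    return x
```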

    Densely-sampled light field reconstruction

    In this chapter, we motivate the use of densely-sampled light fields as the representation that can provide the required density of light rays for the correct recreation of 3D visual cues, such as focus and continuous parallax, and can serve as an intermediary between light field sensing and light field display. We consider the problem of reconstructing such a representation from a few camera views and approach it in a sparsification framework. More specifically, we demonstrate that the light field is well structured in the set of so-called epipolar images and can be sparsely represented by a dictionary of directional and multi-scale atoms called shearlets. We present the corresponding regularization method, along with its main algorithm and speed-accelerating modifications. Finally, we illustrate its applicability for the cases of holographic stereograms and light field compression.
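    The following sketch illustrates one assumed form of such an iterative regularization scheme: alternating image-domain data consistency with shearlet-domain thresholding under a decreasing threshold. The shearlet transform calls and the threshold schedule are placeholders, not the chapter's exact algorithm.

```python
# Assumed iterative-thresholding reconstruction of an EPI in the shearlet domain.
import numpy as np

def reconstruct_epi(sparse_epi, mask, shearlet_fwd, shearlet_inv,
                    n_iters=50, lam_start=0.5, lam_end=0.01):
    """sparse_epi: EPI with missing view rows set to 0; mask: 1 where rows are known.
    shearlet_fwd/inv are placeholder wrappers around a shearlet transform library."""
    epi = sparse_epi.copy()
    for k in range(n_iters):
        # Data consistency: keep the rows coming from the measured camera views.
        epi = mask * sparse_epi + (1 - mask) * epi
        # Sparsify in the shearlet domain with a linearly decreasing threshold.
        lam = lam_start + (lam_end - lam_start) * k / max(n_iters - 1, 1)
        coeffs = shearlet_fwd(epi)
        coeffs = np.where(np.abs(coeffs) >= lam, coeffs, 0.0)   # hard thresholding
        epi = shearlet_inv(coeffs)
    return epi
```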

    A Framework for Assessing Rendering Techniques for Near-Eye Integral Imaging Displays

    We address the problem of 3D scene rendering on near-eye integral imaging displays and the evaluation of different rendering methods in terms of human perception. We compare three rendering techniques in terms of perceived spatial resolution at different focused depths, simulating the display in a virtual environment and representing the eye through a thin-lens camera model.
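    As a small illustration of the thin-lens eye model mentioned above, the helper below computes the standard thin-lens circle of confusion; the example parameter values (pupil diameter, eye focal length, distances) are placeholders and are not taken from the paper.

```python
# Standard thin-lens circle-of-confusion helper (illustrative parameters only).
def circle_of_confusion(aperture, focal_len, focus_dist, obj_dist):
    """Blur-circle diameter on the image plane, in the same units as the inputs.

    aperture   -- lens aperture diameter (e.g. pupil diameter of the eye)
    focal_len  -- focal length of the lens
    focus_dist -- distance to the plane in focus
    obj_dist   -- distance to the object being imaged
    """
    return aperture * focal_len * abs(obj_dist - focus_dist) / (
        obj_dist * (focus_dist - focal_len)
    )

# Example: 4 mm pupil, 17 mm eye focal length, focused at 0.5 m, object at 1 m.
blur = circle_of_confusion(0.004, 0.017, 0.5, 1.0)   # ~7e-5 m blur diameter
```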

    On the passband of head-parallax displays
