12 research outputs found

    Densely-sampled light field reconstruction

    In this chapter, we motivate the use of densely-sampled light fields as a representation that can provide the density of light rays required for the correct recreation of 3D visual cues such as focus and continuous parallax, and that can serve as an intermediary between light field sensing and light field display. We consider the problem of reconstructing such a representation from a few camera views and approach it in a sparsification framework. More specifically, we demonstrate that the light field is well structured in the set of so-called epipolar images and can be sparsely represented by a dictionary of directional and multi-scale atoms called shearlets. We present the corresponding regularization method, along with its main algorithm and speed-accelerating modifications. Finally, we illustrate its applicability to holographic stereograms and light field compression.
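    The sparsity-regularized reconstruction loop described above can be sketched in a few lines. This is a toy stand-in, not the chapter's method: a plain 2D FFT replaces the shearlet dictionary (an assumption made only to keep the sketch self-contained), but the structure of each iteration — transform, threshold the coefficients, inverse transform, re-impose the known samples — is the same.

```python
import numpy as np

def inpaint_epi(observed, known_mask, n_iter=60):
    """Fill missing EPI pixels by iterative thresholding with a decaying threshold."""
    x = observed * known_mask
    c0 = np.abs(np.fft.fft2(x)).max()           # scale for the initial threshold
    for k in range(n_iter):
        coeffs = np.fft.fft2(x)
        tau = 0.9 * c0 * (1.0 - k / n_iter)     # linearly decaying threshold
        coeffs[np.abs(coeffs) < tau] = 0.0      # keep only strong coefficients
        x = np.real(np.fft.ifft2(coeffs))
        x = np.where(known_mask, observed, x)   # data-consistency step
    return x
```

    On a signal that is genuinely sparse in the chosen transform, the unknown samples converge toward the true values while the known samples are held fixed at every iteration.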

    Large-Scale Light Field Capture and Reconstruction

    This thesis discusses approaches and techniques to convert Sparsely-Sampled Light Fields (SSLFs) into Densely-Sampled Light Fields (DSLFs), which can be used for visualization on 3DTV and Virtual Reality (VR) devices. As an example, a movable 1D large-scale light field acquisition system for capturing SSLFs in real-world environments is evaluated. This system consists of 24 sparsely placed RGB cameras and two Kinect V2 sensors. The real-world SSLF data captured with this setup can be leveraged to reconstruct real-world DSLFs. To this end, three challenging problems need to be solved for this system: (i) how to estimate the rigid transformation from the coordinate system of a Kinect V2 to the coordinate system of an RGB camera; (ii) how to register the two Kinect V2 sensors with a large displacement; (iii) how to reconstruct a DSLF from an SSLF with moderate and large disparity ranges. To overcome these three challenges, we propose: (i) a novel self-calibration method, which takes advantage of the geometric constraints from the scene and the cameras, for estimating the rigid transformations from the camera coordinate frame of one Kinect V2 to the camera coordinate frames of the 12 nearest RGB cameras; (ii) a novel coarse-to-fine approach for recovering the rigid transformation from the coordinate system of one Kinect to the coordinate system of the other by means of local color and geometry information; (iii) several novel algorithms, falling into two groups, for reconstructing a DSLF from an input SSLF: novel view synthesis methods inspired by state-of-the-art video frame interpolation algorithms, and Epipolar-Plane Image (EPI) inpainting methods inspired by Shearlet Transform (ST)-based DSLF reconstruction approaches.
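    The quantity being solved for in challenges (i) and (ii) is a rigid transformation (a rotation R and translation t) between two camera coordinate systems. The thesis's self-calibration exploits scene and camera geometry; as a minimal stand-in, the sketch below shows the classical least-squares (Kabsch/Procrustes) estimate of R and t from 3D point correspondences.

```python
import numpy as np

def estimate_rigid_transform(P, Q):
    """Find rotation R and translation t minimizing ||(R @ P.T).T + t - Q||,
    given corresponding 3D point sets P, Q of shape (N, 3)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Correct an improper solution (reflection) so that det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```

    In practice the correspondences would come from calibration targets or matched features between the Kinect depth data and the RGB images, typically wrapped in a robust estimator such as RANSAC to reject outliers.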

    Self-Supervised Light Field Reconstruction Using Shearlet Transform and Cycle Consistency

    The image-based rendering approach using the Shearlet Transform (ST) is one of the state-of-the-art Densely-Sampled Light Field (DSLF) reconstruction methods. It reconstructs Epipolar-Plane Images (EPIs) in the image domain via an iterative regularization algorithm that restores their coefficients in the shearlet domain. Consequently, the ST method tends to be slow because of the time spent on domain transformations over dozens of iterations. To overcome this limitation, this letter proposes a novel self-supervised DSLF reconstruction method, CycleST, which applies ST and cycle consistency to DSLF reconstruction. Specifically, CycleST is composed of an encoder-decoder network and a residual learning strategy that restore the shearlet coefficients of densely-sampled EPIs using EPI reconstruction and cycle consistency losses. Moreover, CycleST is a self-supervised approach that can be trained solely on Sparsely-Sampled Light Fields (SSLFs) with small disparity ranges (≤ 8 pixels). Experimental results of DSLF reconstruction on SSLFs with large disparity ranges (16-32 pixels) from two challenging real-world light field datasets demonstrate the effectiveness and efficiency of the proposed CycleST method. Furthermore, CycleST achieves at least a ~9x speedup over ST.
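    The two self-supervision signals named above can be illustrated with a toy NumPy example. This is an assumption-laden stand-in, not CycleST's network: `upsample_views` replaces the learned encoder-decoder with linear interpolation between adjacent views (EPI rows), and the two losses are computed exactly as described — known views must reappear in the output, and re-upsampling a subset of the output must reproduce the output itself.

```python
import numpy as np

def upsample_views(epi):
    """(V, W) EPI -> (2V-1, W): original views interleaved with midpoints."""
    mids = 0.5 * (epi[:-1] + epi[1:])
    out = np.empty((2 * epi.shape[0] - 1, epi.shape[1]))
    out[0::2] = epi
    out[1::2] = mids
    return out

def cyclest_style_losses(sparse_epi):
    dense = upsample_views(sparse_epi)
    # EPI reconstruction loss: the known input views must survive upsampling.
    recon = np.mean((dense[0::2] - sparse_epi) ** 2)
    # Cycle-consistency loss: upsampling the even-indexed dense views again
    # should reproduce the dense EPI itself.
    cycle = np.mean((upsample_views(dense[0::2]) - dense) ** 2)
    return recon, cycle
```

    For this ideal toy interpolator both losses are exactly zero; during training of a real network they are nonzero and provide the gradient signal, which is what makes the approach self-supervised.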

    Depth of field guided visualisation on light field displays

    Light field displays are capable of realistic visualization of arbitrary 3D content. However, due to the finite number of light rays reproduced by the display, its bandwidth is limited in terms of angular and spatial resolution. Consequently, 3D content that falls outside of that bandwidth causes aliasing during visualization, so a light field must be properly preprocessed before being visualized. In this thesis, we propose three methods that filter the parts of the input light field that would cause aliasing. The first method is based on a 2D FIR circular filter applied over the 4D light field. The second method utilizes the structured nature of the epipolar plane images representing the light field. The third method adopts real-time multi-layer depth-of-field rendering using tiled splatting. We also establish a connection between the lens parameters in the proposed depth-of-field rendering and the display's bandwidth in order to determine the optimal amount of blurring. Since we prepare light fields for light field displays, a stage that simultaneously renders adjacent views is added to the proposed real-time rendering pipeline. The rendering performance of the proposed methods is demonstrated on Holografika's Holovizio 722RC projection-based light field display.
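    A minimal sketch of the first method's ingredient: a normalized circular (disk-shaped) FIR kernel applied to a light field view. The kernel radius here is a free parameter of the sketch; in the thesis it is tied to the display's bandwidth. Circular (wrap-around) convolution via the FFT is an implementation convenience of this sketch, not a claim about the thesis's boundary handling.

```python
import numpy as np

def disk_kernel(radius):
    """Binary disk kernel of the given radius, normalized to unit DC gain."""
    r = int(np.ceil(radius))
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    k = (x**2 + y**2 <= radius**2).astype(float)
    return k / k.sum()                  # flat regions pass through unchanged

def filter_view(view, kernel):
    """Circular convolution of a single view with the kernel, same output size."""
    kh, kw = kernel.shape
    pad = np.zeros_like(view)
    pad[:kh, :kw] = kernel
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))  # center the kernel
    return np.real(np.fft.ifft2(np.fft.fft2(view) * np.fft.fft2(pad)))
```

    Because the kernel sums to one, in-bandwidth (slowly varying) content is preserved while high-frequency content beyond the display's bandwidth is attenuated.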

    Densely Sampled Light Field Reconstruction

    The emerging light-field and holographic displays aim at providing an immersive visual experience, which in turn requires processing a substantial amount of visual information. In this endeavour, the concept of the plenoptic or light-field function plays a very important role, as it quantifies the light coming from a visual scene through the multitude of rays going in any direction, at any intensity and at any instant in time. Such a comprehensive function is multi-dimensional and highly redundant at the same time, which raises the problem of its accurate sampling and reconstruction. In this thesis, we develop a novel method for light field reconstruction from a limited number of multi-perspective images (views). First, we formalize the light field function in the epipolar image domain in terms of a directional frame representation. We construct a frame (i.e. a dictionary) based on the previously developed shearlet system. The constructed dictionary efficiently represents the structural properties of the continuous light field function. This allows us to formulate the light field reconstruction problem as a variational optimization problem with a sparsity constraint. Second, we develop an iterative optimization procedure by adapting the variational inpainting method originally developed for 2D image reconstruction. The designed algorithm employs iterative thresholding and yields an accurate reconstruction from a relatively sparse set of samples in the angular domain. Finally, we extend the method using various acceleration approaches. More specifically, we improve its robustness by an additional overrelaxation step and make use of the redundancy between different color channels and between epipolar images through colorization and wavelet decomposition techniques. Extensive experiments have demonstrated that these methods constitute the state of the art for light field reconstruction.
    The resulting densely-sampled light fields have high visual quality, which is beneficial in applications such as holographic stereograms, super-multiview displays, and light field compression.
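    The overrelaxation acceleration mentioned above can be illustrated on a scalar fixed-point problem (the real step acts on the iterates of the thresholding algorithm, so this is only an analogy). For a contraction g, the update x <- x + alpha*(g(x) - x) with alpha > 1 can shrink the per-iteration error factor and cut the iteration count.

```python
def iterate(g, x0, alpha, tol=1e-8, max_iter=1000):
    """Run the (over)relaxed fixed-point iteration until the residual is small."""
    x, n = x0, 0
    while abs(g(x) - x) > tol and n < max_iter:
        x = x + alpha * (g(x) - x)
        n += 1
    return x, n

g = lambda x: 0.5 * x + 1.0          # fixed point x* = 2, plain error factor 0.5
x_plain, n_plain = iterate(g, 0.0, alpha=1.0)
x_over,  n_over  = iterate(g, 0.0, alpha=1.6)   # error factor |1 - 0.5*1.6| = 0.2
```

    Here both runs converge to the same fixed point, but the overrelaxed iteration needs markedly fewer steps; choosing alpha too large would instead make the error factor exceed one and destroy convergence.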

    FAST: Flow-Assisted Shearlet Transform for Densely-Sampled Light Field Reconstruction

    The Shearlet Transform (ST) is one of the most effective methods for Densely-Sampled Light Field (DSLF) reconstruction from a Sparsely-Sampled Light Field (SSLF). However, ST requires a precise disparity estimation of the SSLF. To this end, in this paper a state-of-the-art optical flow method, PWC-Net, is employed to estimate bidirectional disparity maps between neighboring views in the SSLF. Moreover, to take full advantage of optical flow and ST for DSLF reconstruction, a novel learning-based method, referred to as Flow-Assisted Shearlet Transform (FAST), is proposed in this paper. Specifically, FAST consists of two deep convolutional neural networks, a disparity refinement network and a view synthesis network, which fully leverage the disparity information to synthesize novel views via warping and blending and to improve the novel view synthesis performance of ST. Experimental results demonstrate the superiority of the proposed FAST method over other state-of-the-art DSLF reconstruction methods on nine challenging real-world SSLF sub-datasets with large disparity ranges (up to 26 pixels).
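    The warp-and-blend step that FAST's view synthesis builds on can be sketched as follows. Integer disparity and purely horizontal motion are simplifying assumptions of this toy, which FAST's networks do not make: a view at fractional position t between views I0 and I1 is formed by shifting each view along its disparity and blending with weights (1 - t) and t.

```python
import numpy as np

def synthesize_middle(I0, I1, disparity, t=0.5):
    """Warp two neighboring views toward position t and blend them."""
    shift0 = int(round(t * disparity))           # forward warp of the left view
    shift1 = int(round((1.0 - t) * disparity))   # backward warp of the right view
    w0 = np.roll(I0, shift0, axis=1)
    w1 = np.roll(I1, -shift1, axis=1)
    return (1.0 - t) * w0 + t * w1
```

    With a constant, correctly estimated disparity the two warped views coincide and the blend is exact; with imperfect disparity they disagree, which is precisely the error that FAST's disparity refinement and view synthesis networks are trained to reduce.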