    Depth Estimation Through a Generative Model of Light Field Synthesis

    Light field photography captures rich structural information that may facilitate a number of traditional image processing and computer vision tasks. A crucial ingredient in such endeavors is accurate depth recovery. We present a novel framework that allows the recovery of a high-quality continuous depth map from light field data. To this end, we propose a generative model of a light field that is fully parametrized by its corresponding depth map. The model allows for the integration of powerful regularization techniques, such as a non-local means prior, facilitating accurate depth map estimation. Comment: German Conference on Pattern Recognition (GCPR) 201
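
    A minimal sketch in Python of the central idea, that a light field view can be generated from the center view and its disparity map alone, might look as follows. This is not the paper's implementation; the function name, the pixels-per-view disparity convention, and the bilinear warp are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def synthesize_view(center_view, disparity, du, dv):
    """Backward-warp the center view to the sub-aperture image at
    angular offset (du, dv); disparity is in pixels per view step."""
    h, w = center_view.shape
    rows, cols = np.mgrid[0:h, 0:w].astype(np.float64)
    # A point with disparity d shifts by d * (dv, du) between the
    # center view and the sub-aperture image at offset (dv, du).
    src_rows = rows + dv * disparity
    src_cols = cols + du * disparity
    return map_coordinates(center_view, [src_rows, src_cols],
                           order=1, mode='nearest')

# Depth estimation then amounts to finding the disparity map whose
# synthesized views best match the observed ones, plus a prior such
# as the non-local means regularizer mentioned in the abstract.
```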

    The Application of Preconditioned Alternating Direction Method of Multipliers in Depth from Focal Stack

    The post-capture refocusing effect in smartphone cameras is achievable using focal stacks. However, the accuracy of this effect depends entirely on how the depth layers in the stack are combined. The accuracy of the extended depth-of-field effect in this application can be improved significantly by computing an accurate depth map, which has been an open problem for decades. To tackle this issue, this paper proposes a framework based on the Preconditioned Alternating Direction Method of Multipliers (PADMM) for depth from focal stack and synthetic defocus applications. In addition to providing high structural accuracy and occlusion handling, the proposed optimization converges faster and to better solutions than state-of-the-art methods. The evaluation was performed on 21 focal stacks, and the optimization was compared against five other methods. Preliminary results indicate that the proposed method outperforms the current state of the art in terms of structural accuracy and optimization. Comment: 15 pages, 8 figures
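
    For context, a plain depth-from-focus baseline computes a per-pixel focus measure over the stack and picks the sharpest slice. The sketch below shows only this data term; the paper's contribution, the regularized objective solved with preconditioned ADMM, is not reproduced here, and the array layout and window size are assumptions.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def depth_from_focus(focal_stack, window=9):
    """Winner-take-all depth from focus; focal_stack is an assumed
    (n_slices, H, W) float array ordered by focus distance."""
    # Focus measure: locally averaged squared Laplacian response.
    focus = np.stack([uniform_filter(laplace(img) ** 2, size=window)
                      for img in focal_stack])
    # Index of the sharpest slice per pixel gives a coarse depth label.
    return np.argmax(focus, axis=0)
```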

    OccCasNet: Occlusion-aware Cascade Cost Volume for Light Field Depth Estimation

    Light field (LF) depth estimation is a crucial task with numerous practical applications. However, mainstream methods based on multi-view stereo (MVS) are resource-intensive and time-consuming because they need to construct a finer cost volume. To address this issue and achieve a better trade-off between accuracy and efficiency, we propose an occlusion-aware cascade cost volume for LF depth (disparity) estimation. Our cascaded strategy reduces the number of samples while keeping the sampling interval constant during the construction of a finer cost volume. We also introduce occlusion maps to enhance accuracy when constructing the occlusion-aware cost volume. Specifically, we first obtain a coarse disparity map through a coarse disparity estimation network. Then, the sub-aperture images (SAIs) of the side views are warped to the center view based on this initial disparity map. Next, we apply photo-consistency constraints between the warped SAIs and the center SAI to generate an occlusion map for each SAI. Finally, we use the coarse disparity map and the occlusion maps to construct an occlusion-aware refined cost volume, enabling the refined disparity estimation network to yield a more precise disparity map. Extensive experiments demonstrate the effectiveness of our method. Compared with state-of-the-art methods, our method achieves a superior balance between accuracy and efficiency, and ranks first in terms of the MSE and Q25 metrics among published methods on the HCI 4D benchmark. The code and model of the proposed method are available at https://github.com/chaowentao/OccCasNet
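
    The photo-consistency occlusion test described in the abstract can be sketched as follows: warp a side SAI to the center view with the coarse disparity and flag pixels with a large photometric error. This is a hedged reading of the abstract, not the released code; the angular-offset convention and the threshold `tau` are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def occlusion_map(center_sai, side_sai, disparity, du, dv, tau=0.05):
    """Flag pixels of a side view that violate photo-consistency after
    warping to the center view; (du, dv) is the angular offset."""
    h, w = center_sai.shape
    rows, cols = np.mgrid[0:h, 0:w].astype(np.float64)
    # Backward-warp the side SAI into the center view using the
    # coarse center-view disparity estimate.
    warped = map_coordinates(side_sai,
                             [rows + dv * disparity, cols + du * disparity],
                             order=1, mode='nearest')
    # Pixels where warping fails to reproduce the center view are
    # flagged as (likely) occluded in this side view.
    return np.abs(warped - center_sai) > tau
```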

    Light field image processing: an overview

    Light field imaging has emerged as a technology that allows us to capture richer visual information from our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene by integrating over the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information lost in conventional photography. On the one hand, this higher-dimensional representation of visual data offers powerful capabilities for scene understanding and substantially improves the performance of traditional computer vision problems such as depth sensing, post-capture refocusing, segmentation, video stabilization, and material classification. On the other hand, the high dimensionality of light fields also brings up new challenges in terms of data capture, data compression, content editing, and display. Taking these two elements together, research in light field image processing has become increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We focus on all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.
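
    As a concrete example of one capability named above, post-capture refocusing can be sketched with the classic shift-and-add algorithm over a 4D light field. The array layout L[v, u, y, x] and the parameter `alpha` selecting the synthetic focal plane are assumptions for illustration, not code from the survey.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(lf, alpha):
    """Shift-and-add refocusing of a 4D light field lf[v, u, y, x]:
    each sub-aperture image is shifted in proportion to its angular
    offset from the center, then all views are averaged."""
    n_v, n_u, h, w = lf.shape
    cv, cu = (n_v - 1) / 2, (n_u - 1) / 2
    out = np.zeros((h, w))
    for v in range(n_v):
        for u in range(n_u):
            out += nd_shift(lf[v, u],
                            (alpha * (v - cv), alpha * (u - cu)),
                            order=1, mode='nearest')
    return out / (n_v * n_u)
```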

    Robust 3D Surface Reconstruction from Light Fields

    Light field data captures the intensity as well as the direction of rays in 3D space, making it possible to retrieve not only the 3D geometry of the acquired scene but also its reflectance properties. The main focus of this thesis is precise 3D geometry reconstruction from light fields, especially for scenes with specular objects. A new semi-global approach for 3D reconstruction from linear light fields is proposed. This method combines a modified version of the Progressive Probabilistic Hough Transform with local slope estimates to extract orientations, and consequently depth information, in epipolar plane images (EPIs). The resulting reconstructions achieve a higher accuracy than local methods, with more precise localization of object boundaries as well as preservation of fine details. In the second part of the thesis, the proposed approach is extended to circular light fields in order to determine the full 360° view of target objects. Additionally, circular light fields allow retrieving depth even from datasets acquired with telecentric lenses, a task which is not possible using a linearly moving camera. Experimental results on synthetic and real datasets demonstrate the quality and robustness of the proposed algorithm, which provides precise reconstructions even with highly specular objects. The quality of the final reconstruction opens up many possible application scenarios, such as precise 3D reconstruction for defect detection in industrial optical inspection, object scanning for heritage preservation, and depth segmentation for the movie industry.
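
    To make the EPI idea concrete: in an EPI, a scene point traces a line whose slope is proportional to its disparity, and hence its depth. The sketch below extracts line slopes with OpenCV's standard probabilistic Hough transform; the thesis uses a modified variant combined with local slope estimates, which is not reproduced here, and all parameter values are assumptions.

```python
import cv2
import numpy as np

def epi_line_slopes(epi):
    """Estimate line slopes in an EPI; epi is an assumed 8-bit
    grayscale array of shape (n_views, width)."""
    edges = cv2.Canny(epi, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 360,
                            threshold=10,
                            minLineLength=epi.shape[0] // 2,
                            maxLineGap=2)
    slopes = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            if y2 != y1:  # skip degenerate (single-view) segments
                # Pixels moved per view step, proportional to disparity.
                slopes.append((x2 - x1) / (y2 - y1))
    return slopes
```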

    Light-Field Imaging and Heterogeneous Light Fields

    In traditional light-field analysis, images have matched spectral content, which leads to constant intensity along epipolar plane image (EPI) manifolds. This kind of light field is termed a homogeneous light field. Heterogeneous light fields differ in that the contributing images may have varying properties, such as the exposure selected or the color filter applied. To process heterogeneous light fields, it is necessary to develop a computational method able to estimate orientations in heterogeneous EPIs as well. One alternative method to estimate orientation is the singular value decomposition. This analysis has resulted in new concepts for improving the structure tensor approach and has yielded increased accuracy and greater applicability through the exploitation of heterogeneous light fields. While the current structure tensor estimates orientation only under constant pixel intensity along the direction of orientation, the newly designed structure tensor is able to estimate orientations under changing intensity. Additionally, this improved structure tensor makes it possible to process acquired light fields with higher reliability due to its robustness against illumination changes. To use this improved structure tensor approach, it is important to design the light-field camera setup so that the target scene covers the ±45° orientation range perfectly. This requirement leads directly to a relationship between the camera setup for light-field capture and the frustum-shaped volume of interest. We show that higher-precision depth maps are achievable, which has a positive impact on the reliability of subsequent processing methods, especially for sRGB color reconstruction in color-filtered light fields. Besides this, a global shifting process is designed to overcome the basic ±45° range limitation, allowing larger distances to be estimated and additionally increasing the achievable precision in light-field processing. This makes it possible to study spherical light fields, since their orientation range typically exceeds the ±45° limit. Research on spherically acquired light fields has been conducted in collaboration with the German Research Center for Artificial Intelligence (DFKI) in Kaiserslautern.
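
    For reference, the constant-intensity baseline that the thesis extends can be sketched as the classic structure tensor orientation estimate on an EPI; the modified tensor for changing intensity is not reproduced here, and the smoothing scale is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def epi_orientation(epi, sigma=2.0):
    """Classic structure tensor orientation on an EPI (assumes
    constant intensity along orientations, i.e. a homogeneous LF)."""
    ix = sobel(epi, axis=1)  # spatial gradient
    iy = sobel(epi, axis=0)  # angular (view) gradient
    # Smoothed tensor components.
    jxx = gaussian_filter(ix * ix, sigma)
    jyy = gaussian_filter(iy * iy, sigma)
    jxy = gaussian_filter(ix * iy, sigma)
    # Orientation of the dominant eigenvector; the EPI slope then
    # maps to disparity within the ±45° range discussed above.
    return 0.5 * np.arctan2(2 * jxy, jxx - jyy)
```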