    Spatial and Angular Resolution Enhancement of Light Fields Using Convolutional Neural Networks

    Light field imaging extends traditional photography by capturing both the spatial and angular distribution of light, which enables new capabilities, including post-capture refocusing, post-capture aperture control, and depth estimation from a single shot. Micro-lens array (MLA) based light field cameras offer a cost-effective approach to capturing light fields. A major drawback of MLA based light field cameras is low spatial resolution, since a single image sensor is shared to capture both spatial and angular information. In this paper, we present a learning-based light field enhancement approach in which both the spatial and angular resolution of the captured light field are enhanced using convolutional neural networks. The proposed method is tested with real light field data captured with a Lytro light field camera, clearly demonstrating spatial and angular resolution improvement.
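    As a minimal sketch of the idea, the toy PyTorch model below applies an SRCNN-style three-layer refinement to each bicubically upsampled sub-aperture view. The `LFSpatialSR` name and the layer sizes are illustrative assumptions, not the architecture from the paper.

```python
# Illustrative SRCNN-style refinement of light field views (PyTorch).
# Assumes each view was first upsampled with bicubic interpolation;
# layer sizes are placeholders, not the paper's actual network.
import torch
import torch.nn as nn

class LFSpatialSR(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),  # feature extraction
            nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=1),            # non-linear mapping
            nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, views):             # views: (V, 1, H, W) sub-apertures
        return self.net(views)

views = torch.rand(9 * 9, 1, 128, 128)    # a 9x9 angular grid of toy views
enhanced = LFSpatialSR()(views)           # refined spatial detail per view
```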

    Variational Disparity Estimation Framework for Plenoptic Image

    This paper presents a computational framework for accurately estimating the disparity map of plenoptic images. The proposed framework is based on the variational principle and provides intrinsic sub-pixel precision. The light-field motion tensor introduced in the framework allows us to combine advanced robust data terms and provides explicit treatment of the different color channels. A warping strategy is embedded in the framework to tackle the large-displacement problem. We also show that applying a simple regularization term and guided median filtering can greatly enhance the accuracy of the displacement field in occluded areas. We demonstrate the excellent performance of the proposed framework through intensive comparisons with the Lytro software and contemporary approaches on both synthetic and real-world datasets.
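    As a rough sketch of the variational machinery, the toy code below descends a simple energy with an L2 brightness-constancy data term between two horizontally displaced views, a smoothness term, and a plain median filter standing in for guided median filtering. The robust data terms and the motion tensor from the paper are not reproduced; everything here is a simplified assumption.

```python
# Toy variational disparity estimation between two horizontally shifted
# views I0 and I1 (2-D float arrays). Simplified L2 terms only.
import numpy as np
from scipy.ndimage import map_coordinates, median_filter

def estimate_disparity(I0, I1, iters=200, lam=0.1, step=0.5):
    """Descend E(d) = 1/2 |I1(x+d, y) - I0|^2 + lam/2 |grad d|^2."""
    H, W = I0.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    I1x = np.gradient(I1, axis=1)               # horizontal image gradient
    d = np.zeros((H, W))
    for _ in range(iters):
        warped = map_coordinates(I1, [ys, xs + d], order=1, mode='nearest')
        grad = map_coordinates(I1x, [ys, xs + d], order=1, mode='nearest')
        residual = warped - I0                   # data term residual
        lap = (np.roll(d, 1, 0) + np.roll(d, -1, 0) +       # 5-point
               np.roll(d, 1, 1) + np.roll(d, -1, 1) - 4*d)  # Laplacian
        d -= step * (residual * grad - lam * lap)  # gradient step on E
    return median_filter(d, size=3)   # plain median, not guided filtering
```

    A full implementation would embed this update in a coarse-to-fine pyramid, which is what makes the warping strategy effective for large displacements.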

    A Geometric Observer for Scene Reconstruction Using Plenoptic Cameras

    This paper proposes an observer for generating depth maps of a scene from a sequence of measurements acquired by a two-plane light-field (plenoptic) camera. The observer is based on a gradient-descent methodology. The use of motion allows for estimation of depth maps where the scene contains insufficient texture for static estimation methods to work. A rigorous analysis of the stability of the observer error is provided, and the observer is tested in simulation, demonstrating convergence behaviour. (Comment: Full version of paper submitted to CDC 2018. 11 pages, 12 figures.)
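    A schematic sketch of one gradient-descent observer update is shown below; the `predict` error model and the finite-difference Jacobian are hypothetical stand-ins, whereas the paper derives the update analytically from the two-plane light field geometry.

```python
# Schematic gradient-descent observer step for a scalar depth state.
# `predict(rho, frame)` is a hypothetical photometric error model.
def observer_step(rho_hat, frame, predict, gain=0.5, eps=1e-3):
    """Update the inverse-depth estimate rho_hat from one measurement."""
    e = predict(rho_hat, frame)                          # current error
    de = (predict(rho_hat + eps, frame) -
          predict(rho_hat - eps, frame)) / (2.0 * eps)   # d e / d rho
    return rho_hat - gain * de * e       # descend 0.5 * e(rho)^2

# Toy usage: true inverse depth 0.4, linear error model.
predict = lambda rho, frame: rho - frame   # frame carries the "true" value
rho = 1.0
for _ in range(50):
    rho = observer_step(rho, 0.4, predict)
print(rho)                                  # converges toward 0.4
```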

    Deep Depth From Focus

    Depth from focus (DFF) is one of the classical ill-posed inverse problems in computer vision. Most approaches recover the depth at each pixel from the focal setting that exhibits maximal sharpness. Yet it is not obvious how to reliably estimate the sharpness level, particularly in low-textured areas. In this paper, we propose 'Deep Depth From Focus (DDFF)' as the first end-to-end learning approach to this problem. One of the main challenges we face is the data hunger of deep neural networks. To obtain a significant amount of focal stacks with corresponding ground-truth depth, we propose to leverage a light-field camera with a co-calibrated RGB-D sensor, which allows us to digitally create focal stacks of varying sizes. Compared to existing benchmarks, our dataset is 25 times larger, enabling the use of machine learning for this inverse problem. We compare our results with state-of-the-art DFF methods and also analyze the effect of several key deep architectural components. These experiments show that our proposed method, 'DDFFNet', achieves state-of-the-art performance in all scenes, reducing depth error by more than 75% compared to classical DFF methods. (Comment: Accepted to the Asian Conference on Computer Vision (ACCV) 2018.)
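    As a hedged illustration of the input/output contract only, the toy network below treats the focal stack as input channels and regresses a per-pixel depth map. `TinyDFFNet` and its layer sizes are made up for this sketch; the published DDFFNet is a much deeper encoder-decoder architecture.

```python
# Toy focal-stack-to-depth network (PyTorch). The focal stack enters as
# the channel dimension; layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class TinyDFFNet(nn.Module):
    def __init__(self, stack_size=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(stack_size, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),   # per-pixel depth estimate
        )

    def forward(self, focal_stack):           # (B, stack_size, H, W)
        return self.net(focal_stack)

depth = TinyDFFNet()(torch.rand(1, 10, 64, 64))   # toy forward pass
```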

    Geometric Inference with Microlens Arrays

    This dissertation explores an alternative to traditional fiducial markers, in which geometric information is inferred from the relative orientation of markers in an image rather than from the observed positions of 3D points. We present markers fabricated from microlenses whose appearance changes depending on the marker's orientation relative to the camera. First, we show how to manufacture and calibrate chromo-coding lenticular arrays to create a known relationship between the observed hue and the orientation of the array. Second, we use two small chromo-coding lenticular arrays to estimate the pose of an object. Third, we use three large chromo-coding lenticular arrays to calibrate a camera from a single image. Finally, we create another type of fiducial marker from lenslet arrays that encode orientation with discrete black-and-white appearances. Collectively, these approaches offer new opportunities for pose estimation and camera calibration that are relevant to robotics, virtual reality, and augmented reality.
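    A minimal sketch of the hue-to-orientation lookup, assuming calibration produced sampled (viewing angle, observed hue) pairs for a chromo-coding lenticular array; the function names, the polynomial model, and the synthetic sweep are all hypothetical.

```python
# Illustrative hue -> viewing-angle calibration for a lenticular array.
# A real calibration would also handle hue wrap-around at 360 degrees.
import numpy as np

def calibrate(angles_deg, hues_deg):
    """Fit viewing angle as a function of observed hue (both in degrees)."""
    return np.polynomial.Polynomial.fit(hues_deg, angles_deg, deg=3)

def orientation_from_hue(model, observed_hue_deg):
    """Invert the calibrated mapping: observed hue -> viewing angle."""
    return model(observed_hue_deg)

# Toy usage with a synthetic linear hue sweep over +/-20 degrees of tilt.
angles = np.linspace(-20, 20, 41)
hues = 180 + 4.0 * angles                  # pretend 4 deg hue per deg tilt
model = calibrate(angles, hues)
print(orientation_from_hue(model, 200.0))  # ~= 5 degrees of tilt
```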

    Depth and All-in-Focus Image Estimation in Synthetic Aperture Integral Imaging Under Partial Occlusions

    A common assumption in integral imaging reconstruction is that a pixel will be photo-consistent if all viewpoints observed by the different cameras converge at a single point when focusing at the proper depth. However, occlusions between objects in the scene prevent this assumption from being fulfilled. In this paper, a novel depth and all-in-focus image estimation method is presented, based on a photo-consistency measure that applies the median criterion to the elemental images. The interest of this approach is that it detects which cameras correctly see a partially occluded object at a certain depth, which allows for a precise estimate of the object's depth. In addition, a robust solution is proposed to detect the boundary limits between partially occluded objects, which are subsequently used during the regularized depth estimation process. The experimental results show that the proposed method outperforms other state-of-the-art depth estimation methods in a synthetic aperture integral imaging framework.
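    A condensed sketch of the median-based photo-consistency idea, assuming the reprojected pixel values each camera contributes are precomputed as an array of shape (depths, cameras, H, W); the names and the L1 deviation cost are illustrative assumptions. Occluded cameras appear as outliers among the samples, which the median suppresses.

```python
# Median photo-consistency over a synthetic aperture camera array.
# samples[d, c] holds camera c's reprojected image at candidate depth d.
import numpy as np

def depth_and_all_in_focus(samples, depth_values):
    med = np.median(samples, axis=1)                  # robust consensus view
    cost = np.mean(np.abs(samples - med[:, None]), axis=1)  # deviation cost
    best = np.argmin(cost, axis=0)                    # per-pixel depth index
    h, w = best.shape
    aif = med[best, np.arange(h)[:, None], np.arange(w)]  # all-in-focus image
    return depth_values[best], aif

# Toy usage: 8 candidate depths, a 5x5 camera array, 32x32 images.
samples = np.random.rand(8, 25, 32, 32)
depth_map, aif = depth_and_all_in_focus(samples, np.linspace(1.0, 3.0, 8))
```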