From Calibration to Large-Scale Structure from Motion with Light Fields
Classic pinhole cameras project the multi-dimensional information of the light flowing through a scene onto a single 2D snapshot. This projection limits the information that can be reconstructed from the 2D acquisition. Plenoptic (or light field) cameras, on the other hand, capture a 4D slice of the plenoptic function, termed the “light field”. These cameras provide both spatial and angular information on the light flowing through a scene; multiple views are captured in a single photographic exposure, facilitating various applications. This thesis is concerned with the modelling of light field (or plenoptic) cameras and the development of structure from motion pipelines using such cameras. Specifically, we develop a geometric model for a multi-focus plenoptic camera, followed by a complete pipeline for the calibration of the suggested model. Given a calibrated light field camera, we then remap the captured light field to a grid of pinhole images. We use these images to obtain a metric 3D reconstruction through a novel framework for structure from motion with light fields. Finally, we suggest a linear and efficient approach for absolute pose estimation for light fields.
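The remapping step described in this abstract, from a captured light field to a grid of pinhole images, can be illustrated with a minimal sketch. It assumes the light field is stored as a 4D array indexed by angular coordinates (u, v) and spatial coordinates (s, t); the array layout and all names are assumptions for illustration, not the thesis's actual implementation:

```python
import numpy as np

def light_field_to_views(lf):
    """Return a dict mapping each angular index (u, v) to its 2D sub-aperture
    view. Each fixed (u, v) slice of the 4D array is one pinhole-like image."""
    U, V, S, T = lf.shape
    return {(u, v): lf[u, v] for u in range(U) for v in range(V)}

# Toy 5x5 grid of 32x32 views.
lf = np.random.rand(5, 5, 32, 32)
views = light_field_to_views(lf)
```

Each extracted view can then be fed to a conventional pinhole-based structure from motion pipeline, which is the premise the thesis builds on.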
Baseline and triangulation geometry in a standard plenoptic camera
In this paper, we demonstrate light field triangulation to determine depth distances and baselines in a plenoptic camera. The advancement of micro lenses and image sensors has enabled plenoptic cameras to capture a scene from different viewpoints with sufficient spatial resolution. While object distances can be inferred from disparities in a stereo viewpoint pair using triangulation, this concept remains ambiguous when applied to plenoptic cameras. We present a geometrical light field model allowing triangulation to be applied to a plenoptic camera in order to predict object distances or to specify baselines as desired. It is shown that distance estimates from our novel method match those of real objects placed in front of the camera. Additional benchmark tests with optical design software further validate the model’s accuracy, with deviations of less than 0.33% for several main lens types and focus settings. A variety of applications in the automotive and robotics fields can benefit from this estimation model.
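The stereo triangulation that this paper generalizes to plenoptic cameras is the classic relation depth = focal length × baseline / disparity. A minimal sketch, with all numbers purely illustrative (the paper's plenoptic-specific baseline model is more involved):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic stereo triangulation: depth = focal * baseline / disparity.
    focal_px in pixels, baseline_m in metres, disparity_px in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# e.g. 1000 px focal length, 1 mm baseline (a plausible plenoptic-scale
# baseline), 2 px disparity
z = depth_from_disparity(1000.0, 0.001, 2.0)  # -> 0.5 m
```

The ambiguity the paper addresses is precisely which effective baseline and focal length to plug into this relation for a given pair of plenoptic viewpoints.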
Deep Depth From Focus
Depth from focus (DFF) is one of the classical ill-posed inverse problems in
computer vision. Most approaches recover the depth at each pixel based on the
focal setting which exhibits maximal sharpness. Yet, it is not obvious how to
reliably estimate the sharpness level, particularly in low-textured areas. In
this paper, we propose `Deep Depth From Focus (DDFF)' as the first end-to-end
learning approach to this problem. One of the main challenges we face is the
data hunger of deep neural networks. In order to obtain a significant
amount of focal stacks with corresponding ground-truth depth, we propose to
leverage a light-field camera with a co-calibrated RGB-D sensor. This allows us
to digitally create focal stacks of varying sizes. Compared to existing
benchmarks our dataset is 25 times larger, enabling the use of machine learning
for this inverse problem. We compare our results with state-of-the-art DFF
methods and we also analyze the effect of several key deep architectural
components. These experiments show that our proposed method `DDFFNet' achieves
state-of-the-art performance in all scenes, reducing depth error by more than
75% compared to the classical DFF methods.

Comment: accepted to Asian Conference on Computer Vision (ACCV) 201
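The classical DFF recipe the abstract refers to, picking per pixel the focal slice of maximal sharpness, can be sketched with a simple Laplacian focus measure. This is a generic illustration of the baseline approach, not DDFFNet:

```python
import numpy as np

def laplacian_sharpness(img):
    """Per-pixel sharpness via a discrete Laplacian, one common focus measure.
    np.roll wraps at the borders, which is fine for this toy sketch."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return np.abs(lap)

def depth_from_focus(stack):
    """For a focal stack of shape (n_slices, H, W), return, per pixel, the
    index of the slice with maximal sharpness (a proxy for depth)."""
    sharp = np.stack([laplacian_sharpness(s) for s in stack])
    return np.argmax(sharp, axis=0)
```

The failure mode motivating the paper is visible here: in low-textured regions the Laplacian response is near zero in every slice, so the argmax is essentially arbitrary.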
Leveraging blur information for plenoptic camera calibration
This paper presents a novel calibration algorithm for plenoptic cameras,
especially the multi-focus configuration, where several types of micro-lenses
are used, from raw images alone. Current calibration methods rely on simplified
projection models, use features from reconstructed images, or require separated
calibrations for each type of micro-lens. In the multi-focus configuration, the
same part of a scene will demonstrate different amounts of blur according to
the micro-lens focal length. Usually, only micro-images with the smallest
amount of blur are used. In order to exploit all available data, we propose to
explicitly model the defocus blur in a new camera model with the help of our
newly introduced Blur Aware Plenoptic (BAP) feature. First, it is used in a
pre-calibration step that retrieves initial camera parameters, and second, to
express a new cost function to be minimized in our single optimization process.
Third, it is exploited to calibrate the relative blur between micro-images. It
links the geometric blur, i.e., the blur circle, to the physical blur, i.e.,
the point spread function. Finally, we use the resulting blur profile to
characterize the camera's depth of field. Quantitative evaluations in a
controlled environment on real-world data demonstrate the effectiveness of our
calibrations.

Comment: arXiv admin note: text overlap with arXiv:2004.0774
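The geometric blur the abstract mentions, i.e., the blur circle, follows from the thin-lens model. The sketch below computes the circle-of-confusion diameter by similar triangles; it is a generic optics illustration, not the paper's BAP feature, and all parameter values are hypothetical:

```python
def thin_lens_image_distance(f, d):
    """Image distance v from the thin-lens equation 1/f = 1/d + 1/v."""
    return 1.0 / (1.0 / f - 1.0 / d)

def blur_circle_diameter(f, aperture, d_focus, d_object):
    """Geometric blur (circle of confusion) diameter on the sensor.
    The ray cone of width `aperture` converges at the object's image
    distance, while the sensor sits at the in-focus plane's image
    distance; similar triangles give the cone width at the sensor."""
    v_focus = thin_lens_image_distance(f, d_focus)
    v_object = thin_lens_image_distance(f, d_object)
    return aperture * abs(v_focus - v_object) / v_object

# f = 50 mm lens, 10 mm aperture, focused at 2 m; an object at 1 m is
# defocused, an object at 2 m is perfectly in focus.
c_defocused = blur_circle_diameter(0.05, 0.01, 2.0, 1.0)
c_in_focus = blur_circle_diameter(0.05, 0.01, 2.0, 2.0)
```

In the paper's terms, this geometric quantity is what gets linked to the physical blur (the point spread function) during calibration.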