The suitability of lightfield camera depth maps for coordinate measurement applications
Plenoptic cameras can capture 3D information in a single exposure without the need for structured illumination, allowing grey-scale depth maps of the captured image to be created. The Lytro, a consumer-grade plenoptic camera, provides a cost-effective method of measuring the depth of multiple objects under controlled lighting conditions. In this research, camera control variables, environmental sensitivity, image distortion characteristics, and the effective working range of two Lytro first-generation cameras were evaluated. In addition, a calibration process was created for the Lytro cameras to deliver three-dimensional output depth maps expressed in SI units (metres). The novel results show depth accuracy and repeatability of +10.0 mm to −20.0 mm and 0.5 mm, respectively. For the lateral X and Y coordinates, the accuracy was +1.56 m to −2.59 m and the repeatability was 0.25 µm.
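The calibration idea described above can be sketched as fitting a mapping from the camera's grey-scale depth codes to metric depth using targets placed at known distances. This is a minimal illustrative sketch, not the paper's actual procedure; the function names, polynomial form, and all numbers are assumptions.

```python
import numpy as np

def fit_depth_calibration(grey_values, known_depths_m, degree=2):
    """Fit a polynomial mapping grey-scale depth codes to metres,
    using reference targets placed at known distances."""
    return np.polyfit(grey_values, known_depths_m, degree)

def grey_to_metres(coeffs, grey_map):
    """Apply the fitted calibration to an entire depth map."""
    return np.polyval(coeffs, grey_map)

# Hypothetical calibration data: targets at 0.30 m .. 0.70 m produced
# grey codes 200 .. 40 (the true relation here is linear by construction).
grey = np.array([200.0, 160.0, 120.0, 80.0, 40.0])
depth = np.array([0.30, 0.40, 0.50, 0.60, 0.70])
coeffs = fit_depth_calibration(grey, depth)
print(grey_to_metres(coeffs, np.array([100.0]))[0])  # metric depth estimate
```

A polynomial fit of modest degree keeps the mapping smooth while absorbing mild non-linearity between the camera's depth codes and true distance.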
Baseline and triangulation geometry in a standard plenoptic camera
In this paper, we demonstrate light field triangulation to determine depth distances and baselines in a plenoptic camera. Advances in micro lenses and image sensors have enabled plenoptic cameras to capture a scene from different viewpoints with sufficient spatial resolution. While object distances can be inferred from disparities in a stereo viewpoint pair using triangulation, this concept remains ambiguous when applied in the case of plenoptic cameras. We present a geometrical light field model that allows triangulation to be applied to a plenoptic camera in order to predict object distances or to specify baselines as desired. It is shown that distance estimates from our novel method match those of real objects placed in front of the camera. Additional benchmark tests with an optical design software further validate the model's accuracy, with deviations of less than 0.33 % for several main lens types and focus settings. A variety of applications in the automotive and robotics fields can benefit from this estimation model.
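The triangulation relation underlying the abstract above is the standard one: object distance follows from the disparity between two viewpoints as Z = f · B / d. A minimal sketch, where the baseline between plenoptic sub-aperture views plays the role of B; the numeric values are assumptions for illustration only:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate object distance Z = f * B / d from the disparity
    between two viewpoints (focal length in pixels, baseline in metres)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: focal length 800 px, baseline 1 mm, disparity 2 px.
print(depth_from_disparity(800.0, 0.001, 2.0))  # 0.4 (metres)
```

The small baselines between sub-aperture views are exactly why a geometric model predicting them is useful: with B in the millimetre range, small disparity errors translate into large depth errors.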
Deep Depth From Focus
Depth from focus (DFF) is one of the classical ill-posed inverse problems in
computer vision. Most approaches recover the depth at each pixel based on the
focal setting which exhibits maximal sharpness. Yet, it is not obvious how to
reliably estimate the sharpness level, particularly in low-textured areas. In
this paper, we propose `Deep Depth From Focus (DDFF)' as the first end-to-end
learning approach to this problem. One of the main challenges we face is the
data hunger of deep neural networks. To obtain a significant number of
focal stacks with corresponding ground-truth depth, we propose to
leverage a light-field camera with a co-calibrated RGB-D sensor. This allows us
to digitally create focal stacks of varying sizes. Compared to existing
benchmarks our dataset is 25 times larger, enabling the use of machine learning
for this inverse problem. We compare our results with state-of-the-art DFF
methods and we also analyze the effect of several key deep architectural
components. These experiments show that our proposed method `DDFFNet' achieves
state-of-the-art performance in all scenes, reducing depth error by more than
75% compared to the classical DFF methods.
Comment: accepted to Asian Conference on Computer Vision (ACCV) 2018
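The classical DFF baseline that the abstract contrasts with can be sketched in a few lines: for each pixel, pick the focal-stack slice of maximal local sharpness. A squared discrete Laplacian is used here as the sharpness measure; real pipelines use more robust measures, and the toy stack below is an assumption for illustration.

```python
import numpy as np

def laplacian_sq(img):
    """Squared discrete Laplacian as a simple per-pixel sharpness measure."""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return lap ** 2

def depth_from_focus(stack):
    """stack: (S, H, W) focal stack -> (H, W) map of sharpest slice index,
    which serves as a (relative) depth estimate."""
    sharpness = np.stack([laplacian_sq(s) for s in stack])
    return np.argmax(sharpness, axis=0)

# Toy stack: slice 1 contains a sharp dot, the other slices are flat.
stack = np.zeros((3, 5, 5))
stack[1, 2, 2] = 1.0
print(depth_from_focus(stack)[2, 2])  # 1
```

The weakness the paper targets is visible even in this sketch: in flat, low-textured regions every slice has near-zero sharpness, so the argmax is essentially arbitrary, which is what motivates learning the estimate end-to-end.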
Exploring plenoptic properties of correlation imaging with chaotic light
In a setup illuminated by chaotic light, we consider different schemes that enable imaging by measuring second-order intensity correlations. The most relevant feature of the proposed protocols is the ability to perform plenoptic imaging, namely to reconstruct the geometrical path of light propagating in the system by imaging both the object and the focusing element. This property allows one to encode, in a single data acquisition, both multi-perspective images of the scene and the light distribution in different planes between the scene and the focusing element. We unveil the plenoptic property of three different setups, explore their refocusing potential, and discuss their practical applications.
Comment: 9 pages, 4 figures
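The measurement at the heart of such schemes is the second-order intensity correlation between two detector arrays, G2(xa, xb) = ⟨Ia(xa) Ib(xb)⟩ − ⟨Ia(xa)⟩⟨Ib(xb)⟩, estimated over many frames of chaotic light. A minimal sketch with synthetic, assumed data (the array shapes, noise levels, and frame count are not from the paper):

```python
import numpy as np

def g2(frames_a, frames_b):
    """Second-order correlation estimate over N frames.
    frames_a: (N, Ha), frames_b: (N, Hb) -> (Ha, Hb) correlation matrix."""
    mean_a = frames_a.mean(axis=0)
    mean_b = frames_b.mean(axis=0)
    cross = np.einsum('ni,nj->ij', frames_a, frames_b) / frames_a.shape[0]
    return cross - np.outer(mean_a, mean_b)

# Synthetic chaotic-light stand-in: both arms share a common fluctuation,
# plus independent detector noise in each arm.
rng = np.random.default_rng(0)
shared = rng.normal(size=(1000, 4))
frames_a = shared + rng.normal(scale=0.1, size=(1000, 4))
frames_b = shared + rng.normal(scale=0.1, size=(1000, 4))
corr = g2(frames_a, frames_b)
print(corr.shape)  # (4, 4); the diagonal dominates, exposing the correlation
```

In the actual protocols the two arms image different planes (object and focusing element), so the structure of this correlation matrix, rather than either intensity alone, carries the plenoptic information.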
Spatial and Angular Resolution Enhancement of Light Fields Using Convolutional Neural Networks
Light field imaging extends traditional photography by capturing both the spatial and angular distribution of light, which enables new capabilities, including post-capture refocusing, post-capture aperture control, and depth estimation from a single shot. Micro-lens array (MLA) based light field cameras offer a cost-effective approach to capturing light fields. A major drawback of MLA-based light field cameras is low spatial resolution, due to the fact that a single image sensor is shared to capture both spatial and angular information. In this paper, we present a learning-based light field enhancement approach. Both the spatial and angular resolution of the captured light field are enhanced using convolutional neural networks. The proposed method is tested with real light field data captured with a Lytro light field camera, clearly demonstrating spatial and angular resolution improvement.
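To make the angular super-resolution task concrete, here is the naive baseline a learned network improves on: synthesizing an in-between sub-aperture view as a plain blend of its two angular neighbours. This sketch is an assumption for illustration, not the paper's method; a CNN replaces the fixed average with a trained, disparity-aware mapping.

```python
import numpy as np

def interpolate_view(view_left, view_right):
    """Naive novel-view synthesis: linear blend of two neighbouring
    sub-aperture views (ignores disparity, so it blurs parallax)."""
    return 0.5 * (view_left + view_right)

# Two hypothetical neighbouring sub-aperture views (constant for clarity).
left = np.full((4, 4), 0.2)
right = np.full((4, 4), 0.6)
mid = interpolate_view(left, right)
print(mid[0, 0])  # 0.4
```

The blend ghosts any object with non-zero disparity between the input views, which is precisely the artifact that motivates learning the interpolation instead.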
Comparison of the Depth Accuracy of a Plenoptic Camera and a Stereo Camera System in Spatially Tracking Single Refuse-derived Fuel Particles in a Drop Shaft
With the development of depth cameras over the last decades, several camera types, such as plenoptic cameras and stereo camera systems, can acquire 3D information about the captured scene. Because the various depth cameras differ in principle and construction, each type has particular advantages and disadvantages. A comprehensive and detailed comparison of different cameras is therefore essential for selecting the right camera for an application. Our research compared the depth accuracy and stability of a stereo camera system and a plenoptic camera by monitoring the settling processes of various refuse-derived fuel particles in a drop shaft. The particles are first detected using detection approaches, and the detections are subsequently associated using data association algorithms. The spatial particle trajectories are obtained through this tracking-by-detection approach, based on which the performance of the cameras is evaluated.
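The tracking-by-detection pipeline described above can be sketched at its simplest as greedy nearest-neighbour data association between consecutive frames under a distance gate. The function name, gate value, and coordinates are assumptions; real pipelines typically use stronger association (e.g. motion prediction or global assignment).

```python
import numpy as np

def associate(prev_pts, next_pts, gate=5.0):
    """Greedy nearest-neighbour association: link each detection in the
    previous frame to its closest unclaimed detection in the next frame,
    provided the distance is within the gate. Returns (i, j) index pairs."""
    pairs = []
    used = set()
    for i, p in enumerate(prev_pts):
        d = np.linalg.norm(next_pts - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= gate and j not in used:
            pairs.append((i, j))
            used.add(j)
    return pairs

# Two particles detected in consecutive frames (positions in, say, mm).
prev_pts = np.array([[0.0, 0.0], [10.0, 0.0]])
next_pts = np.array([[0.5, 0.2], [10.4, -0.1]])
print(associate(prev_pts, next_pts))  # [(0, 0), (1, 1)]
```

Chaining such frame-to-frame links over the whole drop sequence yields the spatial trajectories on which the two cameras' depth estimates can then be compared.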