3D Scene Modeling from Dense Video Light Fields
Light field imaging offers unprecedented opportunities for advanced scene analysis and modeling, with potential applications in domains such as augmented reality, 3D robotics, and microscopy. This paper illustrates the potential of dense video light fields for 3D scene modeling. We first recall the principles of plenoptic cameras and present a downloadable test dataset captured with a Raytrix 2.0 plenoptic camera. We then describe methods to estimate scene depth and to construct a 3D point cloud representation of the scene from the captured light field.
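The point cloud construction mentioned above can be sketched with the standard pinhole back-projection of a depth map; the intrinsics (fx, fy, cx, cy) below are illustrative placeholders, not values from the paper's Raytrix calibration:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (H, W) into an (N, 3) point cloud.

    Uses the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    Pixels with depth <= 0 are treated as invalid and skipped.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinate grids
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1)
```

With a constant depth map the points simply form a fronto-parallel plane, which is a quick sanity check on the geometry.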
Deep Depth From Focus
Depth from focus (DFF) is one of the classical ill-posed inverse problems in
computer vision. Most approaches recover the depth at each pixel based on the
focal setting which exhibits maximal sharpness. Yet, it is not obvious how to
reliably estimate the sharpness level, particularly in low-textured areas. In
this paper, we propose `Deep Depth From Focus (DDFF)' as the first end-to-end
learning approach to this problem. One of the main challenges we face is the
hunger for data of deep neural networks. In order to obtain a significant
amount of focal stacks with corresponding groundtruth depth, we propose to
leverage a light-field camera with a co-calibrated RGB-D sensor. This allows us
to digitally create focal stacks of varying sizes. Compared to existing
benchmarks our dataset is 25 times larger, enabling the use of machine learning
for this inverse problem. We compare our results with state-of-the-art DFF
methods and we also analyze the effect of several key deep architectural
components. These experiments show that our proposed method `DDFFNet' achieves
state-of-the-art performance in all scenes, reducing depth error by more than
75% compared to classical DFF methods.
Comment: accepted to the Asian Conference on Computer Vision (ACCV) 2018
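The classical per-pixel baseline described above (recover depth from the focal setting of maximal sharpness) can be sketched as follows; the discrete-Laplacian focus measure is one illustrative choice among the many used in DFF, not the paper's method:

```python
import numpy as np

def depth_from_focus(stack, focus_depths):
    """Classical DFF: per pixel, pick the depth of the sharpest slice.

    stack        : (S, H, W) focal stack of grayscale images.
    focus_depths : length-S sequence, the focus depth of each slice.
    Returns an (H, W) depth map.
    """
    # Sharpness measure: magnitude of a discrete Laplacian per slice.
    # np.roll wraps at image borders, which is acceptable for a sketch.
    lap = (4 * stack
           - np.roll(stack, 1, axis=1) - np.roll(stack, -1, axis=1)
           - np.roll(stack, 1, axis=2) - np.roll(stack, -1, axis=2))
    sharp = np.abs(lap)
    idx = np.argmax(sharp, axis=0)  # index of sharpest slice per pixel
    return np.asarray(focus_depths)[idx]
```

In low-textured regions the sharpness response is flat across slices, so the argmax is essentially arbitrary there, which is exactly the ill-posedness the abstract points out.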
Performance Metrics and Test Data Generation for Depth Estimation Algorithms
This thesis investigates performance metrics and test datasets used for the evaluation of depth estimation algorithms.
Stereo and light field algorithms take structured camera images as input to reconstruct a depth map of the depicted scene. Such depth estimation algorithms are employed in a multitude of practical applications such as industrial inspection and the movie industry. Recently, they have also been used for safety-relevant applications such as driver assistance and computer assisted surgery.
Despite this increasing practical relevance, depth estimation algorithms are still evaluated with simple error measures and on small academic datasets. To develop and select suitable and safe algorithms, it is essential to gain a thorough understanding of their respective strengths and weaknesses.
In this thesis, I demonstrate that computing average pixel errors of depth estimation algorithms is not sufficient for a thorough and reliable performance analysis. The analysis must also take into account the specific requirements of the given applications as well as the characteristics of the available test data.
I propose metrics to explicitly quantify depth estimation results at continuous surfaces, depth discontinuities, and fine structures. These geometric entities are particularly relevant for many applications and challenging for algorithms. In contrast to prevalent metrics, the proposed metrics take into account that pixels are neither spatially independent within an image nor uniformly challenging nor equally relevant.
Apart from performance metrics, test datasets play an important role for evaluation. Their availability is typically limited in quantity, quality, and diversity. I show how test data deficiencies can be overcome by using specific metrics, additional annotations, and stratified test data.
Using systematic test cases, a user study, and a comprehensive case study, I demonstrate that the proposed metrics, test datasets, and visualizations allow for a meaningful quantitative analysis of the strengths and weaknesses of different algorithms. In contrast to existing evaluation methodologies, application-specific priorities can be taken into account to identify the most suitable algorithms.
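One family of metrics in this spirit restricts the error computation to a mask marking a geometric entity of interest (e.g. depth discontinuities or fine structures) and reports the fraction of bad pixels there. A minimal sketch, assuming a disparity-style error threshold of 0.07 (a common benchmark convention, not necessarily the thesis's exact value):

```python
import numpy as np

def bad_pix(estimate, ground_truth, mask, threshold=0.07):
    """Fraction of masked pixels whose absolute error exceeds `threshold`.

    Restricting the evaluation to `mask` (e.g. pixels near depth
    discontinuities) weights regions by relevance instead of averaging
    errors uniformly over the whole image.
    """
    err = np.abs(estimate - ground_truth)
    region = mask.astype(bool)
    return np.count_nonzero(err[region] > threshold) / np.count_nonzero(region)
```

Computed over a discontinuity mask versus a smooth-surface mask, the same algorithm can score very differently, which is the kind of strength/weakness breakdown a plain average pixel error hides.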
Probabilistic-based Feature Embedding of 4-D Light Fields for Compressive Imaging and Denoising
The high-dimensional nature of the 4-D light field (LF) poses great
challenges for efficient and effective feature embedding, which severely
impacts the performance of downstream tasks. To tackle this issue, and in
contrast to existing methods with empirically designed architectures, we
propose a probabilistic-based feature embedding (PFE), which learns a
feature embedding architecture by assembling various low-dimensional
convolution patterns in a probability space to fully capture spatial-angular
information. Building upon the proposed PFE, we then leverage the intrinsic
linear imaging model of the coded aperture camera to construct a
cycle-consistent 4-D LF reconstruction network from coded measurements.
Moreover, we incorporate PFE into an iterative optimization framework for 4-D
LF denoising. Our extensive experiments demonstrate the significant superiority
of our methods on both real-world and synthetic 4-D LF images, both
quantitatively and qualitatively, when compared with state-of-the-art methods.
The source code will be publicly available at
https://github.com/lyuxianqiang/LFCA-CR-NET
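The linear imaging model of a coded aperture camera mentioned above can be sketched as a code-weighted sum over the angular views of the 4-D light field; the tensor layout (U, V, H, W) is an assumption for illustration, not the repository's actual convention:

```python
import numpy as np

def coded_aperture_measurement(lf, code):
    """Simulate one coded 2-D measurement from a 4-D light field.

    lf   : (U, V, H, W) light field, angular dims first.
    code : (U, V) aperture transmittance pattern in [0, 1].
    Returns the (H, W) measurement: each angular view is weighted by its
    aperture transmittance and the views are summed on the sensor --
    the linear model a reconstruction network must invert.
    """
    return np.tensordot(code, lf, axes=([0, 1], [0, 1]))
```

Because this forward model is linear, reconstruction from such measurements is a compressive sensing problem: many 4-D light fields are consistent with one 2-D measurement, and the learned feature embedding supplies the prior that disambiguates them.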