Depth Estimation Through a Generative Model of Light Field Synthesis
Light field photography captures rich structural information that may
facilitate a number of traditional image processing and computer vision tasks.
A crucial ingredient in such endeavors is accurate depth recovery. We present a
novel framework that allows the recovery of a high quality continuous depth map
from light field data. To this end we propose a generative model of a light
field that is fully parametrized by its corresponding depth map. The model
allows for the integration of powerful regularization techniques such as a
non-local means prior, facilitating accurate depth map estimation. Comment: German Conference on Pattern Recognition (GCPR) 201
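The forward model underlying such a depth-parametrized generative light field can be illustrated with a minimal sketch, assuming a simple horizontal-baseline layout (nearest-neighbor warping, no occlusion handling; not the paper's implementation):

```python
import numpy as np

def synthesize_view(center, depth, baseline):
    """Warp the central view to a neighboring light-field view.

    This is the forward model a depth-parametrized light field rests on:
    each pixel shifts horizontally by disparity = baseline / depth (focal
    length folded into `baseline` for brevity). Nearest-neighbor backward
    warping, no occlusion handling -- an illustrative sketch only.
    """
    h, w = center.shape
    ys, xs = np.mgrid[0:h, 0:w]
    disp = baseline / depth                          # per-pixel disparity
    src_x = np.clip(np.rint(xs - disp).astype(int), 0, w - 1)
    return center[ys, src_x]
```

Comparing such synthesized views against the recorded ones is what makes the depth map itself the only free parameter of the model.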
Deep Depth From Focus
Depth from focus (DFF) is one of the classical ill-posed inverse problems in
computer vision. Most approaches recover the depth at each pixel based on the
focal setting which exhibits maximal sharpness. Yet, it is not obvious how to
reliably estimate the sharpness level, particularly in low-textured areas. In
this paper, we propose `Deep Depth From Focus (DDFF)' as the first end-to-end
learning approach to this problem. One of the main challenges we face is the
data hunger of deep neural networks. In order to obtain a significant
amount of focal stacks with corresponding ground-truth depth, we propose to
leverage a light-field camera with a co-calibrated RGB-D sensor. This allows us
to digitally create focal stacks of varying sizes. Compared to existing
benchmarks our dataset is 25 times larger, enabling the use of machine learning
for this inverse problem. We compare our results with state-of-the-art DFF
methods and we also analyze the effect of several key deep architectural
components. These experiments show that our proposed method `DDFFNet' achieves
state-of-the-art performance in all scenes, reducing depth error by more than
75% compared to the classical DFF methods. Comment: accepted to Asian Conference on Computer Vision (ACCV) 201
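The classical DFF baseline referred to above, taking the per-pixel argmax of a sharpness measure across the focal stack, can be sketched as follows (Laplacian magnitude is one common sharpness choice among many; this is not the DDFFNet pipeline):

```python
import numpy as np

def depth_from_focus(focal_stack, focal_depths):
    """Classical depth-from-focus: per pixel, pick the sharpest focal slice.

    Sharpness is measured with a discrete Laplacian magnitude (one common
    choice). focal_stack has shape (S, H, W); focal_depths maps each slice
    index to the depth at which that slice is in focus.
    """
    sharpness = np.stack([
        np.abs(4 * img
               - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0)
               - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))
        for img in focal_stack
    ])
    best = np.argmax(sharpness, axis=0)          # (H, W) winning slice index
    return np.asarray(focal_depths)[best]        # map index -> metric depth
```

In low-textured regions the sharpness response is nearly flat across slices, which is precisely the failure mode that motivates a learned approach.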
Efficient and Accurate Disparity Estimation from MLA-Based Plenoptic Cameras
This manuscript focuses on the processing of images from microlens-array based plenoptic cameras. These cameras capture the light field in a single shot, recording a greater amount of information than conventional cameras and enabling a whole new set of applications. However, the enhanced information introduces additional challenges and results in higher computational effort. For one, the image is composed of thousands of micro-lens images, making it an unusual case for standard image processing algorithms. Secondly, the disparity information has to be estimated from those micro-images to create a conventional image and a three-dimensional representation. The work in this thesis is therefore devoted to analysing plenoptic images and proposing methodologies to deal with them. A full framework for plenoptic cameras has been built, including the contributions described in this thesis: a blur-aware calibration method to model a plenoptic camera, an optimization method to accurately select the best microlens combination, and an overview of the different types of plenoptic cameras and their representations. Datasets consisting of both real and synthetic images have been used to create a benchmark for different disparity estimation algorithms and to inspect the behaviour of disparity under different compression rates. Finally, a robust depth estimation approach has been developed for light field microscopy and images of biological samples.
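As a toy illustration of the kind of micro-image disparity estimation discussed above (not an algorithm from the thesis), the shift between two neighboring micro-lens images can be found by simple sum-of-squared-differences patch matching:

```python
import numpy as np

def microimage_disparity(left, right, patch, max_disp):
    """Estimate horizontal disparity between two neighboring micro-images.

    `patch` is a (row, col, size) window in `left`; the function returns
    the horizontal shift in `right` that matches it best under an SSD
    cost. A minimal stand-in for the disparity estimators benchmarked in
    the thesis, not their method.
    """
    y, x, s = patch
    ref = left[y:y + s, x:x + s]
    costs = [np.sum((right[y:y + s, x + d:x + d + s] - ref) ** 2)
             for d in range(max_disp + 1)]
    return int(np.argmin(costs))                 # best-matching shift in px
```

Running such a matcher over every pair of adjacent micro-images is what turns the raw plenoptic capture into a disparity map and, from there, a conventional rendered image.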
3D Face Reconstruction from Light Field Images: A Model-free Approach
Reconstructing 3D facial geometry from a single RGB image has recently
instigated wide research interest. However, it is still an ill-posed problem
and most methods rely on prior models hence undermining the accuracy of the
recovered 3D faces. In this paper, we exploit the Epipolar Plane Images (EPI)
obtained from light field cameras and learn CNN models that recover horizontal
and vertical 3D facial curves from the respective horizontal and vertical EPIs.
Our 3D face reconstruction network (FaceLFnet) comprises a densely connected
architecture to learn accurate 3D facial curves from low resolution EPIs. To
train the proposed FaceLFnets from scratch, we synthesize photo-realistic light
field images from 3D facial scans. The curve-by-curve 3D face estimation
approach allows the networks to learn from only 14K images of 80 identities,
which still comprise over 11 million EPIs/curves. The estimated facial curves
are merged into a single pointcloud to which a surface is fitted to get the
final 3D face. Our method is model-free, requires only a few training samples
to learn FaceLFnet and can reconstruct 3D faces with high accuracy from single
light field images under varying poses, expressions and lighting conditions.
Comparisons on the BU-3DFE and BU-4DFE datasets show that our method reduces
reconstruction errors by over 20% compared to the recent state of the art.
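The EPIs the networks above operate on are simply 2-D slices of the 4-D light field; assuming a (V, H, W) array of horizontally displaced views (a layout chosen for this sketch), a horizontal EPI can be extracted as:

```python
import numpy as np

def horizontal_epi(light_field, row):
    """Extract a horizontal epipolar-plane image (EPI).

    `light_field` holds V horizontally displaced views, shape (V, H, W).
    Fixing image row `row` and stacking it across the views yields a
    (V, W) slice in which every scene point traces a straight line whose
    slope equals its disparity -- the structure a CNN can read depth from.
    """
    return light_field[:, row, :]
```

A vertical EPI is the analogous slice over the vertically displaced views, which is why the paper trains separate networks for horizontal and vertical facial curves.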
Coherent multi-dimensional segmentation of multiview images using a variational framework and applications to image based rendering
Image Based Rendering (IBR) and in particular light field rendering has attracted a lot of
attention for interpolating new viewpoints from a set of multiview images. New images of
a scene are interpolated directly from nearby available ones, thus enabling a photorealistic
rendering. Sampling theory for light fields has shown that exact geometric information
in the scene is often unnecessary for rendering new views. Indeed, the plenoptic function
is approximately band-limited, and new views can be rendered using classical interpolation
methods. However, IBR using undersampled light fields suffers from aliasing effects and
is particularly difficult when the scene has large depth variations and occlusions. In order
to deal with these cases, we study two approaches:
New sampling schemes have recently emerged that are able to perfectly reconstruct
certain classes of parametric signals that are not bandlimited but characterized by a finite
number of parameters. In this context, we derive novel sampling schemes for piecewise
sinusoidal and polynomial signals. In particular, we show that a piecewise sinusoidal signal
with arbitrarily high frequencies can be exactly recovered given certain conditions. These
results are applied to parametric multiview data that are not bandlimited.
We also focus on the problem of extracting regions (or layers) in multiview images
that can be individually rendered free of aliasing. The problem is posed in a multidimensional
variational framework using region competition. Extending previous
methods, layers are considered as multi-dimensional hypervolumes, so the segmentation
is done jointly over all the images and coherence is imposed throughout the
data. However, instead of propagating active hypersurfaces, we derive a semi-parametric
methodology that takes into account the constraints imposed by the camera setup and the
occlusion ordering. The resulting framework is a global multi-dimensional region competition that is consistent in all the images and efficiently handles occlusions. We show the
validity of the approach on captured light fields. Other special effects, such as augmented
reality and the disocclusion of hidden objects, are also demonstrated.
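The rendering-by-interpolation idea this abstract builds on can be sketched as a linear blend of two captured views (a deliberately minimal illustration, valid only for well-sampled light fields):

```python
import numpy as np

def interpolate_view(view_a, view_b, t):
    """Render an intermediate viewpoint as a blend of two captured views.

    This is the simplest instance of light field rendering: when the
    light field is sampled densely enough (approximately band-limited),
    classical interpolation between nearby views is photorealistic.
    Undersampling produces the ghosting/aliasing that the layered
    segmentation described above is designed to avoid. t in [0, 1]
    places the virtual camera between the two real ones.
    """
    return (1.0 - t) * view_a + t * view_b
```

Rendering each extracted layer separately and compositing in occlusion order is what lets the same interpolation succeed on scenes with large depth variations.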