Deep Convolutional Neural Fields for Depth Estimation from a Single Image
We consider the problem of depth estimation from a single monocular image. It
is a challenging task, as no reliable depth cues, e.g., stereo correspondences
or motion, are available. Previous efforts have focused on exploiting geometric
priors or additional sources of information, all relying on hand-crafted
features. Recently, mounting evidence has shown that features from deep
convolutional neural networks (CNNs) are setting new records for various vision
applications. On the other hand, given the continuous nature of depth values,
depth estimation can be naturally formulated as a continuous conditional random
field (CRF) learning problem. In this paper we therefore present a deep
convolutional neural field model for estimating depth from a single image,
aiming to jointly exploit the capacity of deep CNNs and continuous CRFs.
Specifically, we propose a deep structured learning scheme which learns the
unary and pairwise potentials of a continuous CRF in a unified deep CNN
framework.
The proposed method can be used for depth estimation of general scenes with no
geometric priors or any extra information injected. In our case, the integral
of the partition function can be calculated analytically, so we can exactly
solve the log-likelihood optimization. Moreover, solving the MAP problem to
predict the depths of a new image is highly efficient, as a closed-form
solution exists. We experimentally demonstrate that the proposed method
outperforms state-of-the-art depth estimation methods on both indoor and
outdoor scene datasets.

Comment: fixed some typos; in CVPR 2015 proceedings.
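The closed-form MAP inference mentioned in the abstract can be illustrated on a toy graph. The sketch below is an assumption-laden stand-in, not the paper's implementation: it takes a quadratic unary term pulling each depth toward a CNN-style regression and a quadratic pairwise smoothness term, for which setting the energy gradient to zero yields a single linear system. The 5-node chain, the affinities `R`, and all variable names are illustrative.

```python
import numpy as np

# Toy continuous-CRF MAP inference for depth (illustrative assumption, not
# the paper's code). Energy: E(y) = sum_i (y_i - z_i)^2
#                                  + sum_{i<j} r_ij (y_i - y_j)^2,
# where z_i is a unary (regression) prediction and r_ij >= 0 a pairwise
# affinity. Setting dE/dy = 0 gives the linear system (I + D - R) y = z
# with D = diag(row sums of R), so MAP inference is a single solve.

def crf_map_depth(z, R):
    """Closed-form MAP depths for the quadratic CRF energy above."""
    n = len(z)
    D = np.diag(R.sum(axis=1))
    A = np.eye(n) + D - R          # positive definite for r_ij >= 0
    return np.linalg.solve(A, z)

# Unary predictions for 5 nodes; node 2 is a noisy outlier.
z = np.array([1.0, 1.0, 5.0, 1.0, 1.0])
# Symmetric chain-graph pairwise affinities.
R = np.zeros((5, 5))
for i in range(4):
    R[i, i + 1] = R[i + 1, i] = 2.0

y = crf_map_depth(z, R)
# The smoothness term pulls the outlier at node 2 toward its neighbors,
# while the unary term keeps every y_i anchored near its z_i.
```

Because the energy is quadratic, both exact log-likelihood learning (via the analytic partition function) and this exact MAP step avoid the approximate inference that discrete CRFs usually require.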
Deep Eyes: Binocular Depth-from-Focus on Focal Stack Pairs
The human visual system relies on both binocular stereo cues and monocular
focusness cues to gain effective 3D perception. In computer vision, the two
problems have traditionally been solved in separate tracks. In this paper, we
present a unified learning-based technique that simultaneously uses both types
of cues for depth inference. Specifically, we use a pair of focal stacks as
input to emulate human perception. We first construct a comprehensive
focal-stack training dataset synthesized by depth-guided light field rendering.
We then construct three individual networks: a Focus-Net to extract depth from
a single focal stack, an EDoF-Net to obtain the extended-depth-of-field (EDoF)
image from the focal stack, and a Stereo-Net to conduct stereo matching. We
show how to integrate them into a unified BDfF-Net to obtain high-quality depth
maps. Comprehensive experiments show that our approach outperforms the
state-of-the-art in both accuracy and speed, and effectively emulates human
vision systems.
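The monocular focusness cue that Focus-Net learns end-to-end can be illustrated with a classical depth-from-focus baseline: for each pixel, pick the focal-stack slice with the highest local sharpness, and use the winning slice index as a coarse depth label. This sketch is not the paper's network; the Laplacian sharpness measure, the synthetic 3-slice stack, and all names are assumptions for illustration.

```python
import numpy as np

# Classical depth-from-focus on a synthetic focal stack (illustrative
# baseline for the focusness cue, not the paper's Focus-Net).

def laplacian(img):
    """Discrete 4-neighbor Laplacian, zero-padded at the border."""
    p = np.pad(img, 1)
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * img

def depth_from_focus(stack):
    """stack: (num_slices, H, W). Per-pixel index of the sharpest slice."""
    sharpness = np.abs([laplacian(s) for s in stack])
    return np.argmax(sharpness, axis=0)

def box_blur(img):
    """3x3 box filter, edge-padded, as a crude defocus stand-in."""
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

# Build a 3-slice stack: a random texture that is in focus in slice 1 and
# defocused (box-blurred) in slices 0 and 2.
rng = np.random.default_rng(0)
tex = rng.random((32, 32))
stack = np.stack([box_blur(tex), tex, box_blur(box_blur(tex))])

depth = depth_from_focus(stack)
# Most pixels should select slice 1, the in-focus slice.
```

Hand-crafted sharpness measures like this fail on textureless regions, which is one motivation for learning the cue with a network instead.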
Feature Mapping for Learning Fast and Accurate 3D Pose Inference from Synthetic Images
We propose a simple and efficient method for exploiting synthetic images when
training a Deep Network to predict a 3D pose from an image. The ability to use
synthetic images for training a Deep Network is extremely valuable, as it is
easy to create a virtually infinite training set of such images, whereas
capturing and annotating real images can be very cumbersome. However, synthetic
images do not resemble real images exactly, and using them for training can
result in suboptimal performance. It was recently shown that for exemplar-based
approaches, it is possible to learn a mapping from the exemplar representations
of real images to the exemplar representations of synthetic images. In this
paper, we show that this approach is more general, and that a network can also
be applied after the mapping to infer a 3D pose: At run time, given a real
image of the target object, we first compute the features for the image, map
them to the feature space of synthetic images, and finally use the resulting
features as input to another network which predicts the 3D pose. Since this
network can be trained very effectively by using synthetic images, it performs
very well in practice, and inference is faster and more accurate than with an
exemplar-based approach. We demonstrate our approach on the LINEMOD dataset for
3D object pose estimation from color images, and the NYU dataset for 3D hand
pose estimation from depth maps. We show that it allows us to outperform the
state-of-the-art on both datasets.

Comment: CVPR 201
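The run-time pipeline described in the abstract can be sketched with linear models standing in for the networks. Everything below is an illustrative assumption: the paper uses deep networks, whereas here a least-squares regressor plays the pose network, a least-squares linear map plays the mapping network, and the "features" and domain gap are synthetic.

```python
import numpy as np

# Sketch of the pipeline: (1) train a pose predictor on synthetic-image
# features only, (2) fit a mapping from real-image features to the
# synthetic feature space, (3) at run time, map a real image's features
# and then predict the pose. Linear stand-ins throughout (assumption).

rng = np.random.default_rng(0)
d_feat, d_pose, n = 16, 3, 200

# Synthetic-image features and their ground-truth poses.
F_syn = rng.standard_normal((n, d_feat))
W_pose_true = rng.standard_normal((d_feat, d_pose))
poses = F_syn @ W_pose_true

# Real-image features: a distorted view of the synthetic ones (domain gap).
A_gap = np.eye(d_feat) + 0.3 * rng.standard_normal((d_feat, d_feat))
F_real = F_syn @ A_gap

# (1) "Pose network": least-squares regressor fit on synthetic features.
W_pose, *_ = np.linalg.lstsq(F_syn, poses, rcond=None)
# (2) "Mapping network": least squares from real to synthetic features.
W_map, *_ = np.linalg.lstsq(F_real, F_syn, rcond=None)

# (3) Inference on a real image: map features first, then predict pose.
f_real = F_real[:1]
pose_mapped = f_real @ W_map @ W_pose
pose_naive = f_real @ W_pose        # skipping the mapping step

err_mapped = np.linalg.norm(pose_mapped - poses[:1])
err_naive = np.linalg.norm(pose_naive - poses[:1])
# Mapping into the synthetic feature space removes the domain gap, so
# err_mapped is far smaller than err_naive.
```

The design point the sketch captures: the pose predictor never sees real features directly, so it can be trained entirely on cheap synthetic data, and only the (simpler) mapping needs paired real/synthetic supervision.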