Learning Deep Object Detectors from 3D Models
Crowdsourced 3D CAD models are becoming easily accessible online, and can
potentially generate an infinite number of training images for almost any
object category. We show that augmenting the training data of contemporary Deep
Convolutional Neural Net (DCNN) models with such synthetic data can be
effective, especially when real training data is limited or not well matched to
the target domain. Most freely available CAD models capture 3D shape but are
often missing other low level cues, such as realistic object texture, pose, or
background. In a detailed analysis, we use synthetic CAD-rendered images to
probe the ability of DCNNs to learn without these cues, with surprising
findings. In particular, we show that when the DCNN is fine-tuned on the target
detection task, it exhibits a large degree of invariance to missing low-level
cues, but, when pretrained on generic ImageNet classification, it learns better
when the low-level cues are simulated. We show that our synthetic DCNN training
approach significantly outperforms previous methods on the PASCAL VOC2007
dataset when learning in the few-shot scenario and improves performance in a
domain shift scenario on the Office benchmark.
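The few-shot mixing of limited real data with synthetic CAD renderings described above can be sketched as follows (the function name and the capping scheme are illustrative assumptions, not the paper's code):

```python
import random

def build_training_set(real_images, synthetic_images, max_real=10, seed=0):
    """Few-shot augmentation sketch: cap the number of real examples
    and pad the training set with synthetic CAD-rendered images."""
    rng = random.Random(seed)
    real = rng.sample(real_images, min(max_real, len(real_images)))
    return real + list(synthetic_images)
```

The synthetic pool can be made arbitrarily large by re-rendering the same CAD models under varied poses, textures, and backgrounds.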
VConv-DAE: Deep Volumetric Shape Learning Without Object Labels
With the advent of affordable depth sensors, 3D capture becomes more and more
ubiquitous and already has made its way into commercial products. Yet,
capturing the geometry or complete shapes of everyday objects using scanning
devices (e.g. Kinect) still comes with several challenges that result in noise
or even incomplete shapes. Recent success in deep learning has shown how to
learn complex shape distributions in a data-driven way from large scale 3D CAD
Model collections and to utilize them for 3D processing on volumetric
representations and thereby circumventing problems of topology and
tessellation. Prior work has shown encouraging results on problems ranging from
shape completion to recognition. We provide an analysis of such approaches and
discover that training as well as the resulting representation are strongly and
unnecessarily tied to the notion of object labels. Thus, we propose a fully
convolutional volumetric autoencoder that learns a volumetric representation
from noisy data by estimating voxel occupancy grids. The proposed method
outperforms prior work on challenging tasks like denoising and shape
completion. We also show that the obtained deep embedding gives competitive
performance when used for classification and promising results for shape
interpolation.
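One way to picture the voxel occupancy grids such an autoencoder operates on is to quantize a point set into a binary grid; this NumPy sketch (function name and resolution are illustrative assumptions) shows the representation, not the network itself:

```python
import numpy as np

def voxel_occupancy(points, resolution=30):
    """Quantize a point cloud (shape (N, 3), coordinates in [0, 1))
    into a binary occupancy grid of the given resolution."""
    grid = np.zeros((resolution,) * 3, dtype=np.uint8)
    idx = np.clip((points * resolution).astype(int), 0, resolution - 1)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid
```

Noise or missing scan data then corresponds to spurious or absent occupied voxels, which the autoencoder is trained to correct.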
3D ShapeNets: A Deep Representation for Volumetric Shapes
3D shape is a crucial but heavily underutilized cue in today's computer
vision systems, mostly due to the lack of a good generic shape representation.
With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft
Kinect), it is becoming increasingly important to have a powerful 3D shape
representation in the loop. Apart from category recognition, recovering full 3D
shapes from view-based 2.5D depth maps is also a critical part of visual
understanding. To this end, we propose to represent a geometric 3D shape as a
probability distribution of binary variables on a 3D voxel grid, using a
Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the
distribution of complex 3D shapes across different object categories and
arbitrary poses from raw CAD data, and discovers hierarchical compositional
part representations automatically. It naturally supports joint object
recognition and shape completion from 2.5D depth maps, and it enables active
object recognition through view planning. To train our 3D deep learning model,
we construct ModelNet -- a large-scale 3D CAD model dataset. Extensive
experiments show that our 3D deep representation enables significant
performance improvements over the state of the art in a variety of tasks.
Comment: to appear in CVPR 201
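The probabilistic voxel representation suggests a simple picture of shape completion: voxels observed in the 2.5D depth map are kept, and unobserved ones are filled in from the model's per-voxel probabilities. A minimal sketch, with `probs` standing in for the network's output (the thresholding scheme is an illustrative assumption):

```python
import numpy as np

def complete_shape(observed, mask, probs, threshold=0.5):
    """Shape-completion sketch: keep observed voxels (mask == 1) and
    fill the unobserved ones by thresholding per-voxel Bernoulli
    probabilities predicted by a generative model."""
    filled = (probs >= threshold).astype(np.uint8)
    return np.where(mask == 1, observed, filled)
```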
A deep representation for depth images from synthetic data
Convolutional Neural Networks (CNNs) trained on large scale RGB databases
have become the secret sauce in the majority of recent approaches for object
categorization from RGB-D data. Thanks to colorization techniques, these
methods exploit the filters learned from 2D images to extract meaningful
representations in 2.5D. Still, the perceptual signature of these two kinds of
images is very different, with the first usually strongly characterized by
textures, and the second mostly by silhouettes of objects. Ideally, one would
like to have two CNNs, one for RGB and one for depth, each trained on a
suitable data collection, able to capture the perceptual properties of each
channel for the task at hand. This has not been possible so far, due to the
lack of a suitable depth database. This paper addresses this issue, proposing
to opt for synthetically generated images rather than collecting by hand a 2.5D
large scale database. While being clearly a proxy for real data, synthetic
images allow one to trade quality for quantity, making it possible to generate a
virtually infinite amount of data. We show that the very same architecture
typically used on visual data, when trained on such a collection, learns very
different filters, resulting in depth features (a) able to
better characterize the different facets of depth images, and (b) complementary
with respect to those derived from CNNs pre-trained on 2D datasets. Experiments
on two publicly available databases show the power of our approach.
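The colorization step mentioned above can be as simple as normalizing depth and replicating it across three channels so that an RGB-pretrained CNN accepts the input; a minimal sketch (real pipelines often use richer encodings such as surface normals or HHA):

```python
import numpy as np

def colorize_depth(depth):
    """Map a single-channel depth image to a 3-channel uint8 image by
    min-max normalizing and replicating the channel, so an
    RGB-pretrained CNN can consume it."""
    d = depth.astype(np.float64)
    d = (d - d.min()) / max(d.max() - d.min(), 1e-8)
    gray = (d * 255).astype(np.uint8)
    return np.stack([gray, gray, gray], axis=-1)
```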
Deep Exemplar 2D-3D Detection by Adapting from Real to Rendered Views
This paper presents an end-to-end convolutional neural network (CNN) for
2D-3D exemplar detection. We demonstrate that the ability to adapt the features
of natural images to better align with those of CAD rendered views is critical
to the success of our technique. We show that the adaptation can be learned by
compositing rendered views of textured object models on natural images. Our
approach can be naturally incorporated into a CNN detection pipeline and
extends the accuracy and speed benefits from recent advances in deep learning
to 2D-3D exemplar detection. We applied our method to two tasks: instance
detection, where we evaluated on the IKEA dataset, and object category
detection, where we outperform Aubry et al. for "chair" detection on a subset
of the Pascal VOC dataset.
Comment: To appear in CVPR 201
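Compositing a rendered view onto a natural image, as used above to learn the real-to-rendered adaptation, amounts to per-pixel alpha blending; a minimal NumPy sketch (array names and shapes are illustrative assumptions):

```python
import numpy as np

def composite(render_rgb, alpha, background):
    """Paste a rendered CAD view onto a natural background image using
    its alpha matte: out = a * fg + (1 - a) * bg, per pixel."""
    a = alpha[..., None].astype(np.float64)  # broadcast over RGB channels
    out = a * render_rgb + (1.0 - a) * background
    return out.astype(np.uint8)
```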