DeepContext: Context-Encoding Neural Pathways for 3D Holistic Scene Understanding
While deep neural networks have led to human-level performance on computer
vision tasks, they have yet to demonstrate similar gains for holistic scene
understanding. In particular, 3D context has been shown to be an extremely
important cue for scene understanding - yet very little research has been done
on integrating context information with deep models. This paper presents an
approach to embed 3D context into the topology of a neural network trained to
perform holistic scene understanding. Given a depth image depicting a 3D scene,
our network aligns the observed scene with a predefined 3D scene template, and
then reasons about the existence and location of each object within the scene
template. In doing so, our model recognizes multiple objects in a single
forward pass of a 3D convolutional neural network, capturing both global scene
and local object information simultaneously. To create training data for this
3D network, we generate partly hallucinated depth images which are rendered by
replacing real objects with a repository of CAD models of the same object
category. Extensive experiments demonstrate the effectiveness of our algorithm
compared to the state of the art. Source code and data are available at
http://deepcontext.cs.princeton.edu.
Comment: Accepted by ICCV 2017
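The pipeline the abstract describes — a single 3D convolutional forward pass that shares scene-level features and then reasons per template slot about each object's existence and location — can be sketched in miniature. This is a hedged illustration, not the authors' network: the template slots, the tiny random-weight convolution, and the global-pooling head are all stand-ins invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scene template: each slot is an object the network reasons
# about (existence + 3D location), as the abstract describes.
TEMPLATE = ["bed", "nightstand", "lamp"]

def conv3d_valid(vol, kernel):
    """Naive valid-mode 3D convolution over a single-channel voxel grid."""
    kd, kh, kw = kernel.shape
    d, h, w = vol.shape
    out = np.empty((d - kd + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(vol[i:i+kd, j:j+kh, k:k+kw] * kernel)
    return out

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict(voxels, weights):
    """One forward pass: shared 3D features, then per-slot heads emitting
    an existence probability and a 3D location offset."""
    feat = np.maximum(conv3d_valid(voxels, weights["kernel"]), 0.0)  # ReLU
    pooled = feat.mean()  # global pooling -> one scene-level feature
    out = {}
    for slot in TEMPLATE:
        w_exist, w_loc = weights[slot]
        out[slot] = {
            "exists": sigmoid(pooled * w_exist),  # P(object present in slot)
            "offset": pooled * w_loc,             # (dx, dy, dz) vs. template
        }
    return out

# Random weights stand in for a trained network.
weights = {"kernel": rng.standard_normal((3, 3, 3))}
for slot in TEMPLATE:
    weights[slot] = (rng.standard_normal(), rng.standard_normal(3))

voxels = rng.random((8, 8, 8))  # stand-in for a voxelized depth scene
preds = predict(voxels, weights)
```

All slots are scored in the one forward pass, which is the point the abstract makes: global scene context and local object evidence are captured simultaneously rather than by running a detector per object.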
Approximate Bayesian Image Interpretation using Generative Probabilistic Graphics Programs
The idea of computer vision as the Bayesian inverse problem to computer
graphics has a long history and an appealing elegance, but it has proved
difficult to directly implement. Instead, most vision tasks are approached via
complex bottom-up processing pipelines. Here we show that it is possible to
write short, simple probabilistic graphics programs that define flexible
generative models and to automatically invert them to interpret real-world
images. Generative probabilistic graphics programs consist of a stochastic
scene generator, a renderer based on graphics software, a stochastic likelihood
model linking the renderer's output and the data, and latent variables that
adjust the fidelity of the renderer and the tolerance of the likelihood model.
Representations and algorithms from computer graphics, originally designed to
produce high-quality images, are instead used as the deterministic backbone for
highly approximate and stochastic generative models. This formulation combines
probabilistic programming, computer graphics, and approximate Bayesian
computation, and depends only on general-purpose, automatic inference
techniques. We describe two applications: reading sequences of degraded and
adversarially obscured alphanumeric characters, and inferring 3D road models
from vehicle-mounted camera images. Each of the probabilistic graphics programs
we present relies on under 20 lines of probabilistic code, and supports
accurate, approximately Bayesian inferences about ambiguous real-world images.
Comment: The first two authors contributed equally to this work
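The four ingredients the abstract lists — a stochastic scene generator, a deterministic renderer, a likelihood (here collapsed to a distance plus a tolerance), and general-purpose inference — compose into a rejection-ABC loop. The sketch below is a toy invented for illustration (a single bright square as the "scene", pixelwise L2 as the distance), not the authors' system:

```python
import numpy as np

rng = np.random.default_rng(0)

def scene_prior():
    """Stochastic scene generator: the latent variable is the square's
    top-left corner on an 8x8 canvas (a stand-in for characters or roads)."""
    return rng.integers(0, 6, size=2)

def render(pos):
    """Deterministic graphics backbone: rasterize the scene to an image."""
    img = np.zeros((8, 8))
    x, y = pos
    img[x:x+3, y:y+3] = 1.0
    return img

def distance(rendered, observed):
    """Stand-in for the stochastic likelihood: pixelwise L2 distance."""
    return np.sqrt(np.sum((rendered - observed) ** 2))

def abc_infer(observed, n_samples=2000, tolerance=0.5):
    """Rejection-ABC: keep scene hypotheses whose rendering lands within
    `tolerance` of the data; the kept set approximates the posterior."""
    accepted = []
    for _ in range(n_samples):
        pos = scene_prior()
        if distance(render(pos), observed) <= tolerance:
            accepted.append(pos)
    return np.array(accepted)

true_pos = np.array([2, 3])
observed = render(true_pos)       # noiseless observation, for the sketch
posterior = abc_infer(observed)   # samples concentrate on the true latent
```

With a noiseless observation and a tight tolerance, every accepted sample renders to the data exactly, so the approximate posterior collapses onto the true square position; loosening `tolerance` (one of the abstract's fidelity/tolerance latents) trades accuracy for acceptance rate.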