3D Object Class Detection in the Wild
Object class detection has been a synonym for 2D bounding box localization
for the longest time, fueled by the success of powerful statistical learning
techniques, combined with robust image representations. Only recently, there
has been a growing interest in revisiting the promise of computer vision from
the early days: to precisely delineate the contents of a visual scene, object
by object, in 3D. In this paper, we draw from recent advances in object
detection and 2D-3D object lifting in order to design an object class detector
that is particularly tailored towards 3D object class detection. Our 3D object
class detection method consists of several stages gradually enriching the
object detection output with object viewpoint, keypoints and 3D shape
estimates. By careful design, each stage consistently improves performance,
and the full pipeline achieves state-of-the-art results in simultaneous 2D
bounding box and viewpoint estimation on the challenging Pascal3D+ dataset.
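The staged architecture the abstract describes can be pictured as a pipeline in which each stage only adds fields to the previous stage's output. The sketch below is a minimal illustration under that reading; the stage functions, field names, and returned values are hypothetical stubs, not the paper's actual models:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Detection:
    box: Tuple[int, int, int, int]          # stage 1: 2D bounding box (x, y, w, h)
    viewpoint: Optional[float] = None       # stage 2: azimuth in degrees
    keypoints: List[Tuple[int, int]] = field(default_factory=list)  # stage 3
    shape: Optional[str] = None             # stage 4: id of a matched 3D exemplar

def estimate_viewpoint(det: Detection) -> Detection:
    det.viewpoint = 30.0                    # stub: a real model regresses this
    return det

def predict_keypoints(det: Detection) -> Detection:
    det.keypoints = [(10, 20), (40, 20)]    # stub keypoint predictions
    return det

def lift_to_3d(det: Detection) -> Detection:
    det.shape = "cad_exemplar_07"           # stub 2D-3D lifting result
    return det

def detect_3d(boxes_2d):
    """Run the staged pipeline: each stage enriches the previous output."""
    dets = [Detection(box=b) for b in boxes_2d]
    for stage in (estimate_viewpoint, predict_keypoints, lift_to_3d):
        dets = [stage(d) for d in dets]
    return dets
```

The point of the structure is that the 2D detector's output is never discarded: viewpoint, keypoints, and shape are strictly additive enrichments.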
A Causal And-Or Graph Model for Visibility Fluent Reasoning in Tracking Interacting Objects
Tracking humans that are interacting with other subjects or the environment
remains unsolved in visual tracking, because the visibility of the humans of
interest in videos is unknown and may vary over time. In particular, it is
still difficult for state-of-the-art human trackers to recover complete human
trajectories in crowded scenes with frequent human interactions. In this work,
we consider the visibility status of a subject as a fluent variable, whose
change is mostly attributed to the subject's interaction with its
surroundings, e.g., crossing behind another object, entering a building, or
getting into a vehicle. We introduce a Causal And-Or Graph (C-AOG) to represent the
causal-effect relations between an object's visibility fluent and its
activities, and develop a probabilistic graph model to jointly reason the
visibility fluent change (e.g., from visible to invisible) and track humans in
videos. We formulate this joint task as an iterative search for a feasible
causal graph structure that admits fast search algorithms, e.g., dynamic
programming. We apply the proposed method to challenging video sequences
to evaluate its capabilities of estimating visibility fluent changes of
subjects and tracking subjects of interest over time. Results with comparisons
demonstrate that our method outperforms alternative trackers and can
recover complete trajectories of humans in complicated scenarios with frequent
human interactions.
Comment: accepted by CVPR 201
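The dynamic-programming search mentioned in the abstract can be illustrated, in much simplified form, as a Viterbi-style pass over a per-frame visibility state. The actual C-AOG inference is richer (it searches over graph structures, not just state sequences); the state names and costs below are illustrative assumptions:

```python
STATES = ("visible", "occluded", "contained")   # hypothetical fluent values

def best_fluent_path(obs_cost, trans_cost):
    """Minimum-cost sequence of visibility states over T frames.
    obs_cost[t][s]  : cost of state s at frame t (e.g., -log detector score)
    trans_cost[p][s]: cost of switching fluent p -> s between frames."""
    T = len(obs_cost)
    dp = [{s: obs_cost[0][s] for s in STATES}]   # cheapest cost ending in s
    back = [{}]                                  # backpointers for recovery
    for t in range(1, T):
        dp.append({})
        back.append({})
        for s in STATES:
            prev = min(STATES, key=lambda p: dp[t - 1][p] + trans_cost[p][s])
            dp[t][s] = dp[t - 1][prev] + trans_cost[prev][s] + obs_cost[t][s]
            back[t][s] = prev
    # backtrack from the cheapest final state
    s = min(STATES, key=lambda x: dp[T - 1][x])
    path = [s]
    for t in range(T - 1, 0, -1):
        s = back[t][s]
        path.append(s)
    return path[::-1]
```

Because the transition cost penalizes fluent changes, the recovered path switches state only when the per-frame evidence clearly supports it, which is what lets a tracker bridge occlusions instead of terminating the trajectory.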
Multi-View Priors for Learning Detectors from Sparse Viewpoint Data
While the majority of today's object class models provide only 2D bounding
boxes, far richer output hypotheses are desirable, including viewpoint,
fine-grained category, and 3D geometry estimates. However, models trained to
provide richer output require larger amounts of training data, preferably well
covering the relevant aspects such as viewpoint and fine-grained categories. In
this paper, we address this issue from the perspective of transfer learning,
and design an object class model that explicitly leverages correlations between
visual features. Specifically, our model represents prior distributions over
permissible multi-view detectors in a parametric way -- the priors are learned
once from training data of a source object class, and can later be used to
facilitate the learning of a detector for a target class. As we show in our
experiments, this transfer is not only beneficial for detectors based on
basic-level category representations, but also enables the robust learning of
detectors that represent classes at finer levels of granularity, where training
data is typically even scarcer and more unbalanced. As a result, we report
substantially improved performance in simultaneous 2D object localization and
viewpoint estimation on a recent dataset of challenging street scenes.
Comment: 13 pages, 7 figures, 4 tables, International Conference on Learning Representations 201
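The idea of a prior learned from a source class and reused for a target class can be shown in a deliberately minimal form: shrink the target detector's weights toward the source-class weights rather than toward zero. The paper's priors over multi-view detectors are parametric and far richer; this squared-loss analogue only demonstrates the transfer mechanism:

```python
import numpy as np

def fit_with_prior(X, y, w_src, lam=1.0):
    """Least-squares detector weights regularized toward a source-class
    prior w_src instead of toward the origin:
        min_w ||X w - y||^2 + lam * ||w - w_src||^2
    which has the closed form
        w = (X^T X + lam I)^{-1} (X^T y + lam w_src)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * w_src)
```

With little target data, a large `lam` keeps the detector close to the source class; as target data grows, the data term dominates and the prior's influence fades, which is the behavior one wants for fine-grained categories with scarce examples.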
Towards Scene Understanding with Detailed 3D Object Representations
Current approaches to semantic image and scene understanding typically employ
rather simple object representations such as 2D or 3D bounding boxes. While
such coarse models are robust and allow for reliable object detection, they
discard much of the information about objects' 3D shape and pose, and thus do
not lend themselves well to higher-level reasoning. Here, we propose to base
scene understanding on a high-resolution object representation. An object class
- in our case cars - is modeled as a deformable 3D wireframe, which enables
fine-grained modeling at the level of individual vertices and faces. We augment
that model to explicitly include vertex-level occlusion, and embed all
instances in a common coordinate frame, in order to infer and exploit
object-object interactions. Specifically, from a single view we jointly
estimate the shapes and poses of multiple objects in a common 3D frame. A
ground plane in that frame is estimated by consensus among different objects,
which significantly stabilizes monocular 3D pose estimation. The fine-grained
model, in conjunction with the explicit 3D scene model, further allows one to
infer part-level occlusions between the modeled objects, as well as occlusions
by other, unmodeled scene elements. To demonstrate the benefits of such
detailed object class models in the context of scene understanding we
systematically evaluate our approach on the challenging KITTI street scene
dataset. The experiments show that the model's ability to utilize image
evidence at the level of individual parts improves monocular 3D pose estimation
w.r.t. both location and (continuous) viewpoint.
Comment: International Journal of Computer Vision (appeared online on 4 November 2014). Online version: http://link.springer.com/article/10.1007/s11263-014-0780-
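The "ground plane by consensus" step can be sketched under a simplifying assumption that the plane is horizontal in the common frame and parametrized only by its height (the paper estimates a full plane from multiple objects; the voting scheme below is an illustrative simplification):

```python
import statistics

def ground_by_consensus(lowest_vertex_heights):
    """Each reconstructed object votes with the height (in the common 3D
    frame) of its lowest wireframe vertex; the median is a robust consensus
    estimate of the ground plane, tolerating a few badly posed objects."""
    return statistics.median(lowest_vertex_heights)

def snap_to_ground(lowest_vertex_heights):
    """Vertical corrections that place each object's lowest vertex on the
    consensus ground plane."""
    g = ground_by_consensus(lowest_vertex_heights)
    return g, [g - h for h in lowest_vertex_heights]
```

Using the median rather than the mean is what makes the consensus stabilizing: one object with a badly estimated monocular pose shifts the plane very little, and the correction applied to that object then pulls its pose back toward a plausible height.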