Deformable Part Models are Convolutional Neural Networks
Deformable part models (DPMs) and convolutional neural networks (CNNs) are
two widely used tools for visual recognition. They are typically viewed as
distinct approaches: DPMs are graphical models (Markov random fields), while
CNNs are "black-box" non-linear classifiers. In this paper, we show that a DPM
can be formulated as a CNN, thus providing a novel synthesis of the two ideas.
Our construction involves unrolling the DPM inference algorithm and mapping
each step to an equivalent (and at times novel) CNN layer. From this
perspective, it becomes natural to replace the standard image features used in
DPM with a learned feature extractor. We call the resulting model DeepPyramid
DPM and experimentally validate it on PASCAL VOC. DeepPyramid DPM significantly
outperforms DPMs based on histograms of oriented gradients features (HOG) and
slightly outperforms a comparable version of the recently introduced R-CNN
detection system, while running an order of magnitude faster.
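The key construction in this abstract is mapping each step of DPM inference to a CNN layer. One such step, placing parts by trading off appearance score against displacement, resembles a max-pooling operation with a quadratic deformation penalty. The following is a minimal illustrative sketch (1-D, with made-up parameter values; the function name and its arguments are assumptions, not the paper's API):

```python
import numpy as np

def distance_transform_pool(scores, defcost=0.1, radius=2):
    """Sketch of a 'distance-transform pooling' layer in 1-D:
    each output takes the max over nearby part scores minus a
    quadratic deformation penalty, which is the DPM part-placement
    step rephrased as a max-pooling-like CNN operation.
    All parameter values here are illustrative."""
    n = len(scores)
    out = np.empty(n)
    for i in range(n):
        best = -np.inf
        for d in range(-radius, radius + 1):  # candidate displacements
            j = i + d
            if 0 <= j < n:
                # appearance score at the shifted location,
                # penalized quadratically by the displacement
                best = max(best, scores[j] - defcost * d * d)
        out[i] = best
    return out
```

A real implementation would operate on 2-D score maps and learn the deformation costs jointly with the filters, but the max-over-shifted-scores structure is the same.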
Fusion of Head and Full-Body Detectors for Multi-Object Tracking
In order to track all persons in a scene, the tracking-by-detection paradigm
has proven to be a very effective approach. Yet, relying solely on a single
detector is also a major limitation, as useful image information might be
ignored. Consequently, this work demonstrates how to fuse two detectors into a
tracking system. To obtain the trajectories, we propose to formulate tracking
as a weighted graph labeling problem, resulting in a binary quadratic program.
As such problems are NP-hard, the solution can only be approximated. Based on
the Frank-Wolfe algorithm, we present a new solver that is crucial to handle
such difficult problems. Evaluation on pedestrian tracking is provided for
multiple scenarios, showing superior results over single detector tracking and
standard QP-solvers. Finally, our tracker ranks 2nd on the MOT16 benchmark and
1st on the new MOT17 benchmark, outperforming over 90 trackers.
Comment: 10 pages, 4 figures; Winner of the MOT17 challenge; CVPRW 201
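The abstract's solver is based on the Frank-Wolfe (conditional gradient) algorithm, which minimizes a smooth objective over a convex set using only a linear minimization oracle, so each iterate stays feasible without projections. A minimal sketch on a toy quadratic over the probability simplex (a stand-in domain; the paper's actual solver targets a relaxation of a binary quadratic program over graph labelings):

```python
import numpy as np

def frank_wolfe_qp(Q, c, n_iters=100):
    """Minimize f(x) = 0.5 * x^T Q x + c^T x over the probability simplex
    with the Frank-Wolfe algorithm. Illustrative only: the simplex domain,
    step-size rule, and iteration count are assumptions for this sketch."""
    n = len(c)
    x = np.ones(n) / n                      # start at the simplex center
    for t in range(n_iters):
        grad = Q @ x + c                    # gradient of the quadratic
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0            # oracle: best simplex vertex
        gamma = 2.0 / (t + 2.0)             # standard diminishing step size
        x = (1 - gamma) * x + gamma * s     # convex combination stays feasible
    return x
```

Because iterates are convex combinations of vertices, the method is well suited to the structured, combinatorial feasible sets that arise when relaxing NP-hard labeling problems.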