FollowMe: Efficient Online Min-Cost Flow Tracking with Bounded Memory and Computation
One of the most popular approaches to multi-target tracking is
tracking-by-detection. Current min-cost flow algorithms, which solve the data
association problem optimally, have three main drawbacks: they are
computationally expensive, they assume that the whole video is given as a
batch, and they scale badly in memory and computation with the length of the
video sequence. In this paper, we address each of these issues, resulting in a
computationally and memory-bounded solution. First, we introduce a dynamic
version of the successive shortest-path algorithm which solves the data
association problem optimally while reusing computation, resulting in
significantly faster inference than standard solvers. Second, we address the
optimal solution to the data association problem when dealing with an incoming
stream of data (i.e., the online setting). Finally, we present our main
contribution: an approximate online solution with bounded memory and
computation that is capable of handling videos of arbitrary length while
performing tracking in real time. We demonstrate the effectiveness of our
algorithms on the KITTI and PETS2009 benchmarks and show state-of-the-art
performance, while being significantly faster than existing solvers.
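As a point of reference for the formulation above, the following toy sketch (assuming networkx as the solver; all detection confidences, link costs, and entry/exit costs are made up) builds the standard min-cost flow tracking graph and solves data association as a static batch. This is exactly the batch setting whose cost grows with sequence length; the paper's dynamic successive-shortest-path and bounded-memory algorithms are designed to avoid re-solving it from scratch and are not reproduced here.

```python
# Toy sketch of tracking-by-detection as a min-cost flow problem, solved as a
# static batch with networkx (an assumed dependency). Costs are illustrative.
import networkx as nx

def build_tracking_graph(detections, transitions, entry_cost=30, exit_cost=30):
    """detections: {det_id: confidence}; transitions: {(i, j): link cost}."""
    G = nx.DiGraph()
    for det, conf in detections.items():
        u, v = f"{det}_in", f"{det}_out"
        # Unit-capacity observation edge; negative cost rewards explaining a detection.
        G.add_edge(u, v, capacity=1, weight=-int(100 * conf))
        G.add_edge("S", u, capacity=1, weight=entry_cost)  # track birth
        G.add_edge(v, "T", capacity=1, weight=exit_cost)   # track death
    for (i, j), cost in transitions.items():
        G.add_edge(f"{i}_out", f"{j}_in", capacity=1, weight=cost)  # association
    return G

# Two frames with two detections each and candidate links between them.
dets = {"f0_a": 0.9, "f0_b": 0.8, "f1_a": 0.85, "f1_b": 0.7}
links = {("f0_a", "f1_a"): 1, ("f0_a", "f1_b"): 5,
         ("f0_b", "f1_a"): 5, ("f0_b", "f1_b"): 2}
G = build_tracking_graph(dets, links)

# Sweep the number of tracks k and keep the cheapest feasible flow.
best = None
for k in range(1, len(dets) + 1):
    G.nodes["S"]["demand"], G.nodes["T"]["demand"] = -k, k
    try:
        flow = nx.min_cost_flow(G)
    except nx.NetworkXUnfeasible:
        break
    cost = nx.cost_of_flow(G, flow)
    if best is None or cost < best[0]:
        best = (cost, k, flow)
print("tracks:", best[1], "cost:", best[0])
```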
PIXOR: Real-time 3D Object Detection from Point Clouds
We address the problem of real-time 3D object detection from point clouds in
the context of autonomous driving. Computation speed is critical as detection
is a necessary component for safety. Existing approaches are, however,
computationally expensive due to the high dimensionality of point clouds. We utilize
the 3D data more efficiently by representing the scene from the Bird's Eye View
(BEV), and propose PIXOR, a proposal-free, single-stage detector that outputs
oriented 3D object estimates decoded from pixel-wise neural network
predictions. The input representation, network architecture, and model
optimization are specially designed to balance high accuracy and real-time
efficiency. We validate PIXOR on two datasets: the KITTI BEV object detection
benchmark, and a large-scale 3D vehicle detection benchmark. On both datasets
we show that the proposed detector surpasses other state-of-the-art methods
notably in terms of Average Precision (AP), while still running at >28 FPS.
Comment: Update of CVPR 2018 paper: correct timing, fix typos, add acknowledgements.
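To make the input representation concrete, here is a minimal sketch of rasterizing a LiDAR point cloud into a bird's-eye-view occupancy/reflectance grid of the kind a single-stage BEV detector consumes. The grid extents, resolution, and channel layout below are illustrative assumptions, not PIXOR's exact specification, and the network and box-decoding stages are omitted.

```python
# Sketch of BEV rasterization: points (x, y, z, reflectance) -> dense grid.
import numpy as np

def rasterize_bev(points, x_range=(0.0, 70.0), y_range=(-40.0, 40.0),
                  z_range=(-2.5, 1.0), resolution=0.1, z_slices=35):
    """points: (N, 4) array of (x, y, z, reflectance) in the ego frame."""
    H = int(round((y_range[1] - y_range[0]) / resolution))
    W = int(round((x_range[1] - x_range[0]) / resolution))
    # One binary occupancy channel per height slice plus one reflectance channel.
    grid = np.zeros((z_slices + 1, H, W), dtype=np.float32)

    x, y, z, r = points.T
    keep = ((x >= x_range[0]) & (x < x_range[1]) &
            (y >= y_range[0]) & (y < y_range[1]) &
            (z >= z_range[0]) & (z < z_range[1]))
    x, y, z, r = x[keep], y[keep], z[keep], r[keep]

    col = np.clip(((x - x_range[0]) / resolution).astype(int), 0, W - 1)
    row = np.clip(((y - y_range[0]) / resolution).astype(int), 0, H - 1)
    zi = ((z - z_range[0]) / (z_range[1] - z_range[0]) * z_slices).astype(int)
    zi = np.clip(zi, 0, z_slices - 1)

    grid[zi, row, col] = 1.0   # occupancy per height slice
    grid[-1, row, col] = r     # reflectance channel (last point written wins per cell)
    return grid

# Usage with random points standing in for a LiDAR sweep.
pts = np.random.uniform([0, -40, -2.5, 0], [70, 40, 1.0, 1.0], size=(10000, 4))
bev = rasterize_bev(pts)   # shape: (36, 800, 700)
```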
DeepSignals: Predicting Intent of Drivers Through Visual Signals
Detecting the intention of drivers is an essential task in self-driving,
necessary to anticipate sudden events like lane changes and stops. Turn signals
and emergency flashers communicate such intentions, providing seconds of
potentially critical reaction time. In this paper, we propose to detect these
signals in video sequences by using a deep neural network that reasons about
both spatial and temporal information. Our experiments on more than a million
frames show high per-frame accuracy in very challenging scenarios.
Comment: To be presented at the IEEE International Conference on Robotics and Automation (ICRA), 2019.
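A minimal sketch of a network that reasons about both spatial and temporal information for this task is shown below: a small per-frame CNN followed by an LSTM over the frame sequence, implemented in PyTorch. The layer sizes and the four-way label set (none/left/right/emergency flashers) are assumptions for illustration, not the paper's architecture.

```python
# Minimal spatio-temporal classifier: per-frame CNN features -> LSTM -> logits.
import torch
import torch.nn as nn

class SignalNet(nn.Module):
    def __init__(self, num_classes=4, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                          # spatial features per frame
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)  # temporal reasoning
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips):                              # clips: (B, T, 3, H, W)
        B, T, C, H, W = clips.shape
        feats = self.cnn(clips.reshape(B * T, C, H, W)).reshape(B, T, 64)
        seq, _ = self.lstm(feats)
        return self.head(seq[:, -1])                       # per-clip class logits

logits = SignalNet()(torch.randn(2, 8, 3, 96, 96))         # -> shape (2, 4)
```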
segDeepM: Exploiting Segmentation and Context in Deep Neural Networks for Object Detection
In this paper, we propose an approach that exploits object segmentation in
order to improve the accuracy of object detection. We frame the problem as
inference in a Markov Random Field, in which each detection hypothesis scores
object appearance as well as contextual information using Convolutional Neural
Networks, and is allowed to choose and score a segment out of a
large pool of accurate object segmentation proposals. This enables the detector
to incorporate additional evidence when it is available and thus results in
more accurate detections. Our experiments show an improvement of 4.1% in mAP
over the R-CNN baseline on PASCAL VOC 2010, and 3.4% over the current
state-of-the-art, demonstrating the power of our approach.
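The scoring idea can be illustrated with a toy sketch: each detection hypothesis combines an appearance term, a context term, and the best-supporting segment chosen from a proposal pool. The IoU-based segment potential, the hand-set weights, and the function names below are stand-ins for the learned CNN potentials and MRF inference in the actual model.

```python
# Toy scoring of a detection hypothesis with an optional supporting segment.
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter + 1e-9)

def score_hypothesis(det, segments, w_app=1.0, w_ctx=0.5, w_seg=0.5):
    """det: dict with 'box', 'appearance', 'context'; segments: list of dicts
    with 'box' (tight box around the segment) and 'score'."""
    # Each hypothesis picks the segment that best supports it, or none at all.
    seg_support = max(
        (w_seg * s["score"] * iou(det["box"], s["box"]) for s in segments),
        default=0.0)
    return w_app * det["appearance"] + w_ctx * det["context"] + seg_support

det = {"box": (10, 10, 110, 90), "appearance": 2.1, "context": 0.4}
pool = [{"box": (12, 14, 105, 88), "score": 0.9},
        {"box": (200, 40, 260, 120), "score": 0.8}]
print(score_hypothesis(det, pool))
```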
Annotating Object Instances with a Polygon-RNN
We propose an approach for semi-automatic annotation of object instances.
While most current methods treat object segmentation as a pixel-labeling
problem, we here cast it as a polygon prediction task, mimicking how most
current datasets have been annotated. In particular, our approach takes as
input an image crop and sequentially produces vertices of the polygon outlining
the object. This allows a human annotator to intervene at any time and correct
a vertex if needed, producing a segmentation as accurate as desired by the
annotator. We show that our approach speeds up the annotation process by a
factor of 4.7 across all classes in Cityscapes, while achieving 78.4% agreement
in IoU with the original ground truth, matching the typical agreement between human
annotators. For cars, our speed-up factor is 7.3 for an agreement of 82.2%. We
further show the generalization capabilities of our approach to unseen datasets.
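The interactive annotation loop can be sketched as follows: the model proposes the outline one vertex at a time, and the annotator may override any vertex, with the correction fed back as context for subsequent predictions. The predict_next_vertex and get_correction callables below are hypothetical stand-ins, not the paper's RNN decoder or annotation tool.

```python
# Sketch of sequential polygon annotation with optional human corrections.
from typing import Callable, List, Optional, Tuple

Vertex = Tuple[int, int]

def annotate_polygon(crop,
                     predict_next_vertex: Callable[[object, List[Vertex]], Optional[Vertex]],
                     get_correction: Callable[[Vertex], Optional[Vertex]],
                     max_vertices: int = 60) -> List[Vertex]:
    polygon: List[Vertex] = []
    while len(polygon) < max_vertices:
        v = predict_next_vertex(crop, polygon)   # conditioned on vertices so far
        if v is None:                            # model signals the polygon is closed
            break
        fixed = get_correction(v)                # annotator may intervene at any time
        polygon.append(fixed if fixed is not None else v)
    return polygon

# Usage with trivial stand-ins: a fixed square outline and no corrections.
square = iter([(0, 0), (0, 10), (10, 10), (10, 0), None])
poly = annotate_polygon(crop=None,
                        predict_next_vertex=lambda crop, prev: next(square),
                        get_correction=lambda v: None)
print(poly)   # [(0, 0), (0, 10), (10, 10), (10, 0)]
```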