Reduced Memory Region Based Deep Convolutional Neural Network Detection
Accurate pedestrian detection plays a primary role in automotive safety: for
example, by issuing warnings to the driver or acting directly on the car's
brakes, it helps decrease the probability of injuries and human fatalities. In order
to achieve very high accuracy, recent pedestrian detectors have been based on
Convolutional Neural Networks (CNN). Unfortunately, such approaches require
vast amounts of computational power and memory, preventing efficient
implementations on embedded systems. This work proposes a CNN-based detector,
adapting a general-purpose convolutional network to the task at hand. By
thoroughly analyzing and optimizing each step of the detection pipeline, we
develop an architecture that outperforms methods based on traditional image
features and achieves accuracy close to the state of the art while keeping
computational complexity low. Furthermore, the model is compressed to fit
the tight constraints of low-power devices with limited embedded memory.
This paper makes two main contributions: (1) it proves that a region-based
deep neural network can be finely tuned to achieve adequate accuracy for
pedestrian detection, and (2) it achieves very low memory usage without
reducing detection accuracy on the Caltech Pedestrian dataset.
Comment: IEEE 2016 ICCE-Berlin
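As a rough illustration of the kind of compression that lets a CNN fit an
embedded memory budget, the sketch below applies post-training 8-bit weight
quantization in Python with NumPy. The abstract does not specify the paper's
compression scheme, so this symmetric linear quantizer is an illustrative
assumption, not the authors' method.

    import numpy as np

    def quantize_int8(w):
        """Map float32 weights to int8 codes plus a per-tensor scale."""
        scale = float(np.abs(w).max()) / 127.0  # symmetric range [-127, 127]
        q = np.round(w / scale).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    # A conv layer's weights shrink 4x (float32 -> int8) at a small error cost.
    w = np.random.randn(64, 3, 3, 3).astype(np.float32)
    q, s = quantize_int8(w)
    print(w.nbytes, "->", q.nbytes, "bytes; max abs error:",
          np.abs(w - dequantize(q, s)).max())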
Pedestrian Trajectory Prediction with Structured Memory Hierarchies
This paper presents a novel framework for human trajectory prediction based
on multimodal data (video and radar). Motivated by recent neuroscience
discoveries, we propose incorporating a structured memory component in the
human trajectory prediction pipeline to capture historical information to
improve performance. We introduce structured LSTM cells for modelling the
memory content hierarchically, preserving the spatiotemporal structure of the
information and enabling us to capture both short-term and long-term context.
We demonstrate how this architecture can be extended to integrate salient
information from multiple modalities to automatically store and retrieve
important information for decision making without any supervision. We evaluate
the effectiveness of the proposed models on a novel multimodal dataset that we
introduce, consisting of 40,000 pedestrian trajectories, acquired jointly from
a radar system and a CCTV camera system installed in a public place. The
performance is also evaluated on the publicly available New York Grand Central
pedestrian database. In both settings, the proposed models demonstrate their
ability to anticipate future pedestrian motion better than the existing
state of the art.
Comment: To appear in ECML-PKDD 2018
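To make the memory idea concrete, here is a toy Python sketch of a two-level
store that keeps a short-term buffer of recent encoder states and folds
evicted states into a long-term summary, with the decoder reading both. The
buffer length, mean-pooled read, and running-average write are illustrative
assumptions; the paper's structured LSTM cells organize memory far more
richly than this.

    import numpy as np

    class TwoLevelMemory:
        """Toy stand-in for a hierarchical trajectory memory."""
        def __init__(self, dim, short_len=8):
            self.short = []                # FIFO of recent hidden states
            self.short_len = short_len
            self.long = np.zeros(dim)      # running long-term summary
            self.count = 0

        def write(self, h):
            self.short.append(h)
            if len(self.short) > self.short_len:
                evicted = self.short.pop(0)
                self.count += 1
                # Fold evicted states into a running average (long-term store).
                self.long += (evicted - self.long) / self.count

        def read(self):
            short_ctx = np.mean(self.short, axis=0)
            return np.concatenate([short_ctx, self.long])

    mem = TwoLevelMemory(dim=32)
    for _ in range(20):
        mem.write(np.random.randn(32))     # stand-in for an LSTM hidden state
    context = mem.read()                   # (64,) vector fed to the decoder
    print(context.shape)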
The Cityscapes Dataset for Semantic Urban Scene Understanding
Visual understanding of complex urban street scenes is an enabling factor for
a wide range of applications. Object detection has benefited enormously from
large-scale datasets, especially in the context of deep learning. For semantic
urban scene understanding, however, no current dataset adequately captures the
complexity of real-world urban scenes.
To address this, we introduce Cityscapes, a benchmark suite and large-scale
dataset to train and test approaches for pixel-level and instance-level
semantic labeling. Cityscapes comprises a large, diverse set of stereo
video sequences recorded in the streets of 50 different cities. 5000 of these
images have high-quality pixel-level annotations; 20000 additional images have
coarse annotations to enable methods that leverage large volumes of
weakly-labeled data. Crucially, our effort exceeds previous attempts in terms
of dataset size, annotation richness, scene variability, and complexity. Our
accompanying empirical study provides an in-depth analysis of the dataset
characteristics, as well as a performance evaluation of several
state-of-the-art approaches based on our benchmark.
Comment: Includes supplemental material
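For readers who want to work with the fine annotations, the sketch below
pairs each left camera image with its label map, assuming the standard
public download layout (mirrored leftImg8bit/ and gtFine/ directory trees);
the root path is a placeholder.

    from pathlib import Path

    def fine_pairs(root, split="train"):
        """Yield (image, labelIds) path pairs for one Cityscapes split."""
        img_root = Path(root) / "leftImg8bit" / split
        for img in sorted(img_root.rglob("*_leftImg8bit.png")):
            # gtFine mirrors the leftImg8bit tree; swap tree and suffix.
            label = Path(str(img).replace("leftImg8bit", "gtFine")
                                 .replace("_gtFine.png", "_gtFine_labelIds.png"))
            if label.exists():
                yield img, label

    # Placeholder path; point this at an extracted Cityscapes download.
    for img, label in fine_pairs("/data/cityscapes"):
        print(img.name, "->", label.name)
        break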