Procedural Modeling and Physically Based Rendering for Synthetic Data Generation in Automotive Applications
We present an overview and evaluation of a new, systematic approach for
generating highly realistic, annotated synthetic data for training deep
neural networks in computer vision tasks. The main contribution is a procedural
world modeling approach that enables high variability coupled with physically
accurate image synthesis, a departure from the hand-modeled virtual
worlds and approximate image synthesis methods used in real-time applications.
The benefits of our approach include flexible, physically accurate and scalable
image synthesis, implicit wide coverage of classes and features, and complete
data introspection for annotations, which all contribute to quality and cost
efficiency. To evaluate our approach and the efficacy of the resulting data, we
use semantic segmentation for autonomous vehicles and robotic navigation as the
main application, and we train multiple deep learning architectures using
synthetic data with and without fine-tuning on organic (i.e. real-world) data.
The evaluation shows that our approach improves the neural networks'
performance and that even modest implementation efforts produce
state-of-the-art results.

Comment: The project web page at
http://vcl.itn.liu.se/publications/2017/TKWU17/ contains a version of the
paper with high-resolution images as well as additional material.
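The "complete data introspection for annotations" mentioned above means the renderer knows, for every pixel, which object produced it, so pixel-accurate labels come for free. A minimal sketch of that idea, assuming a hypothetical per-pixel object-ID buffer and an illustrative ID-to-class table (neither taken from the paper):

```python
# Hypothetical sketch: a procedural renderer can emit a per-pixel object-ID
# buffer alongside the image; mapping IDs to semantic classes then yields a
# ground-truth mask with no manual annotation. IDs and classes are illustrative.
ID_TO_CLASS = {7: "road", 11: "building", 26: "car"}

def ids_to_semantic_mask(id_buffer):
    """Map each pixel's object ID to a semantic class ("void" if unknown)."""
    return [[ID_TO_CLASS.get(pix, "void") for pix in row] for row in id_buffer]

# A toy 2x3 ID buffer standing in for a rendered frame.
id_buffer = [[7, 7, 26],
             [7, 11, 26]]
mask = ids_to_semantic_mask(id_buffer)
```

Because the labels are derived from the scene description itself, they stay exact at any rendering resolution, which is what makes the approach scalable.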
Did You Miss the Sign? A False Negative Alarm System for Traffic Sign Detectors
Object detection is an integral part of an autonomous vehicle for its
safety-critical and navigational purposes. Traffic signs as objects play a
vital role in guiding such systems. However, if the vehicle fails to locate a
critical sign, the consequences can be catastrophic. In this paper, we propose
an approach to identify traffic signs that have been mistakenly discarded by
the object detector. The proposed method raises an alarm when it discovers a
failure by the object detector to detect a traffic sign. This approach can be
useful to evaluate the performance of the detector during the deployment phase.
We trained a single-shot multi-box object detector to detect traffic signs and
used its internal features to train a separate false negative detector (FND).
During deployment, FND decides whether the traffic sign detector (TSD) has
missed a sign or not. We use precision and recall to measure the accuracy
of FND on two different datasets. At 80% recall, FND achieves 89.9%
precision on the Belgium Traffic Sign Detection dataset and 90.8% precision on
the German Traffic Sign Recognition Benchmark dataset. To the best of
our knowledge, our method is the first to tackle this critical aspect of false
negative detection in robotic vision. Such a fail-safe mechanism for object
detection can improve the engagement of robotic vision systems in our daily
life.

Comment: Submitted to the 2019 IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS 2019).
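The reported figures (precision at a fixed recall) come from sweeping a decision threshold over the FND's alarm scores. A minimal sketch of that metric, using made-up scores and labels (label 1 means the detector really did miss a sign; none of the values are from the paper):

```python
def precision_recall(scores, labels, threshold):
    """Precision/recall of an alarm raised whenever score >= threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Illustrative FND alarm scores and ground-truth "detector missed it" labels.
scores = [0.9, 0.8, 0.6, 0.4, 0.2]
labels = [1, 1, 0, 1, 0]
precision, recall = precision_recall(scores, labels, threshold=0.5)
```

Lowering the threshold raises recall (fewer missed signs go unreported) at the cost of precision (more spurious alarms), which is the trade-off the 80%-recall operating point fixes.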
The Cityscapes Dataset for Semantic Urban Scene Understanding
Visual understanding of complex urban street scenes is an enabling factor for
a wide range of applications. Object detection has benefited enormously from
large-scale datasets, especially in the context of deep learning. For semantic
urban scene understanding, however, no current dataset adequately captures the
complexity of real-world urban scenes.
To address this, we introduce Cityscapes, a benchmark suite and large-scale
dataset to train and test approaches for pixel-level and instance-level
semantic labeling. Cityscapes comprises a large, diverse set of stereo
video sequences recorded in streets from 50 different cities. 5000 of these
images have high quality pixel-level annotations; 20000 additional images have
coarse annotations to enable methods that leverage large volumes of
weakly-labeled data. Crucially, our effort exceeds previous attempts in terms
of dataset size, annotation richness, scene variability, and complexity. Our
accompanying empirical study provides an in-depth analysis of the dataset
characteristics, as well as a performance evaluation of several
state-of-the-art approaches based on our benchmark.

Comment: Includes supplemental material.
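Pixel-level benchmarks of this kind are typically scored with per-class intersection-over-union (IoU). A minimal sketch over flattened label maps, with toy data rather than actual Cityscapes labels:

```python
def class_iou(pred, gt, cls):
    """Intersection-over-union for one class over flattened label maps."""
    inter = sum(1 for p, g in zip(pred, gt) if p == cls and g == cls)
    union = sum(1 for p, g in zip(pred, gt) if p == cls or g == cls)
    return inter / union if union else float("nan")

# Toy flattened prediction and ground truth (0 and 1 are illustrative classes).
pred = [0, 0, 1, 1]
gt   = [0, 1, 1, 1]
iou_1 = class_iou(pred, gt, 1)
iou_0 = class_iou(pred, gt, 0)
```

Averaging the per-class IoUs gives a mean score that is not dominated by frequent classes such as road, which matters for the imbalanced class distributions of urban scenes.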
Training of Convolutional Networks on Multiple Heterogeneous Datasets for Street Scene Semantic Segmentation
We propose a convolutional network with hierarchical classifiers for
per-pixel semantic segmentation, which can be trained on multiple
heterogeneous datasets and exploit their semantic hierarchy. Our network is the
first to be simultaneously trained on three different datasets from the
intelligent vehicles domain, i.e. Cityscapes, GTSDB and Mapillary Vistas, and
is able to handle different semantic levels of detail, class imbalances, and
different annotation types, i.e. dense per-pixel and sparse bounding-box
labels. We assess our hierarchical approach by comparing it against flat,
non-hierarchical classifiers and show mean pixel accuracy improvements of
13.0% for Cityscapes classes, 2.4% for Vistas classes, and 32.3% for GTSDB
classes. Our implementation achieves inference rates of 17 fps at a resolution
of 520x706 for 108 classes running on a GPU.

Comment: IEEE Intelligent Vehicles 201
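A hierarchical per-pixel classifier can be pictured as a root classifier over superclasses whose probabilities gate per-superclass subclassifiers, with the joint probability of a leaf being the product along its path. A minimal single-pixel sketch with illustrative classes and probabilities (not the paper's actual hierarchy):

```python
# Hypothetical two-level hierarchy for one pixel: a root classifier picks a
# superclass, a subclassifier refines it; joint prob = root * conditional.
root_probs = {"traffic-sign": 0.7, "vehicle": 0.3}
sub_probs = {"traffic-sign": {"speed-limit": 0.6, "stop": 0.4},
             "vehicle": {"car": 0.9, "truck": 0.1}}

def hierarchical_label(root_probs, sub_probs):
    """Return the most likely leaf class and the full joint distribution."""
    joint = {leaf: root_probs[sup] * p
             for sup, leaves in sub_probs.items()
             for leaf, p in leaves.items()}
    return max(joint, key=joint.get), joint

label, joint = hierarchical_label(root_probs, sub_probs)
```

Such a factorization lets a dataset that only labels the superclass (e.g. bounding boxes for signs) supervise the root level while denser datasets supervise the leaves, which is one way heterogeneous annotations can be combined.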