11,822 research outputs found
Fast Automatic Vehicle Annotation for Urban Traffic Surveillance
Automatic vehicle detection and annotation for streaming video data with complex scenes is an interesting but challenging task for intelligent transportation systems. In this paper, we present a fast algorithm: detection and annotation for vehicles (DAVE), which effectively combines vehicle detection and attributes annotation into a unified framework. DAVE consists of two convolutional neural networks: a shallow fully convolutional fast vehicle proposal network (FVPN) for extracting all vehicles' positions, and a deep attributes learning network (ALN), which aims to verify each detection candidate and infer each vehicle's pose, color, and type information simultaneously. These two nets are jointly optimized so that abundant latent knowledge learned from the deep empirical ALN can be exploited to guide training the much simpler FVPN. Once the system is trained, DAVE can achieve efficient vehicle detection and attributes annotation for real-world traffic surveillance data, while the FVPN can be independently adopted as a real-time high-performance vehicle detector as well. We evaluate DAVE on a new self-collected urban traffic surveillance dataset and the public PASCAL VOC2007 car and LISA 2010 datasets, with consistent improvements over existing algorithms.
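A proposal network like the FVPN typically emits many overlapping detection candidates that must be pruned before attribute annotation. The abstract does not describe this pruning step; the sketch below shows a generic greedy non-maximum suppression over intersection-over-union (IoU) in plain Python, with boxes as hypothetical `(x1, y1, x2, y2)` tuples. It is a standard candidate-filtering stage, not the paper's actual method.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: keep boxes in descending score order, dropping any
    candidate that overlaps an already-kept box by more than thresh."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep
```

For example, two near-duplicate boxes `(0, 0, 10, 10)` and `(1, 1, 11, 11)` overlap with IoU 81/119 ≈ 0.68, so only the higher-scoring one survives at the default threshold.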
Impact of Ground Truth Annotation Quality on Performance of Semantic Image Segmentation of Traffic Conditions
Preparation of high-quality datasets for urban scene understanding is a labor-intensive task, especially for datasets designed for autonomous driving applications. Using coarse ground truth (GT) annotations of these datasets without detriment to the accuracy of semantic image segmentation (measured by mean intersection over union, mIoU) could simplify and speed up dataset preparation and model fine-tuning before practical application. Here, a comparative analysis of semantic segmentation accuracy obtained with the PSPNet deep learning architecture is presented for fine and coarse annotated images from the Cityscapes dataset. Two scenarios were investigated: scenario 1, fine GT images for training and prediction; and scenario 2, fine GT images for training and coarse GT images for prediction. The results demonstrate that for the most important classes the mean accuracy of semantic image segmentation is higher for coarse GT annotations than for fine ones, while the standard deviation values show the opposite. This means that for some applications unimportant classes can be excluded and the model can be tuned further for selected classes and specific regions on the coarse GT dataset without loss of accuracy. Moreover, this opens the prospect of using deep neural networks to prepare such coarse GT datasets.
Comment: 10 pages, 6 figures, 2 tables, The Second International Conference on Computer Science, Engineering and Education Applications (ICCSEEA2019), 26-27 January 2019, Kiev, Ukraine
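The mIoU metric used above is computed from a per-pixel confusion matrix: for each class, IoU is true positives over the union of predicted and ground-truth pixels, and mIoU averages this over classes. A minimal plain-Python sketch (labels flattened into lists; this is an illustration of the metric, not the authors' evaluation code):

```python
def confusion(gt, pred, num_classes):
    """Per-pixel confusion matrix: rows = ground truth, cols = prediction."""
    m = [[0] * num_classes for _ in range(num_classes)]
    for g, p in zip(gt, pred):
        m[g][p] += 1
    return m

def per_class_iou(m):
    """IoU_c = TP_c / (TP_c + FP_c + FN_c) for each class c."""
    n = len(m)
    ious = []
    for c in range(n):
        tp = m[c][c]
        fn = sum(m[c]) - tp                      # class-c pixels predicted as something else
        fp = sum(m[r][c] for r in range(n)) - tp  # other pixels predicted as class c
        denom = tp + fp + fn
        ious.append(tp / denom if denom else 0.0)
    return ious

def miou(gt, pred, num_classes):
    """Mean IoU over all classes."""
    ious = per_class_iou(confusion(gt, pred, num_classes))
    return sum(ious) / len(ious)
```

For instance, with ground truth `[0, 0, 1, 1]` and prediction `[0, 1, 1, 1]`, class 0 scores IoU 1/2, class 1 scores 2/3, and mIoU is 7/12.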
Drive Video Analysis for the Detection of Traffic Near-Miss Incidents
Because of their recent introduction, self-driving cars and advanced driver assistance system (ADAS) equipped vehicles have had little opportunity to learn the dangerous traffic (including near-miss incident) scenarios that provide normal drivers with strong motivation to drive safely. Accordingly, as a means of providing learning depth, this paper presents a novel traffic database that contains information on a large number of traffic near-miss incidents obtained by mounting driving recorders in more than 100 taxis over the course of a decade. The study makes the following two main contributions: (i) in order to assist automated systems in detecting near-miss incidents based on database instances, we created a large-scale traffic near-miss incident database (NIDB) that consists of video clips of dangerous events captured by monocular driving recorders; (ii) to illustrate the applicability of NIDB traffic near-miss incidents, we provide two primary database-related improvements: parameter fine-tuning using various near-miss scenes from NIDB, and foreground/background separation into motion representation. Then, using our new database in conjunction with a monocular driving recorder, we developed a near-miss recognition method that provides automated systems with a performance level comparable to a human-level understanding of near-miss incidents (64.5% vs. 68.4% at near-miss recognition, 61.3% vs. 78.7% at near-miss detection).
Comment: Accepted to ICRA 201
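The "foreground/background separation into motion representation" idea can be approximated, in its simplest form, by frame differencing: pixels whose intensity changes between consecutive frames are flagged as moving, and the mask can be summarized as a crude motion feature. The toy sketch below works on nested lists of grayscale intensities; the `thresh` parameter is an assumed tuning value, and none of this is the paper's actual pipeline.

```python
def frame_difference(prev, curr, thresh):
    """Binary motion mask: 1 where absolute intensity change exceeds thresh."""
    return [[1 if abs(c - p) > thresh else 0 for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

def motion_ratio(mask):
    """Fraction of pixels flagged as moving: a crude per-frame motion feature."""
    total = sum(len(row) for row in mask)
    return sum(map(sum, mask)) / total
```

Real systems use far more robust separation (e.g. learned background models or optical flow), but the principle of turning raw frames into a motion representation is the same.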
The Cityscapes Dataset for Semantic Urban Scene Understanding
Visual understanding of complex urban street scenes is an enabling factor for
a wide range of applications. Object detection has benefited enormously from
large-scale datasets, especially in the context of deep learning. For semantic
urban scene understanding, however, no current dataset adequately captures the
complexity of real-world urban scenes.
To address this, we introduce Cityscapes, a benchmark suite and large-scale
dataset to train and test approaches for pixel-level and instance-level
semantic labeling. Cityscapes comprises a large, diverse set of stereo video sequences recorded in streets from 50 different cities. 5000 of these images have high-quality pixel-level annotations; 20000 additional images have coarse annotations to enable methods that leverage large volumes of weakly labeled data. Crucially, our effort exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity. Our accompanying empirical study provides an in-depth analysis of the dataset characteristics, as well as a performance evaluation of several state-of-the-art approaches based on our benchmark.
Comment: Includes supplemental material
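For instance-level labeling of the kind described above, the public cityscapesScripts annotation tools encode each instance pixel as `class_id * 1000 + instance_index`, while "stuff" classes without instances are stored as bare class ids. A small decoder illustrating that convention (a sketch based on the published annotation format, not code from the paper):

```python
def decode_instance_id(pixel_value):
    """Split a Cityscapes-style instance id into (class_id, instance_index).
    Values below 1000 are 'stuff' classes with no instance index."""
    if pixel_value < 1000:
        return pixel_value, None
    return pixel_value // 1000, pixel_value % 1000
```

For example, pixel value 26002 decodes to class 26 (car) and instance index 2, while a bare value such as 7 (road) carries no instance index.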