Egocentric Vision-based Future Vehicle Localization for Intelligent Driving Assistance Systems
Predicting the future location of vehicles is essential for safety-critical
applications such as advanced driver assistance systems (ADAS) and autonomous
driving. This paper introduces a novel approach to simultaneously predict both
the location and scale of target vehicles in the first-person (egocentric) view
of an ego-vehicle. We present a multi-stream recurrent neural network (RNN)
encoder-decoder model that separately captures both object location and scale
and pixel-level observations for future vehicle localization. We show that
incorporating dense optical flow improves prediction results significantly
since it captures information about motion as well as appearance change. We
also find that explicitly modeling future motion of the ego-vehicle improves
the prediction accuracy, which could be especially beneficial in intelligent
and automated vehicles that have motion planning capability. To evaluate the
performance of our approach, we present a new dataset of first-person videos
collected from a variety of scenarios at road intersections, which are
particularly challenging moments for prediction because vehicle trajectories
are diverse and dynamic.
Comment: To appear at ICRA 201
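The task described above is to forecast a target vehicle's bounding-box location and scale from past observations. As a simple point of reference (not the paper's multi-stream RNN), a constant-velocity baseline over box center and scale can be sketched as follows; the box format `(cx, cy, w, h)` and function name are illustrative assumptions:

```python
def predict_future_boxes(past_boxes, horizon):
    """Extrapolate future bounding boxes from the last two observations.

    past_boxes: list of (cx, cy, w, h) tuples (center, width, height),
                ordered oldest to newest; needs at least two entries.
    horizon:    number of future frames to predict.
    Returns a list of `horizon` predicted (cx, cy, w, h) tuples.
    """
    prev, last = past_boxes[-2], past_boxes[-1]
    # Constant-velocity assumption: per-frame change in center and scale
    # is the difference between the last two observed boxes.
    vel = tuple(b - a for a, b in zip(prev, last))
    future, cur = [], last
    for _ in range(horizon):
        cur = tuple(c + v for c, v in zip(cur, vel))
        future.append(cur)
    return future
```

A learned encoder-decoder, as in the paper, would replace the linear extrapolation with a recurrent model conditioned on appearance, optical flow, and (optionally) planned ego-motion, which is what makes it robust at intersections where this baseline fails.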
Unsupervised Traffic Accident Detection in First-Person Videos
Recognizing abnormal events such as traffic violations and accidents in
natural driving scenes is essential for successful autonomous driving and
advanced driver assistance systems. However, most work on video anomaly
detection suffers from two crucial drawbacks. First, they assume cameras are
fixed and videos have static backgrounds, which is reasonable for surveillance
applications but not for vehicle-mounted cameras. Second, they pose the problem
as one-class classification, relying on arduously hand-labeled training
datasets that limit recognition to anomaly categories that have been explicitly
trained. This paper proposes an unsupervised approach for traffic accident
detection in first-person (dashboard-mounted camera) videos. Our major novelty
is to detect anomalies by predicting the future locations of traffic
participants and then monitoring the prediction accuracy and consistency
metrics with three different strategies. We evaluate our approach using a new
dataset of diverse traffic accidents, AnAn Accident Detection (A3D), as well as
another publicly-available dataset. Experimental results show that our approach
outperforms the state-of-the-art.
Comment: Accepted to IROS 201
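The detection idea above monitors how well predicted future locations match what is then observed: large, sustained disagreement signals an anomaly. A minimal sketch of one such accuracy metric, assuming axis-aligned boxes in `(x1, y1, x2, y2)` format and a hypothetical threshold (the paper combines three monitoring strategies, not shown here):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def anomaly_score(predicted_boxes, observed_boxes):
    """Mean prediction error (1 - IoU) over matched boxes.

    Near 0 when predictions track observations (normal driving);
    near 1 when traffic participants deviate from predicted paths.
    """
    errors = [1.0 - iou(p, o) for p, o in zip(predicted_boxes, observed_boxes)]
    return sum(errors) / len(errors)

def is_anomalous(predicted_boxes, observed_boxes, threshold=0.5):
    # `threshold` is an illustrative value, not one from the paper.
    return anomaly_score(predicted_boxes, observed_boxes) > threshold
```

Because the score is computed from the predictor's own errors, no labeled accident footage is needed at training time, which is what makes the approach unsupervised.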