62,524 research outputs found
Evaluation of estimation approaches on the quality and robustness of collision warning system
Vehicle safety is one of the most challenging aspects of future-generation
autonomous and semi-autonomous vehicles. Collision warning systems (CCWs) have
been proposed as a framework to address the issues in this area. In this
framework, information plays a very important role: each vehicle has immediate
access to its own state, whereas information about other vehicles is only
available over wireless communication, for which data loss is a common issue.
As a consequence, a CCW may provide late or false detection warnings. This
paper focuses on robust estimation of lost data; its goal is to reconstruct or
estimate lost network data from previously available or estimated data, as
close to the actual values as possible, under different loss rates. We
investigate and evaluate three algorithms for this purpose: constant velocity,
constant acceleration, and a Kalman estimator. A comparison of their
performance reveals their accuracy and robustness for estimation and
prediction from previous samples, which ultimately affects the quality of the
awareness the CCW generates.
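The three estimators named above can be sketched for a single lost position sample. The sketch below is illustrative only, assuming 1-D motion with scalar position, velocity, and acceleration; the noise parameters q and r are placeholder values, not the paper's tuning:

```python
def predict_cv(pos, vel, dt):
    """Constant-velocity extrapolation of a lost position sample."""
    return pos + vel * dt

def predict_ca(pos, vel, acc, dt):
    """Constant-acceleration extrapolation of a lost position sample."""
    return pos + vel * dt + 0.5 * acc * dt ** 2

class Kalman1D:
    """Minimal 1-D constant-velocity Kalman filter.

    State is (position, velocity); predict() extrapolates across a lost
    sample, update() corrects with a received measurement z.
    """

    def __init__(self, dt, q=0.01, r=0.1):
        self.dt, self.q, self.r = dt, q, r
        self.p, self.v = 0.0, 0.0             # state estimate
        self.P = [[1.0, 0.0], [0.0, 1.0]]     # estimate covariance

    def predict(self):
        dt, q, P = self.dt, self.q, self.P
        self.p += dt * self.v                 # x' = F x, F = [[1, dt], [0, 1]]
        self.P = [                            # P' = F P F^T + Q
            [P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
             P[0][1] + dt * P[1][1]],
            [P[1][0] + dt * P[1][1], P[1][1] + q],
        ]
        return self.p

    def update(self, z):
        y = z - self.p                        # innovation
        S = self.P[0][0] + self.r             # innovation variance
        k0, k1 = self.P[0][0] / S, self.P[1][0] / S   # Kalman gain
        self.p += k0 * y
        self.v += k1 * y
        P = self.P                            # P = (I - K H) P
        self.P = [
            [(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
            [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]],
        ]
```

Under repeated packet loss, the CV and CA models extrapolate blindly from the last received state, while the Kalman filter also tracks its own uncertainty, which is what makes it attractive at higher loss rates.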
An evaluation framework for stereo-based driver assistance
This is the post-print version of the article. Copyright @ 2012 Springer Verlag.
The accuracy of stereo algorithms or optical flow methods is commonly assessed
by comparing the results against the Middlebury database. However, equivalent
data for automotive or robotics applications rarely exist, as they are
difficult to obtain. As our main contribution, we introduce an evaluation
framework tailored for stereo-based driver assistance that delivers excellent
performance measures while circumventing manual labeling effort. Within this
framework one can combine several ways of ground truthing, different comparison
metrics, and large image databases. Using our framework, we show examples of
several ground-truthing techniques: implicit ground truthing (e.g. a sequence
recorded without a crash occurring), robotic vehicles with high-precision
sensors, and, to a small extent, manual labeling. To show the effectiveness of
our evaluation framework, we compare three different stereo algorithms at the
pixel and object level. In more detail, we evaluate an intermediate
representation called the Stixel World. Besides evaluating the accuracy of the
Stixels, we investigate the completeness (equivalent to the detection rate) of
the Stixel World versus the number of phantom Stixels. Among many findings,
this framework enables us to reduce the number of phantom Stixels by a factor
of three compared to the base parametrization, which had already been optimized
by test driving vehicles for distances exceeding 10,000 km.
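The completeness-versus-phantom trade-off described above can be made concrete with a toy matcher. The sketch below is a simplified stand-in, assuming one Stixel per image column and matching by column index with a distance tolerance; it is not the matching criterion actually used in the paper:

```python
def stixel_metrics(detected, ground_truth, dist_tol=1.0):
    """Compare detected Stixels against ground truth.

    Each Stixel is a (column, distance_in_metres) pair.  A ground-truth
    Stixel counts as found if a detection exists in the same column within
    dist_tol metres; detections with no ground-truth match are phantoms.
    Returns (completeness, phantom_count).
    """
    gt_by_col = dict(ground_truth)
    det_by_col = dict(detected)
    matched = sum(
        1 for col, d in ground_truth
        if col in det_by_col and abs(det_by_col[col] - d) <= dist_tol
    )
    phantoms = sum(
        1 for col, d in detected
        if col not in gt_by_col or abs(gt_by_col[col] - d) > dist_tol
    )
    completeness = matched / len(ground_truth) if ground_truth else 1.0
    return completeness, phantoms
```

Sweeping the detector's parameters and plotting completeness against the phantom count is then a direct way to pick an operating point, which is the kind of comparison the framework automates at scale.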
FuSSI-Net: Fusion of Spatio-temporal Skeletons for Intention Prediction Network
Pedestrian intention recognition is essential for developing robust and safe
autonomous driving (AD) and advanced driver assistance system (ADAS)
functionalities for urban driving. In this work, we develop an end-to-end
pedestrian intention framework that performs well in day- and night-time
scenarios. Our framework relies on object detection bounding boxes combined
with skeletal features of human pose. We study early, late, and combined (early
and late) fusion mechanisms to exploit the skeletal features, reduce false
positives, and improve intention prediction performance. The early fusion
mechanism results in an AP of 0.89 and precision/recall of 0.79/0.89 for
pedestrian intention classification. Furthermore, we propose three new metrics
to properly evaluate pedestrian intention systems. Under these new evaluation
metrics, the proposed end-to-end network offers accurate pedestrian intention
prediction up to half a second ahead of the actual risky maneuver.
Comment: 5 pages, 6 figures, 5 tables, IEEE Asilomar SS
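Where early and late fusion combine the two modalities can be shown schematically. In the sketch below the feature vectors and the equal weighting are hypothetical placeholders; FuSSI-Net's actual classifiers are learned networks, not these functions:

```python
def early_fusion(bbox_features, skeleton_features):
    """Early fusion: concatenate modality features so that a single
    downstream classifier sees bounding-box and pose evidence jointly."""
    return list(bbox_features) + list(skeleton_features)

def late_fusion(p_bbox, p_skeleton, w=0.5):
    """Late fusion: each modality has its own classifier; combine their
    crossing-intention probabilities with a weighted average."""
    return w * p_bbox + (1.0 - w) * p_skeleton
```

A combined scheme applies both: an early-fused classifier contributes one score that is then late-fused with the per-modality scores.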
Car that Knows Before You Do: Anticipating Maneuvers via Learning Temporal Driving Models
Advanced Driver Assistance Systems (ADAS) have made driving safer over the
last decade. They prepare vehicles for unsafe road conditions and alert drivers
if they perform a dangerous maneuver. However, many accidents are unavoidable
because by the time drivers are alerted, it is already too late. Anticipating
maneuvers beforehand can alert drivers before they perform the maneuver and
also give ADAS more time to avoid or prepare for the danger.
In this work we anticipate driving maneuvers a few seconds before they occur.
For this purpose we equip a car with cameras and a computing device to capture
the driving context from both inside and outside of the car. We propose an
Autoregressive Input-Output HMM to model the contextual information along with
the maneuvers. We evaluate our approach on a diverse data set with 1180 miles
of natural freeway and city driving and show that we can anticipate maneuvers
3.5 seconds before they occur with over 80% F1-score in real time.
Comment: ICCV 2015, http://brain4cars.co
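A full Autoregressive Input-Output HMM is beyond a short sketch, but its core anticipation step, maintaining a filtered probability over latent maneuver states as context observations arrive, can be illustrated with a plain HMM forward recursion. The two states, observation symbols, and probabilities below are invented for illustration:

```python
def filter_maneuver(obs_seq, start, trans, emit):
    """HMM forward recursion, normalised at each step.

    start[s]     -- prior probability of latent maneuver state s
    trans[s][t]  -- transition probability from state s to state t
    emit[s][o]   -- probability of observing symbol o in state s
    Returns P(state | observations so far) after the last observation.
    """
    n = len(start)
    alpha = [start[s] * emit[s][obs_seq[0]] for s in range(n)]
    total = sum(alpha)
    alpha = [a / total for a in alpha]
    for o in obs_seq[1:]:
        alpha = [
            sum(alpha[sp] * trans[sp][s] for sp in range(n)) * emit[s][o]
            for s in range(n)
        ]
        total = sum(alpha)
        alpha = [a / total for a in alpha]
    return alpha
```

Anticipation then amounts to raising an alert as soon as the filtered probability of a maneuver state (e.g. "preparing to turn") crosses a threshold, seconds before the maneuver itself.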
DeepAccident: A Motion and Accident Prediction Benchmark for V2X Autonomous Driving
Safety is the primary priority of autonomous driving. Nevertheless, no
published dataset currently supports the direct and explainable safety
evaluation for autonomous driving. In this work, we propose DeepAccident, a
large-scale dataset generated via a realistic simulator containing diverse
accident scenarios that frequently occur in real-world driving. The proposed
DeepAccident dataset contains 57K annotated frames and 285K annotated samples,
approximately 7 times more than the large-scale nuScenes dataset with 40K
annotated samples. In addition, we propose a new task, end-to-end motion and
accident prediction, based on the proposed dataset, which can be used to
directly evaluate the accident prediction ability for different autonomous
driving algorithms. Furthermore, for each scenario, we set four vehicles along
with one infrastructure to record data, thus providing diverse viewpoints for
accident scenarios and enabling V2X (vehicle-to-everything) research on
perception and prediction tasks. Finally, we present a baseline V2X model named
V2XFormer that demonstrates superior performance for motion and accident
prediction and 3D object detection compared to the single-vehicle model.
Traffic Danger Recognition With Surveillance Cameras Without Training Data
We propose a traffic danger recognition model that works with arbitrary
traffic surveillance cameras to identify and predict car crashes. There are too
many cameras to monitor manually. Therefore, we developed a model to predict
and identify car crashes from surveillance cameras based on a 3D reconstruction
of the road plane and prediction of trajectories. For normal traffic, it
supports real-time proactive safety checks of speeds and distances between
vehicles to provide insights about possible high-risk areas. We achieve good
prediction and recognition of car crashes without using any labeled training
data of crashes. Experiments on the BrnoCompSpeed dataset show that our model
can accurately monitor the road, with mean errors of 1.80% for distance
measurement, 2.77 km/h for speed measurement, 0.24 m for car position
prediction, and 2.53 km/h for speed prediction.
Comment: To be published in proceedings of Advanced Video and Signal-based
Surveillance (AVSS), 2018 15th IEEE International Conference on, pp. 378-383,
IEEE
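Once detections are projected onto the reconstructed road plane, the speed and position estimates above reduce to simple kinematics on timestamped 2-D points. A minimal sketch, assuming road-plane coordinates in metres and timestamps in seconds:

```python
import math

def speed_kmh(p1, p2, t1, t2):
    """Speed between two road-plane positions (metres) at times t1 < t2 (s)."""
    dist = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    return dist / (t2 - t1) * 3.6          # m/s -> km/h

def predict_position(p1, p2, t1, t2, t_future):
    """Constant-velocity extrapolation of the trajectory to t_future."""
    dt = t2 - t1
    vx = (p2[0] - p1[0]) / dt
    vy = (p2[1] - p1[1]) / dt
    h = t_future - t2
    return (p2[0] + vx * h, p2[1] + vy * h)
```

The reported error figures (e.g. 2.77 km/h for speed) are then dominated by how accurately the 3D reconstruction maps pixels to road-plane metres, not by this kinematic step.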
Implementation and Evaluation of a Cooperative Vehicle-to-Pedestrian Safety Application
While the development of Vehicle-to-Vehicle (V2V) safety applications based
on Dedicated Short-Range Communications (DSRC) has been undergoing
standardization for more than a decade, such applications are still largely
missing for Vulnerable Road Users (VRUs). The absence of collaborative systems
between VRUs and vehicles has been the main reason for this lack of attention.
Recent developments in Wi-Fi Direct and DSRC-enabled smartphones are changing
this perspective. Leveraging the existing V2V platforms, we propose a new
framework that uses a DSRC-enabled smartphone to extend safety benefits to
VRUs. Interoperability of applications between vehicles and portable
DSRC-enabled devices is achieved through the SAE J2735 Personal Safety Message
(PSM). However, because VRU movement dynamics, response times, and crash
scenarios are fundamentally different from those of vehicles, a specific
framework should be designed for VRU safety applications to study their
performance. In this article, we first propose an end-to-end
Vehicle-to-Pedestrian (V2P) framework that provides situational awareness and
hazard detection based on the most common and injury-prone crash scenarios. We
then explain the details of our VRU safety module, including the target
classification and collision detection algorithms. Furthermore, we propose and
evaluate a mitigation solution for congestion and power consumption issues in
such systems. Finally, the whole system is implemented and analyzed for
realistic crash scenarios.
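Collision detection in such a V2P module commonly reduces to a time-to-collision test on the relative kinematics exchanged over DSRC. The closed-form sketch below assumes straight-line constant-velocity motion and a single combined safety radius, which is a simplification of the paper's scenario-specific algorithms:

```python
import math

def time_to_collision(rel_pos, rel_vel, radius):
    """Smallest t >= 0 with |rel_pos + t * rel_vel| <= radius, else None.

    rel_pos: pedestrian position relative to the vehicle (m)
    rel_vel: pedestrian velocity relative to the vehicle (m/s)
    radius:  combined safety radius of vehicle and pedestrian (m)
    Solves the quadratic |p + v t|^2 = radius^2 for the entry time.
    """
    px, py = rel_pos
    vx, vy = rel_vel
    a = vx * vx + vy * vy
    b = 2.0 * (px * vx + py * vy)
    c = px * px + py * py - radius * radius
    if c <= 0.0:
        return 0.0              # already within the safety radius
    if a == 0.0:
        return None             # no relative motion
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None             # paths never come within the radius
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t >= 0.0 else None
```

A warning would then be issued when the returned time drops below the pedestrian's or driver's reaction-time budget, which is exactly where VRU-specific response times diverge from the vehicle-only assumptions of V2V applications.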