A Driver Behavior Modeling Structure Based on Non-parametric Bayesian Stochastic Hybrid Architecture
The heterogeneous nature of vehicular networks, which results from the
co-existence of human-driven, semi-automated, and fully autonomous vehicles, is
a major challenge to realizing intelligent transportation systems with an
acceptable level of safety, comfort, and efficiency. Safety applications suffer
severely from communication resource limitations, particularly in dense and
congested vehicular networks. The idea of model-based communication (MBC) has
recently been proposed to address this issue. In this work, we propose the
Gaussian Process-based Stochastic Hybrid System with Cumulative Relevant
History (CRH-GP-SHS) framework, a hierarchical stochastic hybrid modeling
structure built upon a non-parametric Bayesian inference method, namely
Gaussian processes. The framework is designed to be employed within the MBC
context to jointly model driver/vehicle behavior as a stochastic object.
Non-parametric Bayesian methods relieve the limitations imposed by
non-evolutionary model structures and enable the proposed framework to properly
capture different stochastic behaviors. The
performance of the proposed CRH-GP-SHS framework at the inter-mode level has
been evaluated over a set of realistic lane change maneuvers from the
NGSIM-US101 dataset. The results show a noticeable performance improvement for
GP over the baseline constant speed model, particularly in critical situations
such as highly congested networks. Moreover, an augmented model has also been
proposed, a composition of the GP and constant speed models, which is capable
of capturing driver behavior under various network reliability conditions.

Comment: This work has been accepted at the 2018 IEEE Connected and Automated
Vehicles Symposium (CAVS 2018).
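The abstract compares a GP-based predictor against a constant speed baseline at the inter-mode level. As a rough illustration of that comparison only (not the paper's CRH-GP-SHS framework), the sketch below runs a minimal Gaussian Process regression on a toy speed history; the kernel hyperparameters, noise level, and speed values are assumptions for the example:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(t_train, y_train, t_test, noise=1e-2):
    """Posterior mean of a zero-mean GP conditioned on (t_train, y_train)."""
    K = rbf_kernel(t_train, t_train) + noise * np.eye(len(t_train))
    K_s = rbf_kernel(t_test, t_train)
    return K_s @ np.linalg.solve(K, y_train)

# Toy speed history (m/s): a vehicle decelerating, e.g. before a lane change.
t_hist = np.arange(5.0)                     # past timestamps (s)
v_hist = np.array([15.0, 14.2, 13.1, 11.8, 10.2])
t_next = np.array([5.0])                    # predict one step ahead

# GP prediction (data centered before regression) vs constant speed baseline,
# which simply repeats the last observed speed.
v_gp = gp_predict(t_hist, v_hist - v_hist.mean(), t_next) + v_hist.mean()
v_const = v_hist[-1]

print(float(v_gp[0]), v_const)
```

The constant speed baseline ignores the deceleration trend entirely, while the GP conditions on the whole recent history; the paper's framework additionally composes such models inside a stochastic hybrid structure, which this sketch does not attempt.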
Controlling Steering Angle for Cooperative Self-driving Vehicles utilizing CNN and LSTM-based Deep Networks
A fundamental challenge in autonomous vehicles is adjusting the steering
angle under different road conditions. Recent state-of-the-art solutions to
this challenge include deep learning techniques, as they provide an end-to-end
solution that predicts steering angles directly from raw input images with
high accuracy. Most of these works, however, ignore the temporal dependencies
between image frames. In this paper, we tackle the problem of
utilizing multiple sets of images shared between two autonomous vehicles to
improve the accuracy of controlling the steering angle by considering the
temporal dependencies between the image frames. This problem has not been
widely studied in the literature. We present and study a new deep architecture
that predicts the steering angle automatically using Long Short-Term Memory
(LSTM). Our architecture is an end-to-end network
that utilizes CNN, LSTM, and fully connected (FC) layers, and it uses both
present and future images (shared by a vehicle ahead via Vehicle-to-Vehicle
(V2V) communication) as input to control the steering angle. Our model
demonstrates the lowest error when compared to other existing approaches in
the literature.

Comment: Accepted at IV 2019; 6 pages, 9 figures.
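The architecture described above chains a CNN feature extractor, an LSTM over the frame sequence, and FC layers into one end-to-end network. The sketch below shows that pipeline shape in plain NumPy; the CNN backbone is stubbed by a single linear map, all weights are random, and every dimension and name is an assumption for illustration, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cnn_features(frame, W):
    """Stand-in for a CNN backbone: flatten the frame, apply one linear map.
    (A real model would use convolutional layers; this keeps the sketch tiny.)"""
    return np.tanh(W @ frame.ravel())

def lstm_step(x, h, c, params):
    """One step of a standard LSTM cell (input/forget/cell/output gates)."""
    Wx, Wh, b = params
    z = Wx @ x + Wh @ h + b
    H = h.size
    i, f, g, o = z[:H], z[H:2*H], z[2*H:3*H], z[3*H:]
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c = f * c + i * np.tanh(g)
    h = o * np.tanh(c)
    return h, c

# Toy dimensions: 8x8 grayscale frames, 16-d CNN features, 8-d LSTM state.
F, H = 16, 8
W_cnn = rng.standard_normal((F, 64)) * 0.1
params = (rng.standard_normal((4 * H, F)) * 0.1,
          rng.standard_normal((4 * H, H)) * 0.1,
          np.zeros(4 * H))
w_fc = rng.standard_normal(H) * 0.1        # FC head -> scalar angle

def predict_steering(frames):
    """Feed per-frame CNN features through the LSTM; FC head outputs the angle."""
    h, c = np.zeros(H), np.zeros(H)
    for frame in frames:
        h, c = lstm_step(cnn_features(frame, W_cnn), h, c, params)
    return float(w_fc @ h)

# A sequence of frames: the ego vehicle's own frames plus "future" frames
# shared by a vehicle ahead over V2V (random toy images here).
frames = rng.standard_normal((5, 8, 8))
angle = predict_steering(frames)
print(angle)
```

The point of the temporal stage is that the LSTM's hidden state carries information across frames, so the predicted angle depends on the whole sequence, including the V2V-shared future frames, rather than on any single image.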