Controlling Steering Angle for Cooperative Self-driving Vehicles utilizing CNN and LSTM-based Deep Networks
A fundamental challenge in autonomous vehicles is adjusting the steering angle under different road conditions. Recent state-of-the-art solutions address this challenge with deep learning techniques, as they provide an end-to-end solution that predicts steering angles directly from raw input images with high accuracy. Most of these works, however, ignore the temporal dependencies between image frames. In this paper, we tackle the problem of utilizing multiple sets of images shared between two autonomous vehicles to improve the accuracy of steering angle control by considering the temporal dependencies between image frames, a problem that has not been widely studied in the literature. We present and study a new deep architecture that predicts the steering angle automatically by using Long Short-Term Memory (LSTM). Our deep architecture is an end-to-end network that utilizes CNN, LSTM, and fully connected (FC) layers, and it uses both present and future images (shared by a vehicle ahead via Vehicle-to-Vehicle (V2V) communication) as input to control the steering angle. Our model demonstrates the lowest error when compared to the other existing approaches in the literature.
Comment: Accepted in IV 2019, 6 pages, 9 figures
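The CNN–LSTM–FC pipeline described in the abstract can be sketched in miniature. This is an illustrative NumPy sketch, not the paper's implementation: the CNN feature extractor is assumed to run upstream and hand a feature vector per frame to the recurrent stage, and all layer sizes and weight initializations below are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MiniLSTMRegressor:
    """Toy LSTM + FC head mapping a sequence of per-frame CNN feature
    vectors to a single steering angle, modeling the temporal
    dependency between frames that a frame-by-frame CNN ignores."""

    def __init__(self, feat_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        d = feat_dim + hidden_dim
        # One weight matrix and bias per LSTM gate: input, forget, output, cell.
        self.W = {g: rng.normal(0.0, 0.1, (hidden_dim, d)) for g in "ifoc"}
        self.b = {g: np.zeros(hidden_dim) for g in "ifoc"}
        self.w_fc = rng.normal(0.0, 0.1, hidden_dim)  # FC regression head
        self.hidden_dim = hidden_dim

    def predict(self, frames):
        """frames: (seq_len, feat_dim) array of per-frame CNN features,
        e.g. the vehicle's own frames followed by V2V-shared ones."""
        h = np.zeros(self.hidden_dim)
        c = np.zeros(self.hidden_dim)
        for x in frames:  # carry state across the frame sequence
            z = np.concatenate([x, h])
            i = sigmoid(self.W["i"] @ z + self.b["i"])
            f = sigmoid(self.W["f"] @ z + self.b["f"])
            o = sigmoid(self.W["o"] @ z + self.b["o"])
            g = np.tanh(self.W["c"] @ z + self.b["c"])
            c = f * c + i * g
            h = o * np.tanh(c)
        return float(self.w_fc @ h)  # steering angle from final state
```

In the paper's setting the input sequence would interleave the ego vehicle's present frames with future-view frames shared by the vehicle ahead over V2V; here the sequence is just a generic `(seq_len, feat_dim)` array.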
Autonomous Vehicle Control: End-to-end Learning in Simulated Environments
This paper examines end-to-end learning for autonomous vehicles in diverse, simulated environments containing other vehicles, traffic lights, and traffic signs, in weather conditions ranging from sunny to heavy rain. The paper proposes an architecture combining a traditional Convolutional Neural Network with a recurrent layer to facilitate the learning of both spatial and temporal relationships. Furthermore, the paper suggests a model that supports navigational input from the user to facilitate the use of a global route planner and achieve a more comprehensive system. The paper also explores some of the uncertainties regarding the implementation of end-to-end systems; specifically, how a system's overall performance is affected by the size of the training dataset, the allowed prediction frequency, and the number of hidden states in the system's recurrent module. The proposed system is trained using expert driving data captured in various simulated settings and evaluated by its real-time driving performance in unseen simulated environments. The results of the paper indicate that end-to-end systems can operate autonomously in simulated environments, in a range of different weather conditions. Additionally, it was found that using ten hidden states for the system's recurrent module was optimal. The results further show that the system was sensitive to small reductions in dataset size and that a prediction frequency of 15 Hz was required for the system to perform at its full potential.
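Navigational input from a route planner can be wired into an end-to-end network in several ways. One common pattern, sketched below as a hypothetical illustration rather than this paper's exact design, is to give the network one output head per high-level command and let the planner's command select which head produces the control value:

```python
import numpy as np

# High-level commands a global route planner might emit; the set and the
# per-command linear heads are illustrative assumptions.
COMMANDS = ("left", "straight", "right")

class CommandConditionedHead:
    """One linear output head per navigational command; the command
    selects which head maps shared perception features to steering."""

    def __init__(self, feat_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.heads = {c: (rng.normal(0.0, 0.1, feat_dim), 0.0) for c in COMMANDS}

    def __call__(self, features, command):
        w, b = self.heads[command]  # planner's command picks the branch
        return float(w @ features + b)
```

The alternative design, concatenating an encoded command onto the feature vector of a single shared head, trades per-command specialization for a smaller model.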
Driver Distraction Identification with an Ensemble of Convolutional Neural Networks
The World Health Organization (WHO) reported 1.25 million deaths yearly due to road traffic accidents worldwide, and the number has been continuously increasing over the last few years. Nearly a fifth of these accidents are caused by distracted drivers. Existing work on distracted-driver detection is concerned with a small set of distractions (mostly cell phone usage), and unreliable ad-hoc methods are often used. In this paper, we present the first publicly available dataset for driver distraction identification with more distraction postures than existing alternatives. In addition, we propose a reliable deep learning-based solution that achieves 90% accuracy. The system consists of a genetically weighted ensemble of convolutional neural networks; we show that weighting an ensemble of classifiers with a genetic algorithm yields better classification confidence. We also study the effect of different visual elements on distraction detection by means of face and hand localization and skin segmentation. Finally, we present a thinned version of our ensemble that achieves 84.64% classification accuracy and can operate in a real-time environment.
Comment: arXiv admin note: substantial text overlap with arXiv:1706.0949
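The genetically weighted ensemble can be illustrated with a small sketch: a plain genetic algorithm searching for per-classifier fusion weights that maximize accuracy on held-out predictions. The GA operators used here (truncation selection, uniform crossover, Gaussian mutation) and all hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def ga_ensemble_weights(probs, labels, pop=30, gens=40, seed=0):
    """Evolve fusion weights for an ensemble of classifiers.

    probs: (n_classifiers, n_samples, n_classes) per-model class
    probabilities on a validation set; labels: (n_samples,) true classes.
    Returns a weight vector summing to 1; fitness is the accuracy of the
    weighted-average (soft-voting) prediction."""
    rng = np.random.default_rng(seed)
    n_clf = probs.shape[0]

    def fitness(w):
        w = w / w.sum()
        fused = np.tensordot(w, probs, axes=1)  # weighted soft vote
        return np.mean(fused.argmax(axis=1) == labels)

    population = rng.random((pop, n_clf))
    for _ in range(gens):
        scores = np.array([fitness(w) for w in population])
        order = np.argsort(scores)[::-1]
        parents = population[order[: pop // 2]]      # truncation selection
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(n_clf) < 0.5, a, b)  # uniform crossover
            child += rng.normal(0.0, 0.05, n_clf)            # Gaussian mutation
            children.append(np.clip(child, 1e-6, None))
        population = np.vstack([parents, children])
    best = max(population, key=fitness)
    return best / best.sum()
```

Because the best parents are carried over unchanged each generation, the fittest weight vector found so far is never lost.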
Real-time End-to-End Federated Learning: An Automotive Case Study
With the development of and increasing interest in ML/DL, companies are eager to utilize these methods to improve their service quality and user experience. Federated Learning has been introduced as an efficient model training approach that distributes and speeds up time-consuming model training while preserving user data privacy. However, common Federated Learning methods apply a synchronized protocol to perform model aggregation, which turns out to be inflexible and unable to adapt to the rapidly evolving environments and heterogeneous hardware settings of real-world systems. In this paper, we introduce an approach to real-time end-to-end Federated Learning combined with a novel asynchronous model aggregation protocol. We validate our approach in an industrial use case in the automotive domain, focusing on steering wheel angle prediction for autonomous driving. Our results show that asynchronous Federated Learning can significantly improve the prediction performance of local edge models and reach the same accuracy level as the centralized machine learning method. Moreover, the approach can reduce communication overhead, accelerate model training, and consume real-time streaming data by utilizing a sliding training window, which proves highly efficient when deploying ML/DL components to heterogeneous real-world embedded systems.
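The difference between synchronous and asynchronous aggregation can be sketched from the server's side: instead of blocking until every client reports, the server folds in each local model the moment it arrives, down-weighting updates that were trained against an old global version. The mixing rule below (a staleness-decayed moving average) is an illustrative assumption, not the paper's exact protocol.

```python
import numpy as np

class AsyncFedServer:
    """Asynchronous aggregation sketch: clients pull the global model,
    train locally, and push back at their own pace; no synchronization
    barrier across heterogeneous edge devices."""

    def __init__(self, init_weights, base_lr=0.5):
        self.weights = np.asarray(init_weights, dtype=float)
        self.version = 0          # bumped on every accepted update
        self.base_lr = base_lr

    def pull(self):
        """Client fetches the current global model and its version."""
        return self.weights.copy(), self.version

    def push(self, client_weights, client_version):
        """Fold one client's locally trained weights into the global
        model immediately; the staler the update, the smaller the step."""
        staleness = self.version - client_version
        alpha = self.base_lr / (1.0 + staleness)
        self.weights = (1 - alpha) * self.weights \
            + alpha * np.asarray(client_weights, dtype=float)
        self.version += 1
        return self.version
```

On the client side, the sliding training window described in the abstract would keep only the most recent streaming samples (e.g. the last N steering-angle readings) for each local training round, bounding memory and keeping the local model current.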