Controlling Steering Angle for Cooperative Self-driving Vehicles utilizing CNN and LSTM-based Deep Networks
A fundamental challenge for autonomous vehicles is adjusting the steering
angle under varying road conditions. Recent state-of-the-art solutions to
this challenge employ deep learning techniques, which provide an end-to-end
mapping from raw input images to predicted steering angles with high
accuracy. However, most of these works ignore the temporal dependencies
between consecutive image frames. In this paper, we tackle the problem of
utilizing multiple sets of images shared between two autonomous vehicles to
improve the accuracy of controlling the steering angle by considering the
temporal dependencies between the image frames, a problem that has not been
widely studied in the literature. We present and study a new deep
architecture that predicts the steering angle automatically using Long
Short-Term Memory (LSTM). It is an end-to-end network combining CNN, LSTM,
and fully connected (FC) layers, and it takes both present and future images
(shared by a vehicle ahead via Vehicle-to-Vehicle (V2V) communication) as
input to control the steering angle. Our model
demonstrates the lowest error when compared to other existing approaches in
the literature.

Comment: Accepted in IV 2019, 6 pages, 9 figures
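The CNN-LSTM-FC pipeline described in the abstract can be illustrated in miniature. The sketch below is a toy, not the paper's implementation: a single linear map stands in for the CNN feature extractor, a hand-written LSTM cell consumes the present and future (V2V-shared) frames in temporal order, and an FC layer maps the final hidden state to a scalar steering angle. All dimensions, parameter names, and the random toy data are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cnn_features(frame, W_feat):
    # Stand-in for a CNN backbone: flatten the frame, apply a linear map.
    return np.tanh(W_feat @ frame.ravel())

def lstm_step(x, h, c, Wx, Wh, b):
    # One LSTM cell step; gates stacked as [input, forget, cell, output].
    z = Wx @ x + Wh @ h + b
    i, f, g, o = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c = f * c + i * np.tanh(g)      # update cell state
    h = o * np.tanh(c)              # update hidden state
    return h, c

def predict_steering(frames, params):
    # frames: present + future images (shared via V2V), oldest first.
    W_feat, Wx, Wh, b, W_fc = params
    H = Wh.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    for frame in frames:
        x = cnn_features(frame, W_feat)
        h, c = lstm_step(x, h, c, Wx, Wh, b)
    return float(W_fc @ h)          # FC head -> scalar steering angle

# Toy sizes: five 8x8 "images", 16-d features, hidden size 8.
F, D, H = 16, 64, 8
params = (rng.standard_normal((F, D)) * 0.1,
          rng.standard_normal((4 * H, F)) * 0.1,
          rng.standard_normal((4 * H, H)) * 0.1,
          np.zeros(4 * H),
          rng.standard_normal(H) * 0.1)
frames = rng.standard_normal((5, 8, 8))
angle = predict_steering(frames, params)
```

In a real system the linear `cnn_features` would be replaced by a trained convolutional network, and the weights would be learned end-to-end against recorded steering angles.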
Learning to Predict Navigational Patterns from Partial Observations
Human beings cooperatively navigate rule-constrained environments by adhering
to mutually known navigational patterns, which may be represented as
directional pathways or road lanes. Inferring these navigational patterns from
incompletely observed environments is required for intelligent mobile robots
operating in unmapped locations. However, algorithmically defining these
navigational patterns is nontrivial. This paper presents the first
self-supervised learning (SSL) method for learning to infer navigational
patterns in real-world environments from partial observations only. We explain
how geometric data augmentation, predictive world modeling, and an
information-theoretic regularizer enable our model to predict an unbiased
local directional soft lane probability (DSLP) field in the limit of infinite
data. We demonstrate how to infer global navigational patterns by fitting a
maximum likelihood graph to the DSLP field. Experiments show that our SSL model
outperforms two SOTA supervised lane graph prediction models on the nuScenes
dataset. We propose our SSL method as a scalable and interpretable continual
learning paradigm for navigation by perception. Code released upon
publication.

Comment: Under review
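The idea of fitting a maximum likelihood structure to a soft lane probability field can be illustrated with a simplified analogue. The sketch below is not the paper's method: instead of a full graph over a DSLP field, it extracts a single maximum log-likelihood path through a toy 2D lane probability grid using Viterbi-style dynamic programming, with the lane allowed to shift by at most one row per column. The grid values and connectivity rule are assumptions made for the example.

```python
import numpy as np

def max_likelihood_path(prob_field):
    """Column-by-column DP: find the row sequence maximizing the summed
    log lane probability, moving at most one row between columns."""
    logp = np.log(np.clip(prob_field, 1e-9, 1.0))
    n_rows, n_cols = logp.shape
    score = np.full((n_rows, n_cols), -np.inf)
    back = np.zeros((n_rows, n_cols), dtype=int)
    score[:, 0] = logp[:, 0]
    for c in range(1, n_cols):
        for r in range(n_rows):
            lo, hi = max(0, r - 1), min(n_rows, r + 2)
            prev = score[lo:hi, c - 1]
            k = int(np.argmax(prev))           # best reachable predecessor
            score[r, c] = prev[k] + logp[r, c]
            back[r, c] = lo + k
    # Backtrack from the best final row.
    path = [int(np.argmax(score[:, -1]))]
    for c in range(n_cols - 1, 0, -1):
        path.append(int(back[path[-1], c]))
    return path[::-1]

# Toy field where the most probable lane drifts from row 1 to row 2.
field = np.array([[0.1, 0.1, 0.1, 0.1],
                  [0.9, 0.8, 0.3, 0.2],
                  [0.1, 0.3, 0.8, 0.9]])
path = max_likelihood_path(field)  # -> [1, 1, 2, 2]
```

The paper's graph fitting generalizes this idea to branching, merging lane structures over a learned directional field rather than a single path over a fixed grid.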