Imitating Driver Behavior with Generative Adversarial Networks
The ability to accurately predict and simulate human driving behavior is
critical for the development of intelligent transportation systems. Traditional
modeling methods have employed simple parametric models and behavioral cloning.
This paper adopts a method for overcoming the problem of cascading errors
inherent in prior approaches, resulting in realistic behavior that is robust to
trajectory perturbations. We extend Generative Adversarial Imitation Learning
to the training of recurrent policies, and we demonstrate that our model
outperforms rule-based controllers and maximum likelihood models in realistic
highway simulations. Our model reproduces emergent behavior of human
drivers, such as lane change rate, while maintaining realistic control over
long time horizons.
Comment: 8 pages, 6 figures
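The adversarial setup described in this abstract can be sketched minimally: a discriminator learns to separate expert state-action pairs from the policy's rollouts, and its output is converted into a surrogate reward that the policy maximizes. The features, shapes, learning rate, and reward form below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def discriminator(theta, sa):
    """Logistic discriminator D(s, a) -> probability the pair is expert data."""
    return 1.0 / (1.0 + np.exp(-sa @ theta))

def surrogate_reward(theta, sa):
    """GAIL-style surrogate reward: high when the policy fools the discriminator."""
    d = discriminator(theta, sa)
    return -np.log(np.clip(1.0 - d, 1e-8, 1.0))

rng = np.random.default_rng(0)
expert = rng.normal(1.0, 0.3, size=(64, 4))   # expert (state, action) features
policy = rng.normal(-1.0, 0.3, size=(64, 4))  # current policy rollouts

theta = np.zeros(4)
for _ in range(200):
    # Gradient ascent on log D(expert) + log(1 - D(policy)).
    grad = expert.T @ (1.0 - discriminator(theta, expert)) / len(expert)
    grad -= policy.T @ discriminator(theta, policy) / len(policy)
    theta += 0.5 * grad

# The learned reward signal now favors expert-like state-action pairs.
print(surrogate_reward(theta, expert).mean() > surrogate_reward(theta, policy).mean())
```

Because the policy is rewarded for fooling the discriminator at every step, errors do not compound the way they do in behavioral cloning, which is the cascading-error point the abstract makes.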
Modeling Human Driving Behavior through Generative Adversarial Imitation Learning
Imitation learning is an approach for generating intelligent behavior when
the cost function is unknown or difficult to specify. Building upon work in
inverse reinforcement learning (IRL), Generative Adversarial Imitation Learning
(GAIL) aims to provide effective imitation even for problems with large or
continuous state and action spaces. Driver modeling is one example of a problem
where the state and action spaces are continuous. Human driving behavior is
characterized by non-linearity and stochasticity, and the underlying cost
function is unknown. As a result, learning from human driving demonstrations is
a promising approach for generating human-like driving behavior. This article
describes the use of GAIL for learning-based driver modeling. Because driver
modeling is inherently a multi-agent problem, where the interaction between
agents needs to be modeled, this paper describes a parameter-sharing extension
of GAIL called PS-GAIL to tackle multi-agent driver modeling. In addition, GAIL
is domain agnostic, making it difficult to encode specific knowledge relevant
to driving in the learning process. This paper describes Reward Augmented
Imitation Learning (RAIL), which modifies the reward signal to provide
domain-specific knowledge to the agent. Finally, human demonstrations are
dependent upon latent factors that may not be captured by GAIL. This paper
describes Burn-InfoGAIL, which allows for disentanglement of latent variability
in demonstrations. Imitation learning experiments are performed using NGSIM, a
real-world highway driving dataset. Experiments show that these modifications
to GAIL can successfully model highway driving behavior, accurately replicating
human demonstrations and generating realistic, emergent behavior in the traffic
flow arising from the interaction between driving agents.
Comment: 28 pages, 8 figures. arXiv admin note: text overlap with arXiv:1803.0104
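As a rough sketch of the RAIL idea, reward augmentation can be as simple as adding fixed penalties to the imitation reward when domain-specific events occur; the event flags and penalty magnitudes here are hypothetical, not the paper's values.

```python
def augmented_reward(imitation_reward, state,
                     offroad_penalty=-2.0, collision_penalty=-4.0):
    """RAIL-style reward shaping: the learned imitation reward plus
    hand-crafted penalties encoding driving-domain knowledge.
    Penalty magnitudes are illustrative assumptions."""
    r = imitation_reward
    if state["offroad"]:
        r += offroad_penalty
    if state["collision"]:
        r += collision_penalty
    return r

safe = {"offroad": False, "collision": False}
bad = {"offroad": True, "collision": False}

print(augmented_reward(1.0, safe))  # 1.0
print(augmented_reward(1.0, bad))   # -1.0
```

This is how domain knowledge enters a training process that is otherwise domain agnostic: the discriminator still supplies the imitation signal, while the penalties discourage behavior no human demonstration would exhibit.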
Resolving uncertainty on the fly: Modeling adaptive driving behavior as active inference
Understanding adaptive human driving behavior, in particular how drivers
manage uncertainty, is of key importance for developing simulated human driver
models that can be used in the evaluation and development of autonomous
vehicles. However, existing traffic psychology models of adaptive driving
behavior either lack computational rigor or only address specific scenarios
and/or behavioral phenomena. While models developed in the fields of machine
learning and robotics can effectively learn adaptive driving behavior from
data, due to their black box nature, they offer little or no explanation of the
mechanisms underlying the adaptive behavior. Thus, a generalizable,
interpretable, computational model of adaptive human driving behavior is still
lacking. This paper proposes such a model based on active inference, a
behavioral modeling framework originating in computational neuroscience. The
model offers a principled solution to how humans trade progress against caution
through policy selection based on the single mandate to minimize expected free
energy. This casts goal-seeking and information-seeking (uncertainty-resolving)
behavior under a single objective function, allowing the model to seamlessly
resolve uncertainty as a means to obtain its goals. We apply the model in two
apparently disparate driving scenarios that require managing uncertainty: (1)
driving past an occluding object and (2) visual time sharing between driving
and a secondary task, and show how human-like adaptive driving behavior emerges
from the single principle of expected free energy minimization.
Comment: 33 pages, 13 figures
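A minimal sketch of policy selection by expected free energy minimization over discrete outcomes, assuming the standard decomposition into risk (KL divergence from preferred outcomes) plus ambiguity (expected observation entropy); the outcome distributions and entropies are made-up numbers for illustration.

```python
import numpy as np

def expected_free_energy(q_outcome, p_preferred, outcome_entropy):
    """EFE of one policy over discrete outcomes:
    risk = KL(predicted outcomes || preferred outcomes),
    ambiguity = expected observation entropy under predicted outcomes."""
    risk = np.sum(q_outcome * np.log(q_outcome / p_preferred))
    ambiguity = np.sum(q_outcome * outcome_entropy)
    return risk + ambiguity

p_preferred = np.array([0.7, 0.2, 0.1])  # the agent prefers outcome 0 (progress)
entropy = np.array([0.1, 0.1, 1.5])      # outcome 2 is perceptually ambiguous

policies = {                              # hypothetical predicted-outcome distributions
    "cautious": np.array([0.4, 0.5, 0.1]),
    "progress": np.array([0.7, 0.2, 0.1]),
    "risky":    np.array([0.3, 0.1, 0.6]),
}
efe = {name: expected_free_energy(q, p_preferred, entropy)
       for name, q in policies.items()}
best = min(efe, key=efe.get)
print(best)  # "progress": matches preferences and avoids the ambiguous outcome
```

The single objective handles both terms at once, which is the paper's point: goal-seeking (low risk) and uncertainty resolution (low ambiguity) fall out of the same minimization.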
Human Motion Trajectory Prediction: A Survey
With growing numbers of intelligent autonomous systems in human environments,
the ability of such systems to perceive, understand and anticipate human
behavior becomes increasingly important. Specifically, predicting future
positions of dynamic agents and planning considering such predictions are key
tasks for self-driving vehicles, service robots and advanced surveillance
systems. This paper provides a survey of human motion trajectory prediction. We
review, analyze and structure a large selection of work from different
communities and propose a taxonomy that categorizes existing methods based on
the motion modeling approach and level of contextual information used. We
provide an overview of the existing datasets and performance metrics. We
discuss limitations of the state of the art and outline directions for further
research.
Comment: Submitted to the International Journal of Robotics Research (IJRR), 37 pages
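Among the physics-based motion models such a taxonomy covers, constant-velocity extrapolation is the simplest and remains a common benchmark baseline; a sketch, with illustrative track data and time step.

```python
import numpy as np

def constant_velocity_predict(track, horizon, dt=0.1):
    """Extrapolate the last observed velocity for `horizon` future steps.
    A standard physics-based baseline in trajectory prediction benchmarks."""
    track = np.asarray(track, dtype=float)  # shape (T, 2): observed x, y positions
    v = (track[-1] - track[-2]) / dt        # last finite-difference velocity
    steps = np.arange(1, horizon + 1).reshape(-1, 1)
    return track[-1] + steps * v * dt       # shape (horizon, 2)

observed = [[0.0, 0.0], [1.0, 0.5]]         # two observed positions, dt = 0.1 s
pred = constant_velocity_predict(observed, horizon=3)
print(pred)  # straight-line continuation of the last observed step
```

Learned predictors in the surveyed literature are typically judged by how far they beat this kind of baseline on displacement-error metrics.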
Spatiotemporal Learning of Multivehicle Interaction Patterns in Lane-Change Scenarios
Interpretation of common-yet-challenging interaction scenarios can benefit
well-founded decisions for autonomous vehicles. Previous research achieved this
using their prior knowledge of specific scenarios with predefined models,
limiting their adaptive capabilities. This paper describes a Bayesian
nonparametric approach that leverages continuous (i.e., Gaussian processes) and
discrete (i.e., Dirichlet processes) stochastic processes to reveal underlying
interaction patterns of the ego vehicle with other nearby vehicles. Our model
relaxes dependency on the number of surrounding vehicles by developing an
acceleration-sensitive velocity field based on Gaussian processes. The
experiment results demonstrate that the velocity field can represent the
spatial interactions between the ego vehicle and its surroundings. Then, a
discrete Bayesian nonparametric model, integrating Dirichlet processes and
hidden Markov models, is developed to learn the interaction patterns over the
temporal space by segmenting and clustering the sequential interaction data
into interpretable granular patterns automatically. We then evaluate our
approach in the highway lane-change scenarios using the highD dataset collected
from real-world settings. Results demonstrate that our proposed Bayesian
nonparametric approach provides insight into the complicated lane-change
interactions of the ego vehicle with multiple surrounding traffic participants
through interpretable interaction patterns and their temporal transition
properties. Our approach also points toward efficient analysis of other kinds
of multi-agent interactions, such as vehicle-pedestrian interactions. View the
demos via https://youtu.be/z_vf9UHtdAM.
Comment: for the supplements, see https://chengyuan-zhang.github.io/Multivehicle-Interaction
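The continuous half of such a model, a Gaussian-process velocity field, can be sketched as standard GP posterior-mean regression from observed (position, velocity) samples; the kernel choice, length scale, and data below are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

def rbf(a, b, ell=2.0):
    """Squared-exponential kernel between position sets a (N, 2) and b (M, 2)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def gp_velocity_field(pos_obs, vel_obs, pos_query, noise=1e-4):
    """GP posterior mean of a velocity field, one independent GP per component."""
    K = rbf(pos_obs, pos_obs) + noise * np.eye(len(pos_obs))
    Ks = rbf(pos_query, pos_obs)
    return Ks @ np.linalg.solve(K, vel_obs)

# Hypothetical samples: vehicles further along x drive slower.
pos = np.array([[0.0, 0.0], [4.0, 0.0], [8.0, 0.0]])
vel = np.array([[30.0, 0.0], [25.0, 0.0], [20.0, 0.0]])

v_mid = gp_velocity_field(pos, vel, np.array([[4.0, 0.0]]))
print(v_mid)  # close to the observed [25, 0] at that location
```

Because the field is defined over positions rather than over a fixed-size vehicle list, the representation does not depend on how many surrounding vehicles are present, which is the dependency the abstract says the model relaxes.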
Video Killed the HD-Map: Predicting Driving Behavior Directly From Drone Images
The development of algorithms that learn behavioral driving models using
human demonstrations has led to increasingly realistic simulations. In general,
such models learn to jointly predict trajectories for all controlled agents by
exploiting road context information such as drivable lanes obtained from
manually annotated high-definition (HD) maps. Recent studies show that these
models can greatly benefit from increasing the amount of human data available
for training. However, the manual annotation of HD maps which is necessary for
every new location puts a bottleneck on efficiently scaling up human traffic
datasets. We propose a drone birdview image-based map (DBM) representation that
requires minimal annotation and provides rich road context information. We
evaluate multi-agent trajectory prediction using the DBM by incorporating it
into a differentiable driving simulator as an image-texture-based
differentiable rendering module. Our results demonstrate competitive
multi-agent trajectory prediction performance when using our DBM representation
as compared to models trained with rasterized HD maps.
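A minimal stand-in for image-based road context, in the spirit of the DBM, is an ego-centered crop of the drone birdview image; the patch size and the stand-in image here are hypothetical, not the paper's rendering module.

```python
import numpy as np

def ego_centered_patch(birdview, x, y, half=2):
    """Crop a (2*half+1)-square patch of a birdview image around an agent:
    road context read directly from pixels, with no HD-map annotation."""
    h, w = birdview.shape[:2]
    r, c = int(round(y)), int(round(x))
    r0, r1 = max(0, r - half), min(h, r + half + 1)
    c0, c1 = max(0, c - half), min(w, c + half + 1)
    return birdview[r0:r1, c0:c1]

img = np.arange(100).reshape(10, 10)   # stand-in 10x10 grayscale birdview
patch = ego_centered_patch(img, x=5, y=5)
print(patch.shape)  # (5, 5)
```

The appeal is the one the abstract argues: a crop like this needs no manual per-location annotation, so adding a new recording site does not require building a new HD map first.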
An active inference model of car following: Advantages and applications
Driver process models play a central role in the testing, verification, and
development of automated and autonomous vehicle technologies. Prior models
developed from control theory and physics-based rules are limited in automated
vehicle applications due to their restricted behavioral repertoire. Data-driven
machine learning models are more capable than rule-based models but are limited
by the need for large training datasets and their lack of interpretability,
i.e., an understandable link between input data and output behaviors. We
propose a novel car following modeling approach using active inference, which
has comparable behavioral flexibility to data-driven models while maintaining
interpretability. We assessed the proposed model, the Active Inference Driving
Agent (AIDA), through a benchmark analysis against the rule-based Intelligent
Driver Model, and two neural network Behavior Cloning models. The models were
trained and tested on a real-world driving dataset using a consistent process.
The testing results showed that the AIDA predicted driving controls
significantly better than the rule-based Intelligent Driver Model and had
similar accuracy to the data-driven neural network models in three out of four
evaluations. Subsequent interpretability analyses illustrated that the AIDA's
learned distributions were consistent with driver behavior theory and that
visualizations of the distributions could be used to directly comprehend the
model's decision making process and correct model errors attributable to
limited training data. The results indicate that the AIDA is a promising
alternative to black-box data-driven models and suggest a need for further
research focused on modeling driving style and model training with more diverse
datasets.
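The rule-based baseline named above, the Intelligent Driver Model, is specified by a closed-form acceleration law; a sketch using commonly cited default parameter values (the test speeds and gaps are illustrative).

```python
import math

def idm_acceleration(v, gap, dv, v0=30.0, T=1.5, a_max=1.0, b=2.0,
                     s0=2.0, delta=4.0):
    """Intelligent Driver Model: acceleration from own speed v, gap to the
    leader, and closing speed dv = v - v_leader.
    v0: desired speed, T: time headway, a_max/b: accel/comfortable decel,
    s0: minimum gap (commonly used default values)."""
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a_max * b)))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)

# Free road (huge gap): accelerate toward the desired speed.
print(idm_acceleration(v=20.0, gap=1e6, dv=0.0) > 0)   # True
# Closing fast on a nearby leader: brake.
print(idm_acceleration(v=20.0, gap=10.0, dv=5.0) < 0)  # True
```

The model's fixed functional form is exactly the "restricted behavioral repertoire" the abstract contrasts with the AIDA: every behavior it can produce is a point in this small parameter space.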