Imitation learning is an approach for generating intelligent behavior when
the cost function is unknown or difficult to specify. Building upon work in
inverse reinforcement learning (IRL), Generative Adversarial Imitation Learning
(GAIL) aims to provide effective imitation even for problems with large or
continuous state and action spaces. Driver modeling is one example of a problem
where the state and action spaces are continuous. Human driving behavior is
characterized by non-linearity and stochasticity, and the underlying cost
function is unknown. As a result, learning from human driving demonstrations is
a promising approach for generating human-like driving behavior. This paper
describes the use of GAIL for learning-based driver modeling.
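To make the adversarial mechanism concrete, below is a minimal sketch of GAIL's surrogate reward in PyTorch. The names, sizes, and architecture are illustrative assumptions, not the paper's implementation: a discriminator is trained to separate expert state-action pairs from policy rollouts, and the policy is rewarded when its behavior is mistaken for the expert's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, ACTION_DIM = 51, 2  # illustrative sizes, not the paper's

class Discriminator(nn.Module):
    """Trained to assign high logits to expert (state, action) pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 128), nn.Tanh(),
            nn.Linear(128, 128), nn.Tanh(),
            nn.Linear(128, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1)).squeeze(-1)

def surrogate_reward(disc, state, action):
    """The unknown cost is replaced by a learned signal: with D = sigmoid(logit)
    the probability that a pair came from the expert, softplus(logit) equals
    -log(1 - D), which is large exactly when the policy's behavior is
    classified as expert-like."""
    with torch.no_grad():
        return F.softplus(disc(state, action))
```

The key point is that no hand-specified cost function appears anywhere: the reward signal is read off the learned discriminator.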
Because driver modeling is inherently a multi-agent problem in which the interactions between agents must be modeled, this paper describes a parameter-sharing extension of GAIL, called PS-GAIL, for multi-agent driver modeling.
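A minimal sketch of the parameter-sharing idea, again in PyTorch with hypothetical sizes: a single Gaussian policy network controls every vehicle in the scene, so the experience of all agents can be pooled into one update.

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, N_AGENTS = 51, 2, 100  # illustrative sizes

class SharedGaussianPolicy(nn.Module):
    """One set of weights drives every agent; only the inputs differ."""
    def __init__(self):
        super().__init__()
        self.mean = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.Tanh(),
            nn.Linear(128, ACTION_DIM),
        )
        self.log_std = nn.Parameter(torch.zeros(ACTION_DIM))

    def forward(self, states):
        # states: (n_agents, STATE_DIM), one row per vehicle in the scene
        return torch.distributions.Normal(self.mean(states), self.log_std.exp())

policy = SharedGaussianPolicy()
states = torch.randn(N_AGENTS, STATE_DIM)  # joint observation of the scene
actions = policy(states).sample()          # (N_AGENTS, ACTION_DIM)
# All agents' (state, action) pairs feed one discriminator and one policy
# update, so the number of learned parameters is independent of N_AGENTS.
```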
In addition, GAIL is domain-agnostic, making it difficult to encode knowledge specific to driving into the learning process. This paper describes Reward Augmented Imitation Learning (RAIL), which modifies the reward signal to provide domain-specific knowledge to the agent.
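The reward modification can be sketched as a simple combination of the discriminator's surrogate reward with hand-specified penalties. Events such as collisions or driving off the road are natural examples in this domain, but both the event set and the weights below are illustrative assumptions, not values from the paper.

```python
def augmented_reward(disc_reward: float,
                     collided: bool,
                     off_road: bool,
                     hard_brake: bool,
                     w_collision: float = 100.0,  # placeholder weights
                     w_off_road: float = 50.0,
                     w_hard_brake: float = 10.0) -> float:
    """RAIL-style reward: the learned surrogate reward is combined with
    hand-specified penalties that encode driving-domain knowledge."""
    penalty = (w_collision * collided
               + w_off_road * off_road
               + w_hard_brake * hard_brake)
    return disc_reward - penalty

# Example: an imitator that drifts off the road keeps its discriminator
# reward but pays the domain penalty.
r = augmented_reward(disc_reward=2.3, collided=False, off_road=True,
                     hard_brake=False)  # 2.3 - 50.0 = -47.7
```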
Finally, human demonstrations depend on latent factors that may not be captured by GAIL. This paper describes Burn-InfoGAIL, which disentangles the latent variability in demonstrations.
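A rough sketch of the conditioning involved, with hypothetical names and sizes: the policy takes a latent style code z alongside the state, and z is inferred from a burn-in prefix of a real demonstration before the imitator takes over, encouraging rollouts that remain consistent with the demonstrator's latent style.

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, LATENT_DIM = 51, 2, 4  # illustrative sizes

class LatentConditionedPolicy(nn.Module):
    """Actions depend on both the state and a latent style code z."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + LATENT_DIM, 128), nn.Tanh(),
            nn.Linear(128, ACTION_DIM),
        )

    def forward(self, state, z):
        return self.net(torch.cat([state, z], dim=-1))

class BurnInPosterior(nn.Module):
    """Approximates q(z | burn-in trajectory): reads the demonstrated prefix
    and produces the style code the policy is then conditioned on. Training
    it to recover z ties the code to observable variation in behavior."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(STATE_DIM + ACTION_DIM, 64, batch_first=True)
        self.head = nn.Linear(64, LATENT_DIM)

    def forward(self, burn_in):  # burn_in: (batch, T, STATE_DIM + ACTION_DIM)
        _, h = self.rnn(burn_in)
        return self.head(h[-1])
```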
Imitation learning experiments are performed using the Next Generation Simulation (NGSIM) dataset of real-world highway driving. Experiments show that these modifications
to GAIL can successfully model highway driving behavior, accurately replicating
human demonstrations and generating realistic, emergent behavior in the traffic
flow arising from the interaction between driving agents.