
    Imitating Driver Behavior with Generative Adversarial Networks

    The ability to accurately predict and simulate human driving behavior is critical for the development of intelligent transportation systems. Traditional modeling methods have employed simple parametric models and behavioral cloning. This paper adopts a method for overcoming the problem of cascading errors inherent in prior approaches, resulting in realistic behavior that is robust to trajectory perturbations. We extend Generative Adversarial Imitation Learning to the training of recurrent policies, and we demonstrate that our model outperforms rule-based controllers and maximum likelihood models in realistic highway simulations. Our model reproduces emergent behavior of human drivers, such as lane change rate, while maintaining realistic control over long time horizons.
    Comment: 8 pages, 6 figures
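The adversarial imitation loop at the heart of this approach can be sketched in a toy form: a discriminator learns to tell expert actions from policy actions, and the policy is rewarded for fooling it. The 1-D task, logistic discriminator, learning rates, and REINFORCE-style update below are all illustrative assumptions, not the paper's recurrent-policy implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D stand-in for adversarial imitation: the "expert" emits actions
# near +1, the policy starts near -1 and must learn to match the expert.
expert_mean, policy_mean = 1.0, -1.0
w, b = 0.0, 0.0  # logistic discriminator D(a) = sigmoid(w*a + b)

def discriminator(a, w, b):
    return 1.0 / (1.0 + np.exp(-(w * a + b)))

for _ in range(5000):
    expert_a = rng.normal(expert_mean, 0.5, size=256)
    policy_a = rng.normal(policy_mean, 0.5, size=256)

    # Discriminator: gradient ascent on log D(expert) + log(1 - D(policy)).
    de = discriminator(expert_a, w, b)
    dp = discriminator(policy_a, w, b)
    w += 0.05 * (np.mean((1 - de) * expert_a) - np.mean(dp * policy_a))
    b += 0.05 * (np.mean(1 - de) - np.mean(dp))

    # Policy: REINFORCE on the surrogate reward log D(a), which rewards
    # actions the discriminator mistakes for expert behavior.
    r = np.log(discriminator(policy_a, w, b) + 1e-8)
    policy_mean += 0.02 * np.mean((r - r.mean()) * (policy_a - policy_mean))
```

After training, the policy mean drifts from its initial value toward the expert's, without the policy ever seeing a supervised action target; this is the property that avoids the cascading errors of behavioral cloning.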

    A Learning-based Stochastic MPC Design for Cooperative Adaptive Cruise Control to Handle Interfering Vehicles

    Vehicle to Vehicle (V2V) communication has great potential to improve the reaction accuracy of driver assistance systems in critical driving situations. Cooperative Adaptive Cruise Control (CACC), an automated application, provides drivers with extra benefits such as traffic throughput maximization and collision avoidance. CACC systems must be designed in a way that is sufficiently robust against special maneuvers such as interfering vehicles cutting into the CACC platoon or hard braking by leading cars. To address this problem, a Neural-Network (NN)-based cut-in detection and trajectory prediction scheme is proposed in the first part of this paper. Next, a probabilistic framework is developed in which the cut-in probability is calculated based on the output of the mentioned cut-in prediction block. Finally, a specific Stochastic Model Predictive Controller (SMPC) is designed which incorporates this cut-in probability to enhance its reaction against the detected dangerous cut-in maneuver. The overall system is implemented and its performance is evaluated using realistic driving scenarios from the Safety Pilot Model Deployment (SPMD).
    Comment: 10 pages, Submitted as a journal paper at T-I
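The pipeline can be illustrated with a toy sketch: a stand-in for the NN predictor maps a neighboring vehicle's lateral state to a cut-in probability, and the controller tightens its following-gap constraint as that probability rises. Both functions below, including their names, coefficients, and the probability-weighted gap blend, are hypothetical illustrations rather than the paper's actual predictor or SMPC formulation.

```python
import math

def cutin_probability(lateral_offset, lateral_speed, lane_width=3.5):
    """Toy stand-in for the NN cut-in predictor: a logistic score that
    grows as a neighboring vehicle drifts toward the ego lane."""
    z = 4.0 * lateral_speed + 2.0 * (1.0 - abs(lateral_offset) / lane_width)
    return 1.0 / (1.0 + math.exp(-z))

def chance_constrained_gap(base_gap, p_cutin, worst_case_gap=40.0):
    """Blend nominal and worst-case following gaps by the cut-in
    probability, mimicking how a stochastic MPC tightens its safety
    constraint as the predicted risk rises."""
    return (1.0 - p_cutin) * base_gap + p_cutin * worst_case_gap
```

With `p_cutin = 0` the controller keeps its nominal gap, and as the predicted probability approaches 1 it converges to the conservative worst-case gap, so the reaction scales smoothly with risk instead of switching abruptly.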

    Driving with Style: Inverse Reinforcement Learning in General-Purpose Planning for Automated Driving

    Behavior and motion planning play an important role in automated driving. Traditionally, behavior planners instruct local motion planners with predefined behaviors. Due to the high scene complexity in urban environments, unpredictable situations may occur in which behavior planners fail to match predefined behavior templates. Recently, general-purpose planners have been introduced, combining behavior and local motion planning. These general-purpose planners allow behavior-aware motion planning given a single reward function. However, two challenges arise: First, this function has to map a complex feature space into rewards. Second, the reward function has to be manually tuned by an expert, which becomes a tedious task. In this paper, we propose an approach that relies on human driving demonstrations to automatically tune reward functions. This study offers important insights into the driving style optimization of general-purpose planners with maximum entropy inverse reinforcement learning. We evaluate our approach based on the expected value difference between learned and demonstrated policies. Furthermore, we compare the similarity of human-driven trajectories with optimal policies of our planner under learned and expert-tuned reward functions. Our experiments show that we are able to learn reward functions exceeding the level of manual expert tuning without prior domain knowledge.
    Comment: Appeared at IROS 2019. Accepted version. Added/updated footnote, minor correction in preliminaries
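The feature-matching idea behind maximum entropy inverse reinforcement learning can be sketched over a finite set of candidate trajectories: reward weights are updated until the soft-optimal trajectory distribution reproduces the expert's feature expectations. This is an illustrative sketch under a linear-reward assumption with a hypothetical feature matrix, not the authors' planner integration.

```python
import numpy as np

def maxent_irl(features, expert_idx, lr=0.1, iters=200):
    """Candidate-trajectory MaxEnt IRL with a linear reward
    r(tau) = theta . f(tau).

    features: (num_trajectories, num_features) feature matrix
    expert_idx: indices of the demonstrated trajectories
    """
    theta = np.zeros(features.shape[1])
    f_expert = features[expert_idx].mean(axis=0)  # expert feature expectation
    for _ in range(iters):
        # Soft-optimal distribution over trajectories: p(tau) ∝ exp(r(tau)).
        logits = features @ theta
        p = np.exp(logits - logits.max())
        p /= p.sum()
        f_model = p @ features  # model feature expectation
        # MaxEnt gradient: match expert and model feature expectations.
        theta += lr * (f_expert - f_model)
    return theta

# Hypothetical example: feature 0 is high for the demonstrated style.
features = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
theta = maxent_irl(features, expert_idx=[0, 1])
```

At convergence the learned weights concentrate the soft-optimal distribution on trajectories resembling the demonstrations, which is the mechanism that replaces manual expert tuning of the reward function.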