
    Combining Planning and Deep Reinforcement Learning in Tactical Decision Making for Autonomous Driving

    Tactical decision making for autonomous driving is challenging due to the diversity of environments, the uncertainty in the sensor information, and the complex interaction with other road users. This paper introduces a general framework for tactical decision making, which combines the concepts of planning and learning, in the form of Monte Carlo tree search and deep reinforcement learning. The method is based on the AlphaGo Zero algorithm, which is extended to a domain with a continuous state space where self-play cannot be used. The framework is applied to two different highway driving cases in a simulated environment and it is shown to perform better than a commonly used baseline method. The strength of combining planning and learning is also illustrated by a comparison to using the Monte Carlo tree search or the neural network policy separately.
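    A minimal sketch of the kind of neural-network-guided tree search this combines, in the spirit of AlphaGo Zero's PUCT selection rule: the policy network's priors bias exploration, while accumulated values from rollouts or value estimates drive exploitation. The action set, class, and priors below are illustrative stand-ins, not the paper's implementation.

    ```python
    import math

    # Illustrative discrete tactical actions for highway driving (not from the paper).
    ACTIONS = ["keep_lane", "change_left", "change_right", "brake", "accelerate"]

    class Node:
        """One search-tree node: visit counts N, accumulated values W, and priors P."""
        def __init__(self, priors):
            self.N = {a: 0 for a in ACTIONS}
            self.W = {a: 0.0 for a in ACTIONS}
            self.P = priors  # prior probabilities, e.g. from a policy network

    def puct_select(node, c_puct=1.5):
        """Pick the action maximizing Q(s, a) + U(s, a), as in the PUCT rule."""
        total_visits = sum(node.N.values())
        def score(a):
            q = node.W[a] / node.N[a] if node.N[a] > 0 else 0.0
            u = c_puct * node.P[a] * math.sqrt(total_visits + 1) / (1 + node.N[a])
            return q + u
        return max(ACTIONS, key=score)

    # Toy usage: uniform priors stand in for a trained policy network's output.
    root = Node({a: 1.0 / len(ACTIONS) for a in ACTIONS})
    print(puct_select(root))
    ```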

    Human Motion Trajectory Prediction: A Survey

    With growing numbers of intelligent autonomous systems in human environments, the ability of such systems to perceive, understand and anticipate human behavior becomes increasingly important. Specifically, predicting future positions of dynamic agents and planning considering such predictions are key tasks for self-driving vehicles, service robots and advanced surveillance systems. This paper provides a survey of human motion trajectory prediction. We review, analyze and structure a large selection of work from different communities and propose a taxonomy that categorizes existing methods based on the motion modeling approach and level of contextual information used. We provide an overview of the existing datasets and performance metrics. We discuss limitations of the state of the art and outline directions for further research. Comment: Submitted to the International Journal of Robotics Research (IJRR), 37 pages
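    A common baseline in this literature is the constant-velocity model, which simply extrapolates the most recent observed velocity over the prediction horizon. The sketch below is a minimal illustration of that baseline, not code from the survey; the function name and sampling interval are assumptions.

    ```python
    import numpy as np

    def constant_velocity_predict(track, horizon, dt=0.1):
        """Extrapolate a 2D track (N x 2 array of observed positions) 'horizon'
        steps ahead, assuming the last observed velocity stays constant."""
        velocity = (track[-1] - track[-2]) / dt           # finite-difference velocity estimate
        steps = np.arange(1, horizon + 1).reshape(-1, 1)  # 1, 2, ..., horizon
        return track[-1] + steps * velocity * dt          # predicted positions, horizon x 2

    # Toy usage: a pedestrian walking along x at 1.4 m/s, predicted 2 s ahead.
    observed = np.array([[0.00, 0.0], [0.14, 0.0], [0.28, 0.0]])
    print(constant_velocity_predict(observed, horizon=20))
    ```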

    LimSim: A Long-term Interactive Multi-scenario Traffic Simulator

    With the growing popularity of digital twins and autonomous driving in transportation, the demand for simulation systems capable of generating high-fidelity and reliable scenarios is increasing. Existing simulation systems suffer from a lack of support for different types of scenarios, and the vehicle models used in these systems are too simplistic. Thus, such systems fail to represent driving styles and multi-vehicle interactions, and struggle to handle corner cases in the dataset. In this paper, we propose LimSim, the Long-term Interactive Multi-scenario traffic Simulator, which aims to provide long-term continuous simulation capability on urban road networks. LimSim can simulate fine-grained dynamic scenarios and focuses on the diverse interactions between multiple vehicles in the traffic flow. This paper provides a detailed introduction to the framework and features of LimSim, and demonstrates its performance through case studies and experiments. LimSim is now open source on GitHub: https://www.github.com/PJLab-ADG/LimSim. Comment: Accepted by the 26th IEEE International Conference on Intelligent Transportation Systems (ITSC 2023)
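    LimSim's own vehicle and behavior models are documented in its repository; as a point of reference for what a less simplistic longitudinal vehicle model looks like, the sketch below implements the Intelligent Driver Model (IDM), a standard car-following model in microscopic traffic simulation. The parameter values are typical textbook defaults, not LimSim's.

    ```python
    import math

    def idm_acceleration(v, v_lead, gap,
                         v0=33.3, T=1.5, a_max=1.5, b=2.0, s0=2.0, delta=4):
        """Intelligent Driver Model: longitudinal acceleration of a follower given
        its speed v, the leader's speed v_lead, and the bumper-to-bumper gap (SI units).
        v0: desired speed, T: desired time headway, a_max: maximum acceleration,
        b: comfortable deceleration, s0: minimum standstill gap."""
        dv = v - v_lead                                   # closing speed
        s_star = s0 + max(0.0, v * T + v * dv / (2 * math.sqrt(a_max * b)))
        return a_max * (1 - (v / v0) ** delta - (s_star / gap) ** 2)

    # Toy usage: following a slower leader 30 m ahead at highway speed.
    print(idm_acceleration(v=25.0, v_lead=20.0, gap=30.0))
    ```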

    RITA: Boost Autonomous Driving Simulators with Realistic Interactive Traffic Flow

    High-quality traffic flow generation is the core module in building simulators for autonomous driving. However, the majority of available simulators are incapable of replicating traffic patterns that accurately reflect the various features of real-world data while also simulating human-like reactive responses to the tested autopilot driving strategies. Taking a step toward addressing this problem, we propose Realistic Interactive TrAffic flow (RITA) as an integrated component of existing driving simulators to provide high-quality traffic flow for the evaluation and optimization of the tested driving strategies. RITA is developed with three key features in mind, i.e., fidelity, diversity, and controllability, and consists of two core modules called RITABackend and RITAKit. RITABackend is built to support vehicle-wise control and provide traffic generation models from real-world datasets, while RITAKit is developed with easy-to-use interfaces for controllable traffic generation via RITABackend. We demonstrate RITA's capacity to create diversified and high-fidelity traffic simulations in several highly interactive highway scenarios. The experimental findings demonstrate that the traffic flows produced by RITA exhibit all three key features, hence enhancing the completeness of driving strategy evaluation. Moreover, we showcase the possibility of further improving baseline strategies through online fine-tuning with RITA traffic flows. Comment: 8 pages, 5 figures, 3 tables

    Deep Reinforcement Learning and Game Theoretic Monte Carlo Decision Process for Safe and Efficient Lane Change Maneuver and Speed Management

    Predicting the states of the surrounding traffic is one of the major problems in automated driving. Maneuvers such as lane change, merge, and exit management can pose challenges in the absence of intervehicular communication and can benefit from driver behavior prediction. Predicting the motion of surrounding vehicles and trajectory planning need to be computationally efficient for real-time implementation. This dissertation presents a decision process model for real-time automated lane change and speed management in highway and urban traffic. In lane change and merge maneuvers, it is important to know how neighboring vehicles will act in the imminent future. Human driver models, probabilistic approaches, rule-based techniques, and machine learning approaches have addressed this problem only partially, as they do not focus on the behavioral features of the vehicles. The main goal of this research is to develop a fast algorithm that predicts the future states of the neighboring vehicles, runs a fast decision process, and learns the regret and reward associated with the executed decisions. The presented algorithm is developed based on level-K game theory to model and predict the interaction between the vehicles. Using deep reinforcement learning, the algorithm encodes and memorizes past experiences, which are reused to reduce computation and speed up motion planning. Monte Carlo Tree Search (MCTS) is also employed as an effective tool for fast planning in complex and dynamic game environments. This development uses the available computation power efficiently and shows promising outcomes for maneuver planning and for predicting the environment's dynamics. In the absence of traffic connectivity, whether due to passengers' privacy preferences or the vehicle's lack of the required technology, this development can be extended and employed in automated vehicles for practical, real-world applications.
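    As a minimal illustration of the level-K idea (a level-0 driver follows a simple heuristic, and a level-k driver best-responds to opponents assumed to reason at level k-1), the sketch below uses a hypothetical action set and toy payoff function; none of it is taken from the dissertation.

    ```python
    # Illustrative discrete action set for a lane-change interaction (hypothetical).
    ACTIONS = ["keep_lane", "change_lane", "slow_down"]

    def payoff(my_action, other_action):
        """Toy payoff: a successful lane change gains the most, a conflicting one
        loses the most, and slowing down loses a little progress."""
        if my_action == "change_lane":
            return -10.0 if other_action == "change_lane" else 2.0
        if my_action == "slow_down":
            return -1.0
        return 1.0

    def level_k_action(k):
        """Level-0 drivers keep their lane; a level-k driver best-responds to an
        opponent assumed to reason at level k-1."""
        if k == 0:
            return "keep_lane"
        opponent = level_k_action(k - 1)
        return max(ACTIONS, key=lambda a: payoff(a, opponent))

    # A level-1 driver changes lane against a lane-keeping (level-0) opponent;
    # a level-2 driver, expecting that, keeps its own lane.
    print(level_k_action(1), level_k_action(2))
    ```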

    Optimal Weight Adaptation of Model Predictive Control for Connected and Automated Vehicles in Mixed Traffic with Bayesian Optimization

    In this paper, we develop an optimal weight adaptation strategy of model predictive control (MPC) for connected and automated vehicles (CAVs) in mixed traffic. We model the interaction between a CAV and a human-driven vehicle (HDV) as a simultaneous game and formulate a game-theoretic MPC problem to find a Nash equilibrium of the game. In the MPC problem, the weights in the HDV's objective function can be learned online using moving horizon inverse reinforcement learning. Using Bayesian optimization, we propose a strategy to optimally adapt the weights in the CAV's objective function so that the expected true cost when using MPC in simulations can be minimized. We validate the effectiveness of the optimal strategy by numerical simulations of a vehicle crossing example at an unsignalized intersection. Comment: accepted to ACC 202
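    A minimal sketch of the outer loop described here, assuming scikit-optimize's gp_minimize as the Bayesian-optimization engine: the CAV's MPC weights are tuned so that the closed-loop cost observed in simulation is minimized. The simulate_closed_loop_cost function is a hypothetical placeholder (a smooth toy function stands in for the game-theoretic MPC simulation), and the weight names and ranges are assumptions, not values from the paper.

    ```python
    from skopt import gp_minimize  # pip install scikit-optimize

    def simulate_closed_loop_cost(weights):
        """Hypothetical placeholder: run the game-theoretic MPC in simulation with
        the given CAV objective weights and return the observed true cost.
        A smooth toy function stands in for the simulator here."""
        w_progress, w_safety = weights
        return (w_progress - 2.0) ** 2 + (w_safety - 5.0) ** 2

    # Gaussian-process Bayesian optimization over the two illustrative weights.
    result = gp_minimize(
        simulate_closed_loop_cost,
        dimensions=[(0.1, 10.0), (0.1, 10.0)],  # search range for each weight
        n_calls=20,                             # simulation budget
        random_state=0,
    )
    print("best weights:", result.x, "estimated cost:", result.fun)
    ```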