
    From Specifications to Behavior: Maneuver Verification in a Semantic State Space

    To enable a market entry of autonomous vehicles in the foreseeable future, the behavior planning system will need to abide by the same rules that humans follow. Product liability cannot be enforced without a proper solution to the approval trap. In this paper, we define a semantic abstraction of the continuous space and formalize traffic rules in linear temporal logic (LTL). Sequences in the semantic state space represent maneuvers a high-level planner could choose to execute. We check these maneuvers against the formalized traffic rules using runtime verification. Using the standard model checker NuSMV, we demonstrate the effectiveness of our approach and provide runtime properties for the maneuver verification. We show that high-level behavior can be verified in a semantic state space to fulfill a set of formalized rules, which could serve as a step towards safety of the intended functionality. Comment: Published at IEEE Intelligent Vehicles Symposium (IV), 201
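    As a rough illustration of checking maneuvers against formalized rules, the sketch below evaluates an LTL-style "globally" property over a finite sequence of semantic states in plain Python. The predicate names and the example rule are assumptions made for illustration; the paper itself formalizes traffic rules in LTL and checks them with the NuSMV model checker rather than hand-rolled code.

        # Minimal sketch: runtime verification of a maneuver (a sequence of semantic
        # states) against an LTL-style safety rule on a finite trace.
        # Predicate names and the rule are illustrative assumptions, not the paper's
        # actual formalization, and NuSMV is not used here.
        from typing import Callable, Dict, List

        SemanticState = Dict[str, bool]   # truth values of semantic predicates
        Trace = List[SemanticState]       # a maneuver = sequence of semantic states

        def always(prop: Callable[[SemanticState], bool], trace: Trace) -> bool:
            """LTL 'G prop' evaluated on a finite trace."""
            return all(prop(state) for state in trace)

        def implies(a: bool, b: bool) -> bool:
            return (not a) or b

        # Example rule in the spirit of "only pass on the right in dense traffic".
        def rule_right_pass(state: SemanticState) -> bool:
            return implies(state.get("passing_on_right", False),
                           state.get("dense_traffic", False))

        maneuver: Trace = [
            {"passing_on_right": False, "dense_traffic": False},
            {"passing_on_right": True,  "dense_traffic": True},
            {"passing_on_right": False, "dense_traffic": True},
        ]

        print(always(rule_right_pass, maneuver))  # True: the maneuver satisfies the rule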

    Safety-driven Interactive Planning for Neural Network-based Lane Changing

    Neural network-based driving planners have shown great promise in improving task performance of autonomous driving. However, it is critical and yet very challenging to ensure the safety of systems with neural network-based components, especially in dense and highly interactive traffic environments. In this work, we propose a safety-driven interactive planning framework for neural network-based lane changing. To prevent overly conservative planning, we identify the driving behavior of surrounding vehicles and assess their aggressiveness, and then adapt the planned trajectory for the ego vehicle accordingly in an interactive manner. The ego vehicle can proceed to change lanes if a safe evasion trajectory exists even in the predicted worst case; otherwise, it can stay around the current lateral position or return to the original lane. We quantitatively demonstrate the effectiveness of our planner design and its advantage over baseline methods through extensive simulations with diverse and comprehensive experimental settings, as well as in real-world scenarios collected by an autonomous vehicle company.
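    A minimal sketch of the decision logic the abstract describes: assess how aggressively each surrounding vehicle is closing, predict a worst-case gap over a short horizon, and proceed with the lane change only if a safe margin remains; otherwise hold or abort. All function names, the aggressiveness heuristic, and the constants are assumptions for illustration; the actual planner wraps a neural network policy.

        # Hypothetical sketch of safety-driven lane-change gating (not the paper's code).
        from dataclasses import dataclass
        from typing import List

        @dataclass
        class Vehicle:
            gap_m: float           # longitudinal gap to the ego vehicle [m]
            closing_speed: float   # relative speed toward the ego vehicle [m/s]

        def aggressiveness(v: Vehicle) -> float:
            """Heuristic score in [0, 1]: closing fast over a small gap reads as aggressive."""
            return min(1.0, max(0.0, v.closing_speed / max(v.gap_m, 1.0)))

        def worst_case_gap(v: Vehicle, horizon_s: float, max_accel: float = 3.0) -> float:
            """Remaining gap if the vehicle keeps closing and accelerates hard over the horizon."""
            return v.gap_m - v.closing_speed * horizon_s - 0.5 * max_accel * horizon_s ** 2

        def lane_change_decision(surrounding: List[Vehicle],
                                 horizon_s: float = 2.0,
                                 safe_gap_m: float = 5.0) -> str:
            """Proceed only if a safe evasion margin remains even in the predicted worst case."""
            for v in surrounding:
                margin = worst_case_gap(v, horizon_s * (1.0 + aggressiveness(v)))
                if margin < safe_gap_m:
                    # No safe evasion trajectory: hold the current lateral position
                    # or return to the original lane.
                    return "hold_or_abort"
            return "proceed"

        print(lane_change_decision([Vehicle(gap_m=40.0, closing_speed=2.0)]))  # proceed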

    Combining Planning and Deep Reinforcement Learning in Tactical Decision Making for Autonomous Driving

    Tactical decision making for autonomous driving is challenging due to the diversity of environments, the uncertainty in the sensor information, and the complex interaction with other road users. This paper introduces a general framework for tactical decision making, which combines the concepts of planning and learning in the form of Monte Carlo tree search and deep reinforcement learning. The method is based on the AlphaGo Zero algorithm, which is extended to a domain with a continuous state space where self-play cannot be used. The framework is applied to two different highway driving cases in a simulated environment, where it is shown to perform better than a commonly used baseline method. The strength of combining planning and learning is also illustrated by a comparison to using Monte Carlo tree search or the neural network policy separately.
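    The sketch below illustrates the AlphaGo Zero-style combination the abstract describes: a policy network supplies action priors for PUCT selection, and a value network replaces rollouts at leaf nodes. The toy transition model, the network stub, and all constants are assumptions for illustration only; the paper trains the networks and evaluates them in simulated highway scenarios.

        # Hypothetical sketch of MCTS guided by a learned policy/value (not the paper's code).
        import math
        import random
        from typing import Dict, Tuple

        ACTIONS = ["keep_lane", "change_left", "change_right", "brake"]

        def network(state: Tuple[float, ...]) -> Tuple[Dict[str, float], float]:
            """Stub for the policy/value network: uniform priors and a neutral value."""
            return {a: 1.0 / len(ACTIONS) for a in ACTIONS}, 0.0

        def step(state: Tuple[float, ...], action: str) -> Tuple[Tuple[float, ...], float]:
            """Stub transition model with a small reward for staying in lane."""
            next_state = tuple(s + random.uniform(-0.1, 0.1) for s in state)
            return next_state, (0.1 if action == "keep_lane" else 0.0)

        class Node:
            def __init__(self, prior: float):
                self.prior, self.visits, self.value_sum = prior, 0, 0.0
                self.children: Dict[str, "Node"] = {}

            def value(self) -> float:
                return self.value_sum / self.visits if self.visits else 0.0

        def puct_select(node: Node, c_puct: float = 1.5) -> str:
            """Pick the child maximizing Q + U, as in AlphaGo Zero's PUCT rule."""
            total = sum(ch.visits for ch in node.children.values())
            def score(item):
                _, ch = item
                return ch.value() + c_puct * ch.prior * math.sqrt(total + 1) / (1 + ch.visits)
            return max(node.children.items(), key=score)[0]

        def simulate(node: Node, state, depth: int = 3) -> float:
            if not node.children:                       # expand leaf with network priors
                priors, value = network(state)
                node.children = {a: Node(p) for a, p in priors.items()}
                return value
            if depth == 0:
                return network(state)[1]
            action = puct_select(node)
            next_state, reward = step(state, action)
            value = reward + 0.95 * simulate(node.children[action], next_state, depth - 1)
            node.children[action].visits += 1           # back up the value estimate
            node.children[action].value_sum += value
            return value

        root, state = Node(1.0), (0.0, 0.0)
        for _ in range(200):
            simulate(root, state)
        print(max(root.children.items(), key=lambda kv: kv[1].visits)[0])  # most-visited action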