Limited Visibility and Uncertainty Aware Motion Planning for Automated Driving
Adverse weather conditions and occlusions in urban environments result in
impaired perception. These uncertainties are handled in different modules of an
automated vehicle, ranging from the sensor level through situation prediction to
motion planning. This paper focuses on motion planning given an uncertain
environment model with occlusions. We present a method to remain collision free
for the worst-case evolution of the given scene. We define criteria that
measure the available margins to a collision while considering visibility and
interactions, and consequently integrate conditions that apply these criteria
into an optimization-based motion planner. We show the generality of our method
by validating it in several distinct urban scenarios.
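The worst-case reasoning described in this abstract can be illustrated with a minimal sketch. This is not the paper's formulation; it assumes a single hypothetical hidden agent (a "phantom") placed at the visibility boundary, and all names and parameters below are illustrative:

```python
def stopping_distance(v: float, a_brake: float) -> float:
    """Distance needed to come to rest from speed v under braking a_brake > 0."""
    return v * v / (2.0 * a_brake)

def worst_case_safe(d_ego: float, v_ego: float, a_brake: float,
                    d_phantom: float, v_phantom_max: float) -> bool:
    """True if the ego plan stays collision-free against a hypothetical
    hidden agent placed at the visibility boundary, d_phantom away from
    the conflict point and moving toward it at its maximum speed."""
    # Safe if the ego can fully stop short of the conflict point ...
    can_stop = stopping_distance(v_ego, a_brake) < d_ego
    # ... or if it clears the conflict point before the phantom's
    # earliest possible arrival.
    t_ego_clear = d_ego / max(v_ego, 1e-6)
    t_phantom = d_phantom / v_phantom_max
    return can_stop or t_ego_clear < t_phantom
```

A criterion of this shape gives the "available margin to a collision": an optimization-based planner can turn the boolean into an inequality constraint (e.g., require a positive gap between the phantom's arrival time and the ego's clearing time).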
Motion Planning in Urban Environments: Part I
We present the motion planning framework for an autonomous vehicle navigating through urban environments. Such environments present a number of motion planning challenges, including ultra-reliability, high-speed operation, complex inter-vehicle interaction, parking in large unstructured lots, and constrained maneuvers. Our approach combines a model-predictive trajectory generation algorithm for computing dynamically-feasible actions with two higher-level planners for generating long-range plans in both on-road and unstructured areas of the environment. In this Part I of a two-part paper, we describe the underlying trajectory generator and the on-road planning component of this system. We provide examples and results from "Boss", an autonomous SUV that has driven itself over 3000 kilometers and competed in, and won, the Urban Challenge.
Connected Autonomous Vehicle Motion Planning with Video Predictions from Smart, Self-Supervised Infrastructure
Connected autonomous vehicles (CAVs) promise to enhance safety, efficiency,
and sustainability in urban transportation. However, this is contingent upon a
CAV correctly predicting the motion of surrounding agents and planning its own
motion safely. Doing so is challenging in complex urban environments due to
frequent occlusions and interactions among many agents. One solution is to
leverage smart infrastructure to augment a CAV's situational awareness; the
present work leverages a recently proposed "Self-Supervised Traffic Advisor"
(SSTA) framework of smart sensors that teach themselves to generate and
broadcast useful video predictions of road users. In this work, SSTA
predictions are modified to predict future occupancy instead of raw video,
which reduces the data footprint of broadcast predictions. The resulting
predictions are used within a planning framework, demonstrating that this
design can effectively aid CAV motion planning. A variety of numerical
experiments study the key factors that make SSTA outputs useful for practical
CAV planning in crowded urban environments.
Comment: 2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC)
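The footprint argument in this abstract is easy to make concrete: a binary occupancy grid can be bit-packed to one bit per cell, whereas a raw predicted frame carries one or more bytes per pixel. The sketch below is illustrative only (not the SSTA implementation); the frame size and threshold are arbitrary choices:

```python
import numpy as np

def to_occupancy(frame: np.ndarray, thresh: float = 0.5) -> np.ndarray:
    """Threshold a predicted grayscale frame (H, W) in [0, 1] to binary occupancy."""
    return (frame > thresh).astype(np.uint8)

def pack_occupancy(grid: np.ndarray) -> bytes:
    """Pack the binary grid to 1 bit per cell for broadcast."""
    return np.packbits(grid, axis=None).tobytes()

rng = np.random.default_rng(0)
frame = rng.random((64, 64)).astype(np.float32)  # stand-in for one predicted frame
grid = to_occupancy(frame)
payload = pack_occupancy(grid)
print(frame.nbytes, len(payload))  # 16384 bytes of float32 video vs 512 bytes of occupancy
```

Even against 8-bit video the packed occupancy payload is 8x smaller, which is the kind of reduction that makes broadcasting predictions from many roadside sensors practical.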
Driving with Style: Inverse Reinforcement Learning in General-Purpose Planning for Automated Driving
Behavior and motion planning play an important role in automated driving.
Traditionally, behavior planners instruct local motion planners with predefined
behaviors. Due to the high scene complexity in urban environments,
unpredictable situations may occur in which behavior planners fail to match
predefined behavior templates. Recently, general-purpose planners have been
introduced, combining behavior and local motion planning. These general-purpose
planners allow behavior-aware motion planning given a single reward function.
However, two challenges arise: first, this function has to map a complex
feature space into rewards; second, it has to be manually tuned by an expert,
which is a tedious task. In this paper, we propose an approach that relies on human driving
demonstrations to automatically tune reward functions. This study offers
important insights into the driving style optimization of general-purpose
planners with maximum entropy inverse reinforcement learning. We evaluate our
approach based on the expected value difference between learned and
demonstrated policies. Furthermore, we compare the similarity of human driven
trajectories with optimal policies of our planner under learned and
expert-tuned reward functions. Our experiments show that we are able to learn
reward functions exceeding the level of manual expert tuning without prior
domain knowledge.
Comment: Appeared at IROS 2019. Accepted version. Added/updated footnote, minor correction in preliminaries.
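The core of maximum entropy inverse reinforcement learning, as used in this abstract, is a simple gradient: the derivative of the demonstration log-likelihood under a linear reward is the expert feature expectation minus the feature expectation of the current soft-max trajectory distribution. The toy sketch below illustrates that update only; the candidate-trajectory set, features, and learning rate are all illustrative, not the paper's planner:

```python
import numpy as np

def maxent_irl_step(w, traj_features, expert_features, lr=0.1):
    """One gradient-ascent step on the MaxEnt IRL objective.
    traj_features:   (N, D) features of candidate trajectories
    expert_features: (D,)   mean features of demonstrated trajectories"""
    rewards = traj_features @ w
    p = np.exp(rewards - rewards.max())
    p /= p.sum()                           # soft-max trajectory distribution
    expected = p @ traj_features           # model feature expectation
    return w + lr * (expert_features - expected)

# Toy usage: two candidate trajectories, and an "expert" who always
# demonstrates the one with high feature 0.
feats = np.array([[1.0, 0.0], [0.0, 1.0]])
expert = np.array([1.0, 0.0])
w = np.zeros(2)
for _ in range(200):
    w = maxent_irl_step(w, feats, expert)
print(w)  # weight on feature 0 grows positive, feature 1 negative
```

With features such as acceleration, jerk, or distance to obstacles, the learned weights play the role of the expert-tuned reward parameters the abstract compares against.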
Dynamic Body VSLAM with Semantic Constraints
Image-based reconstruction of urban environments is a challenging problem
that involves optimizing a large number of variables and has several
sources of error, such as the presence of dynamic objects. Since most large-scale
approaches assume static scenes, dynamic objects are
relegated to the noise-modeling part of such systems. This is an approach of
convenience, since the RANSAC-based framework used to compute most multiview
geometric quantities for static scenes naturally confines dynamic objects to the
class of outlier measurements. However, reconstructing dynamic objects along
with the static environment helps us get a complete picture of an urban
environment. Such understanding can then be used for important robotic tasks
like path planning for autonomous navigation, obstacle tracking and avoidance,
and other areas. In this paper, we propose a system for robust SLAM that works
in both static and dynamic environments. To overcome the challenge of dynamic
objects in the scene, we propose a new model to incorporate semantic
constraints into the reconstruction algorithm. While some of these constraints
are based on multi-layered dense CRFs trained over appearance as well as motion
cues, other proposed constraints can be expressed as additional terms in the
bundle adjustment optimization process that does iterative refinement of 3D
structure and camera / object motion trajectories. We show results on the
challenging KITTI urban dataset for accuracy of motion segmentation and
reconstruction of the trajectory and shape of moving objects relative to ground
truth. We achieve a significant reduction in average relative error for
moving-object trajectory reconstruction compared to state-of-the-art
methods such as VISO 2, as well as standard bundle adjustment algorithms.
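The idea of semantic constraints entering bundle adjustment "as additional terms" can be sketched as a cost function: the usual reprojection residuals, plus extra residuals on semantically labeled points. The structure below is illustrative only (not the paper's formulation); the ground-plane constraint on "road"-labeled points is a hypothetical example of such a term:

```python
import numpy as np

def reprojection_residual(point_3d, camera, observation):
    """Pinhole reprojection error for one observation; camera = (K, R, t)."""
    K, R, t = camera
    p_img = K @ (R @ point_3d + t)
    return observation - p_img[:2] / p_img[2]

def semantic_residual(point_3d, plane_n, plane_d, weight=1.0):
    """Hypothetical soft constraint: points labeled 'road' should lie
    near the ground plane n.x + d = 0."""
    return weight * np.array([plane_n @ point_3d + plane_d])

def total_cost(points, cameras, observations, labels, plane_n, plane_d):
    """Sum of squared residuals that a BA solver would iteratively minimize."""
    cost = 0.0
    for X, cam, z, lbl in zip(points, cameras, observations, labels):
        cost += np.sum(reprojection_residual(X, cam, z) ** 2)
        if lbl == "road":
            cost += np.sum(semantic_residual(X, plane_n, plane_d) ** 2)
    return cost

# Toy check: a perfectly observed road point on the ground plane costs zero.
K, R, t = np.eye(3), np.eye(3), np.zeros(3)
X = np.array([0.0, 0.0, 1.0])        # y = 0 lies on plane n = [0, 1, 0], d = 0
z = np.array([0.0, 0.0])             # its exact pinhole projection
c = total_cost([X], [(K, R, t)], [z], ["road"], np.array([0.0, 1.0, 0.0]), 0.0)
print(c)  # 0.0
```

Because the semantic terms are just additional residuals, the same iterative refinement of 3D structure and camera/object trajectories minimizes them jointly with the reprojection error.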
Interaction-Aware Sampling-Based MPC with Learned Local Goal Predictions
Motion planning for autonomous robots in tight, interaction-rich, and mixed
human-robot environments is challenging. State-of-the-art methods typically
separate prediction and planning, predicting other agents' trajectories first
and then planning the ego agent's motion in the remaining free space. However,
agents' lack of awareness of their influence on others can lead to the freezing
robot problem. We build upon Interaction-Aware Model Predictive Path Integral
(IA-MPPI) control and combine it with learning-based trajectory predictions,
thereby relaxing its reliance on communicated short-term goals for other
agents. We apply this framework to Autonomous Surface Vessels (ASVs) navigating
urban canals. By generating an artificial dataset in real sections of
Amsterdam's canals, adapting and training a prediction model for our domain,
and proposing heuristics to extract local goals, we enable effective
cooperation in planning. Our approach improves autonomous robot navigation in
complex, crowded environments, with potential implications for multi-agent
systems and human-robot interaction.
Comment: Accepted for presentation at the 2023 IEEE International Symposium on Multi-Robot & Multi-Agent Systems
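The MPPI machinery this abstract builds on can be reduced to a few lines: sample control perturbations, score the resulting rollouts, and average the perturbations with exponentiated-cost weights. The sketch below shows only that bare update; IA-MPPI additionally rolls out and costs all agents jointly, and the toy cost, horizon, and hyperparameters here are illustrative:

```python
import numpy as np

def mppi_step(u_nominal, cost_fn, n_samples=256, sigma=0.5, lam=1.0, rng=None):
    """One MPPI update of a nominal control sequence u_nominal."""
    if rng is None:
        rng = np.random.default_rng(0)
    eps = rng.normal(0.0, sigma, size=(n_samples,) + u_nominal.shape)
    costs = np.array([cost_fn(u_nominal + e) for e in eps])
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()                              # importance weights
    return u_nominal + np.tensordot(w, eps, axes=1)

# Toy usage: a 5-step control sequence whose "rollout cost" simply prefers
# controls summing to a local goal displacement of 3.0 (a stand-in for
# reaching a predicted local goal).
goal = 3.0
cost = lambda u: (u.sum() - goal) ** 2
u = np.zeros(5)
for _ in range(30):
    u = mppi_step(u, cost)
print(round(u.sum(), 2))  # converges toward 3.0
```

In the interaction-aware variant, `cost_fn` evaluates a joint rollout of the ego vessel and surrounding agents, which is where the learned local goal predictions replace the communicated short-term goals.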
Path Planning based on 2D Object Bounding-box
The implementation of Autonomous Driving (AD) technologies within urban
environments presents significant challenges. These challenges necessitate the
development of advanced perception systems and motion planning algorithms
capable of managing situations of considerable complexity. Although
end-to-end AD methods utilizing LiDAR sensors have achieved significant success
in this scenario, we argue that their drawbacks may hinder practical
application. Instead, we propose vision-centric AD as a promising
alternative that offers a streamlined model without compromising performance. In
this study, we present a path planning method that utilizes 2D bounding boxes
of objects, developed through imitation learning in urban driving scenarios.
This is achieved by integrating high-definition (HD) map data with images
captured by surrounding cameras. Subsequent perception tasks involve
bounding-box detection and tracking, while the planning phase employs both
local embeddings via Graph Neural Network (GNN) and global embeddings via
Transformer for temporal-spatial feature aggregation, ultimately producing
optimal path planning information. We evaluated our model on the nuPlan
planning task and observed that it performs competitively in comparison to
existing vision-centric methods.