Dynamic Motion Planning for Aerial Surveillance on a Fixed-Wing UAV
We present an efficient path planning algorithm for an Unmanned Aerial
Vehicle surveying a cluttered urban landscape. A special emphasis is on
maximizing the area surveyed while adhering to the constraints of the UAV and
of a partially known, updating environment. A Voronoi bias is introduced in the
probabilistic roadmap building phase to identify certain critical milestones
for maximal surveillance of the search space. A kinematically feasible but
coarse tour connecting these milestones is generated by the global path
planner. A local path planner then generates smooth motion primitives between
consecutive nodes of the global path, modeling the UAV as a Dubins vehicle and
taking into account any impending obstacles. A Markov Decision Process (MDP)
models the control policy for the UAV and determines the optimal action to be
taken for evading obstacles in the vicinity with minimal deviation from the
current path. The efficacy of the proposed algorithm is evaluated in an
updating simulation environment with dynamic and static obstacles.
Comment: Accepted at International Conference on Unmanned Aircraft Systems
201
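The Voronoi bias described above can be sketched as follows: a uniform PRM sample is retracted onto (an approximation of) the Voronoi diagram of the obstacle set, so that milestones sit equidistant from their nearest obstacles and thus keep maximal clearance. This is a minimal illustrative sketch, not the paper's implementation; the function names and the midpoint retraction rule are assumptions.

```python
import math

def nearest_two(point, obstacles):
    """Two obstacle points closest to `point`."""
    return sorted(obstacles, key=lambda o: math.dist(point, o))[:2]

def retract_to_voronoi(sample, obstacles):
    """Retract a uniform PRM sample toward the Voronoi diagram of the
    obstacle set: the midpoint of the two nearest obstacles lies on
    their perpendicular bisector, i.e. equidistant from both, so the
    resulting milestone keeps clearance from nearby obstacles.
    (Illustrative rule, assumed -- not the paper's exact bias.)"""
    a, b = nearest_two(sample, obstacles)
    return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

obstacles = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
milestone = retract_to_voronoi((1.0, 0.1), obstacles)
print(milestone)  # (2.0, 0.0): equidistant from the two nearest obstacles
```

In a full planner these milestones would then be connected by the global tour before the Dubins-based local smoothing step.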
Obstacle-aware Adaptive Informative Path Planning for UAV-based Target Search
Target search with unmanned aerial vehicles (UAVs) is a problem relevant to
many scenarios, e.g., search and rescue (SaR). However, a key challenge is
planning paths for maximal search efficiency given flight time constraints. To
address this, we propose the Obstacle-aware Adaptive Informative Path Planning
(OA-IPP) algorithm for target search in cluttered environments using UAVs. Our
approach leverages a layered planning strategy using a Gaussian Process
(GP)-based model of target occupancy to generate informative paths in
continuous 3D space. Within this framework, we introduce an adaptive replanning
scheme which allows us to trade off between information gain, field coverage,
sensor performance, and collision avoidance for efficient target detection.
Extensive simulations show that our OA-IPP method performs better than
state-of-the-art planners, and we demonstrate its application in a realistic
urban SaR scenario.
Comment: Paper accepted for International Conference on Robotics and
Automation (ICRA-2019), to be held in Montreal, Canad
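The trade-off the abstract describes, between information gain, coverage, sensor performance, and collision avoidance, can be sketched as a simple waypoint utility: the information gain of observing a cell is taken as the entropy of a Bernoulli target-occupancy belief, penalized by proximity to obstacles and by travel cost. This is a hedged sketch under assumed weights (`safe_dist`, `lam`); the paper's actual GP-based objective is more involved.

```python
import math

def bernoulli_entropy(p):
    """Entropy (nats) of a Bernoulli target-occupancy belief."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

def waypoint_utility(p_target, dist_to_obstacle, flight_cost,
                     safe_dist=1.0, lam=0.5):
    """Score a candidate waypoint: information gain minus a collision
    penalty and a weighted travel cost. `safe_dist` and `lam` are
    illustrative weights, not values from the paper."""
    gain = bernoulli_entropy(p_target)
    collision_penalty = max(0.0, safe_dist - dist_to_obstacle)
    return gain - collision_penalty - lam * flight_cost

# An uncertain cell (p = 0.5) is worth visiting more than a
# near-certain one (p = 0.95) at equal cost and clearance.
print(waypoint_utility(0.5, 2.0, 0.2), waypoint_utility(0.95, 2.0, 0.2))
```

An adaptive replanner would re-score candidate waypoints like this each time the occupancy belief is updated with new sensor data.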
PRM-RL: Long-range Robotic Navigation Tasks by Combining Reinforcement Learning and Sampling-based Planning
We present PRM-RL, a hierarchical method for long-range navigation task
completion that combines sampling-based path planning with reinforcement
learning (RL). The RL agents learn short-range, point-to-point navigation
policies that capture robot dynamics and task constraints without knowledge of
the large-scale topology. Next, the sampling-based planners provide roadmaps
which connect robot configurations that can be successfully navigated by the RL
agent. The same RL agents are used to control the robot under the direction of
the planner, enabling long-range navigation. We use Probabilistic Roadmaps
(PRMs) as the sampling-based planner. The RL agents are constructed using
feature-based and deep neural net policies in continuous state and action
spaces. We evaluate PRM-RL, both in simulation and on-robot, on two navigation
tasks with non-trivial robot dynamics: end-to-end differential drive indoor
navigation in office environments, and aerial cargo delivery in urban
environments with load displacement constraints. Our results show improvement
in task completion over both RL agents on their own and traditional
sampling-based planners. In the indoor navigation task, PRM-RL successfully
completes up to 215 m long trajectories under noisy sensor conditions, and the
aerial cargo delivery completes flights over 1000 m without violating the task
constraints in an environment 63 million times larger than used in training.
Comment: 9 pages, 7 figure
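The PRM-RL construction above can be sketched in a few lines: roadmap edges are added only between configurations the RL policy is verified to navigate, and long-range routes are then found by graph search over that roadmap. The `rl_reachable` stub below (a fixed-range check) stands in for rolling out the learned short-range policy; it and the function names are illustrative assumptions, not the paper's API.

```python
import itertools
import math
from collections import deque

def build_prm_rl_roadmap(configs, rl_reachable):
    """Connect roadmap nodes only where the RL policy is verified to
    navigate between them: the planner supplies large-scale topology,
    the learned policy supplies local feasibility."""
    edges = {c: [] for c in configs}
    for a, b in itertools.combinations(configs, 2):
        if rl_reachable(a, b):
            edges[a].append(b)
            edges[b].append(a)
    return edges

def roadmap_route(edges, start, goal):
    """BFS over the roadmap; at execution time each hop would be
    flown by the same short-range RL agent."""
    frontier, parent = deque([start]), {start: None}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = [node]
            while parent[node] is not None:
                node = parent[node]
                path.append(node)
            return path[::-1]
        for nxt in edges[node]:
            if nxt not in parent:
                parent[nxt] = node
                frontier.append(nxt)
    return None  # goal not reachable through verified edges

configs = [(0.0, 0.0), (3.0, 0.0), (6.0, 0.0), (9.0, 0.0)]
rl_reachable = lambda a, b: math.dist(a, b) <= 3.0  # stub: policy handles short hops only
edges = build_prm_rl_roadmap(configs, rl_reachable)
route = roadmap_route(edges, configs[0], configs[-1])
print(route)  # [(0.0, 0.0), (3.0, 0.0), (6.0, 0.0), (9.0, 0.0)]
```

Because edges exist only where the policy succeeds, any route the search returns is, by construction, one the agent can execute hop by hop.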