
    Multi-agent pathfinding for unmanned aerial vehicles

    Unmanned aerial vehicles (UAVs), commonly known as drones, have become increasingly prevalent in recent years. In particular, governmental organizations and companies around the world are starting to research how UAVs can be used to perform tasks such as package delivery, disaster investigation and surveillance of key assets such as pipelines, railroads and bridges. NASA is currently in the early stages of developing an air traffic control system specifically designed to manage UAV operations in low-altitude airspace. Companies such as Amazon and Rakuten are testing large-scale drone delivery services in the USA and Japan. To perform these tasks, safe and conflict-free routes for concurrently operating UAVs must be found. This can be done using multi-agent pathfinding (MAPF) algorithms, although the correct choice of algorithm is not clear. This is because many state-of-the-art MAPF algorithms have only been tested in 2D space on maps with many obstacles, while UAVs operate in 3D space on open maps with few obstacles. In addition, when an unexpected event occurs in the airspace and UAVs are forced to deviate from their original routes while in flight, new conflict-free routes must be found. Planning for these unexpected events is commonly known as contingency planning. With manned aircraft, contingency plans can be created in advance or on a case-by-case basis while in flight. The scale at which UAVs operate, combined with the fact that unexpected events may occur anywhere at any time, makes both advance planning and case-by-case planning impossible. Thus, a new approach is needed. Online multi-agent pathfinding (online MAPF) looks to be a promising solution. Online MAPF utilizes traditional MAPF algorithms to perform path planning in real time; that is, new routes for UAVs are found while in flight. The primary contribution of this thesis is to present one possible approach to UAV contingency planning using online multi-agent pathfinding algorithms, which can be used as a baseline for future research and development. It also provides an in-depth overview and analysis of offline MAPF algorithms with the goal of determining which ones are likely to perform best when applied to UAVs. Finally, in pursuit of this same goal, several MAPF algorithms are experimentally tested and analyzed.
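
    To make the online replanning idea concrete, the following Python sketch shows one possible shape such a planner could take; it is not taken from the thesis. It assumes UAVs move on a discretized 3D grid and uses a simple prioritized space-time A* scheme to recompute conflict-free routes from the UAVs' current positions after an unexpected event. All names (space_time_astar, replan_online, the grid and reservation structures) are illustrative assumptions, not the author's method.

    import heapq

    # 27 moves in 3D: the 26 neighbouring cells plus (0, 0, 0), i.e. waiting in place.
    MOVES = [(dx, dy, dz) for dx in (-1, 0, 1)
             for dy in (-1, 0, 1) for dz in (-1, 0, 1)]

    def space_time_astar(start, goal, grid, reserved, t0=0, max_t=200):
        """Single-UAV path from start to goal avoiding static obstacles in
        `grid` (a dict cell -> blocked?) and the space-time cells already
        `reserved` by higher-priority UAVs."""
        def h(p):  # Manhattan-distance heuristic in 3D
            return sum(abs(a - b) for a, b in zip(p, goal))

        open_set = [(h(start), t0, start, [start])]
        visited = set()
        while open_set:
            f, t, pos, path = heapq.heappop(open_set)
            if pos == goal:
                return path
            if (pos, t) in visited or t - t0 > max_t:
                continue
            visited.add((pos, t))
            for d in MOVES:
                nxt = tuple(p + q for p, q in zip(pos, d))
                if nxt not in grid or grid[nxt]:        # outside map or obstacle
                    continue
                if (nxt, t + 1) in reserved:            # vertex conflict
                    continue
                if (nxt, t) in reserved and (pos, t + 1) in reserved:  # swap conflict
                    continue
                heapq.heappush(open_set,
                               (t + 1 - t0 + h(nxt), t + 1, nxt, path + [nxt]))
        return None  # no conflict-free route within the planning horizon

    def replan_online(uav_positions, goals, grid, t_now):
        """Prioritized online replanning: after an unexpected event at time
        t_now, recompute conflict-free routes from each UAV's current position,
        reserving the space-time cells of already-planned UAVs."""
        reserved, new_routes = set(), {}
        for uav_id, pos in uav_positions.items():       # priority = iteration order
            path = space_time_astar(pos, goals[uav_id], grid, reserved, t_now)
            new_routes[uav_id] = path                    # None if no route was found
            if path:
                for k, cell in enumerate(path):
                    reserved.add((cell, t_now + k))
        return new_routes

    A real deployment would use a richer map, kinematics and conflict model, but the key structural point of the online setting is visible here: routes are recomputed from the UAVs' current in-flight positions rather than from their original start locations.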

    Combining Subgoal Graphs with Reinforcement Learning to Build a Rational Pathfinder

    In this paper, we present a hierarchical path-planning framework called SG-RL (subgoal graphs-reinforcement learning) to plan rational paths for agents maneuvering in continuous and uncertain environments. By "rational", we mean (1) efficient path planning that eliminates first-move lags; (2) collision-free, smooth paths that satisfy the agents' kinematic constraints. SG-RL works in a two-level manner. At the first level, SG-RL uses a geometric path-planning method, i.e., Simple Subgoal Graphs (SSG), to efficiently find optimal abstract paths, also called subgoal sequences. At the second level, SG-RL uses an RL method, i.e., Least-Squares Policy Iteration (LSPI), to learn near-optimal motion-planning policies that can generate kinematically feasible and collision-free trajectories between adjacent subgoals. The first advantage of the proposed method is that SSG mitigates the sparse-reward and local-minima problems faced by RL agents, so LSPI can be used to generate paths in complex environments. The second advantage is that, when the environment changes slightly (e.g., unexpected obstacles appearing), SG-RL does not need to reconstruct subgoal graphs or replan subgoal sequences using SSG, since LSPI can deal with such uncertainties by exploiting its generalization ability to handle changes in the environment. Simulation experiments in representative scenarios demonstrate that, compared with existing methods, SG-RL works well on large-scale maps with relatively low action-switching frequencies and shorter path lengths, and that it can deal with small changes in environments. We further demonstrate that the design of reward functions and the types of training environments are important factors for learning feasible policies.
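
    As a minimal, hedged illustration of the two-level structure described above (not the authors' implementation), the Python sketch below assumes a subgoal planner at the top level and an LSPI-style greedy policy at the bottom level; plan_subgoals, step, features and the composite (state, subgoal) input are all illustrative assumptions.

    import numpy as np

    def lspi_greedy_policy(weights, features, actions):
        """Greedy policy induced by LSPI: choose the action with the largest
        linear Q-value estimate w^T phi(s, a)."""
        def act(state):
            q_values = [weights @ features(state, a) for a in actions]
            return actions[int(np.argmax(q_values))]
        return act

    def sg_rl_navigate(start, goal, plan_subgoals, step, policy,
                       reach_tol=0.5, max_steps=10_000):
        """Level 1: plan an abstract subgoal sequence once with an SSG-style
        planner. Level 2: let the learned policy drive the agent from subgoal
        to subgoal; small environment changes are absorbed by the policy's
        generalization rather than by replanning the subgoal sequence."""
        subgoals = plan_subgoals(start, goal)
        state, trajectory = start, [start]
        for sg in subgoals:
            for _ in range(max_steps):
                if np.linalg.norm(np.asarray(state) - np.asarray(sg)) < reach_tol:
                    break                      # subgoal reached, move to the next one
                action = policy((state, sg))   # policy conditioned on the next subgoal
                state = step(state, action)    # environment / kinematic transition
                trajectory.append(state)
        return trajectory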