283 research outputs found
Downwash-Aware Trajectory Planning for Large Quadrotor Teams
We describe a method for formation-change trajectory planning for large
quadrotor teams in obstacle-rich environments. Our method decomposes the
planning problem into two stages: a discrete planner operating on a graph
representation of the workspace, and a continuous refinement that converts the
non-smooth graph plan into a set of C^k-continuous trajectories, locally
optimizing an integral-squared-derivative cost. We account for the downwash
effect, allowing safe flight in dense formations. We demonstrate the method's
computational efficiency in simulation with up to 200 robots and its physical
plausibility in an experiment with 32 nano-quadrotors. Our approach can
compute safe and smooth trajectories for hundreds of quadrotors in dense
environments with obstacles in a few minutes.
Comment: 8 pages
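The downwash effect means a quadrotor needs far more vertical than horizontal clearance from its neighbors, which is commonly modeled with an axis-aligned ellipsoid separation constraint. A minimal sketch of such a check (the radii `rx`, `rz` are hypothetical values, not taken from the paper):

```python
import numpy as np

def ellipsoid_clear(p_i, p_j, rx=0.12, rz=0.3):
    """Downwash-aware separation test for two quadrotors at positions
    p_i, p_j. Radii are illustrative: rz > rx because downwash demands
    extra vertical clearance. Constraint: ||E^-1 (p_i - p_j)|| >= 1,
    where E = diag(rx, rx, rz)."""
    d = np.asarray(p_i, float) - np.asarray(p_j, float)
    scaled = d / np.array([rx, rx, rz])
    return float(np.dot(scaled, scaled)) >= 1.0

# The same 0.2 m offset is safe side by side but not stacked vertically:
print(ellipsoid_clear([0, 0, 0], [0.2, 0, 0]))  # horizontal: clear
print(ellipsoid_clear([0, 0, 0], [0, 0, 0.2]))  # vertical: too close
```

This kind of anisotropic constraint is what lets dense formations pack tightly in the horizontal plane while keeping safe vertical spacing.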
Decentralized non-communicating multiagent collision avoidance with deep reinforcement learning
Finding feasible, collision-free paths for multiagent systems can be challenging, particularly in non-communicating scenarios where each agent's intent (e.g. goal) is unobservable to the others. In particular, finding time-efficient paths often requires anticipating interaction with neighboring agents, a process that can be computationally prohibitive. This work presents a decentralized multiagent collision avoidance algorithm based on a novel application of deep reinforcement learning, which effectively offloads the online computation (for predicting interaction patterns) to an offline learning procedure. Specifically, the proposed approach develops a value network that encodes the estimated time to the goal given an agent's joint configuration (positions and velocities) with its neighbors. Use of the value network not only admits efficient (i.e., real-time implementable) queries for finding a collision-free velocity vector, but also considers the uncertainty in the other agents' motion. Simulation results show more than 26% improvement in path quality (i.e., time to reach the goal) when compared with optimal reciprocal collision avoidance (ORCA), a state-of-the-art collision avoidance strategy.
Ford Motor Company
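The online query the abstract describes, i.e. picking a collision-free velocity by scoring candidate next states with the learned value network, can be sketched as follows. The candidate sampling, the collision-pruning radius, and the stand-in `value_fn` are all illustrative assumptions, not the paper's actual network or policy:

```python
import numpy as np

def choose_velocity(pos, goal, others, value_fn, v_max=1.0, dt=0.2, r=0.3):
    """One decision step of a value-network policy sketch: sample candidate
    velocities, prune those whose one-step lookahead collides with a
    neighbor (radius r each), and keep the candidate whose next state the
    value function scores highest (higher value = shorter time to goal)."""
    best_v, best_val = np.zeros(2), -np.inf
    for ang in np.linspace(0, 2 * np.pi, 16, endpoint=False):
        for speed in (0.5 * v_max, v_max):
            v = speed * np.array([np.cos(ang), np.sin(ang)])
            nxt = pos + v * dt
            # Propagate each neighbor one step and check separation.
            if any(np.linalg.norm(nxt - (o_p + o_v * dt)) < 2 * r
                   for o_p, o_v in others):
                continue  # predicted collision, discard this candidate
            val = value_fn(nxt, goal)
            if val > best_val:
                best_val, best_v = val, v
    return best_v

# Stand-in for the learned network: value = negative distance to goal
# (a crude proxy for negative time-to-goal).
value_fn = lambda p, g: -np.linalg.norm(p - g)
v = choose_velocity(np.zeros(2), np.array([5.0, 0.0]),
                    [(np.array([1.0, 0.0]), np.zeros(2))], value_fn)
```

The offline/online split is the key design point: all the expensive interaction modeling is absorbed into training `value_fn`, so the runtime loop is just a handful of cheap evaluations.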
Formation Flight in Dense Environments
Formation flight has a vast potential for aerial robot swarms in various
applications. However, existing methods lack the capability to achieve fully
autonomous large-scale formation flight in dense environments. To bridge this
gap, we present a complete formation flight system that effectively integrates
real-world constraints into aerial formation navigation. This paper proposes a
differentiable graph-based metric to quantify the overall similarity error
between formations. This metric is invariant to rotation, translation, and
scaling, providing more freedom for formation coordination. We design a
distributed trajectory optimization framework that considers formation
similarity, obstacle avoidance, and dynamic feasibility. The optimization is
decoupled to make large-scale formation flights computationally feasible. To
improve the elasticity of formation navigation in highly constrained scenes, we
present a swarm reorganization method which adaptively adjusts the formation
parameters and task assignments by generating local navigation goals. A novel
swarm agreement strategy called global-remap-local-replan and a formation-level
path planner are proposed in this work to coordinate the swarm's global planning
and local trajectory optimizations efficiently. To validate the proposed
method, we design comprehensive benchmarks and simulations against other
cutting-edge works, evaluating adaptability, predictability, elasticity,
resilience, and efficiency. Finally, integrated with palm-sized swarm platforms
with onboard computers and sensors, the proposed method demonstrates its
efficiency and robustness by achieving the largest scale formation flight in
dense outdoor environments.
Comment: Submitted to IEEE Transactions on Robotics
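The key property of the similarity metric above is invariance to rotation, translation, and scaling. The paper's metric is graph-based and differentiable; as a simpler stand-in with the same invariances, one can use a classic Procrustes distance. A sketch (not the paper's metric, purely illustrative):

```python
import numpy as np

def formation_error(P, Q):
    """Illustrative similarity error between two formations P, Q
    (n x 2 arrays of agent positions). NOT the paper's graph-based
    metric, but a Procrustes distance sharing its key property:
    invariance to translation, rotation, and uniform scaling."""
    P = P - P.mean(axis=0)                   # remove translation
    Q = Q - Q.mean(axis=0)
    P = P / np.linalg.norm(P)                # remove scale (Frobenius norm)
    Q = Q / np.linalg.norm(Q)
    U, _, Vt = np.linalg.svd(Q.T @ P)        # best orthogonal alignment
    R = U @ Vt
    return np.linalg.norm(P - Q @ R)         # residual after alignment

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1.0]])
theta = 0.7
Rm = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
# A rotated, scaled, shifted copy of the same square scores ~0 error:
moved = 3.0 * square @ Rm.T + np.array([5.0, -2.0])
print(formation_error(square, moved))
```

Invariance to these transforms is what gives the optimizer "more freedom": the swarm can shrink, shift, or turn the whole formation to squeeze through obstacles without being penalized for it.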