
    Position and Orientation Based Formation Control of Multiple Rigid Bodies with Collision Avoidance and Connectivity Maintenance

    This paper addresses the problem of position- and orientation-based formation control of a class of second-order nonlinear multi-agent systems in a 3D workspace with obstacles. More specifically, we design a decentralized control protocol such that each agent achieves a predefined geometric formation with its initial neighbors, while using local information based on a limited sensing radius. The latter implies that the proposed scheme guarantees that initially connected agents always remain connected. In addition, by introducing certain distance constraints, we guarantee inter-agent collision avoidance as well as collision avoidance with the obstacles and the boundary of the workspace. The proposed controllers employ a novel class of potential functions and do not require a priori knowledge of the dynamical model, except for gravity-related terms. Finally, simulation results verify the validity of the proposed framework.
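    The abstract does not spell out the exact potential functions, so the sketch below is only a generic potential-field controller in that spirit: a formation-attraction term plus barrier-like gradients that grow as an edge approaches the sensing radius (connectivity maintenance) or the minimum safe distance (collision avoidance). The gains K_FORM and K_DAMP, the radii R_SENSE and D_COL, and the barrier shapes are illustrative assumptions, not the authors' construction.

```python
import numpy as np

# Illustrative constants -- not taken from the paper.
K_FORM, K_DAMP = 1.0, 2.0   # formation and damping gains
R_SENSE = 2.0               # limited sensing radius (edge is lost beyond this)
D_COL = 0.3                 # minimum allowed inter-agent distance

def formation_control(p_i, v_i, neighbors, desired_offsets, obstacles, eps=1e-3):
    """Potential-field style control input for one second-order agent (sketch).

    p_i, v_i        : 3D position and velocity of agent i (numpy arrays)
    neighbors       : positions of the initially connected neighbors
    desired_offsets : desired relative position p_i - p_j for each neighbor
    obstacles       : list of (center, radius) spheres in the workspace
    """
    u = -K_DAMP * v_i                                   # velocity damping
    for p_j, d_ij in zip(neighbors, desired_offsets):
        u += -K_FORM * ((p_i - p_j) - d_ij)             # formation attraction
        r_vec = p_i - p_j
        r = np.linalg.norm(r_vec)
        # Barrier gradients: push inward as r -> R_SENSE (connectivity
        # maintenance) and push outward as r -> D_COL (collision avoidance).
        u += -r_vec / r / max(R_SENSE - r, eps) ** 2
        u += r_vec / r / max(r - D_COL, eps) ** 2
    for c, rad in obstacles:                            # obstacle repulsion
        o_vec = p_i - c
        d = max(np.linalg.norm(o_vec) - rad, eps)
        u += o_vec / np.linalg.norm(o_vec) / d ** 2
    return u   # gravity-related terms would be compensated separately
```

    Each agent evaluates this law using only quantities within its sensing radius, which is what makes the scheme decentralized.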

    Towards Optimally Decentralized Multi-Robot Collision Avoidance via Deep Reinforcement Learning

    Developing a safe and efficient collision avoidance policy for multiple robots is challenging in decentralized scenarios, where each robot generates its paths without observing other robots' states and intents. While other distributed multi-robot collision avoidance systems exist, they often require extracting agent-level features to plan a local collision-free action, which can be computationally prohibitive and is not robust. More importantly, in practice the performance of these methods is much lower than that of their centralized counterparts. We present a decentralized sensor-level collision avoidance policy for multi-robot systems, which directly maps raw sensor measurements to an agent's steering commands in terms of movement velocity. As a first step toward reducing the performance gap between decentralized and centralized methods, we present a multi-scenario, multi-stage training framework to find an optimal policy, trained over a large number of robots in rich, complex environments simultaneously using a policy-gradient-based reinforcement learning algorithm. We validate the learned sensor-level collision avoidance policy in a variety of simulated scenarios with thorough performance evaluations and show that the final learned policy is able to find time-efficient, collision-free paths for a large-scale robot system. We also demonstrate that the learned policy generalizes well to scenarios that do not appear during training, including navigating a heterogeneous group of robots and a large-scale scenario with 100 robots. Videos are available at https://sites.google.com/view/drlmac
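    As a rough sketch of the described sensor-level policy, the network below maps stacked laser scans, the relative goal position, and the current velocity to a Gaussian over (linear, angular) velocity commands, suitable for training with a policy-gradient algorithm such as PPO. The PyTorch framing, layer sizes, and input dimensions are assumptions; the paper's exact architecture and training details may differ.

```python
import torch
import torch.nn as nn

class SensorLevelPolicy(nn.Module):
    """Maps raw laser scans plus goal and velocity to a velocity command (sketch)."""

    def __init__(self, scan_size=512, hidden=256):
        super().__init__()
        # 1D convolutions over the three most recent scans, a common choice
        # for sensor-level (rather than agent-level) policies.
        self.scan_net = nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():
            feat = self.scan_net(torch.zeros(1, 3, scan_size)).shape[1]
        self.head = nn.Sequential(
            nn.Linear(feat + 4, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),        # mean of (linear, angular) velocity
        )
        self.log_std = nn.Parameter(torch.zeros(2))

    def forward(self, scans, goal, vel):
        # scans: (B, 3, scan_size) stacked raw measurements
        # goal:  (B, 2) relative goal position; vel: (B, 2) current velocity
        z = torch.cat([self.scan_net(scans), goal, vel], dim=-1)
        mean = self.head(z)
        return torch.distributions.Normal(mean, self.log_std.exp())
```

    In the multi-scenario, multi-stage setup described above, many simulated robots would share one such policy and update it jointly from their collected trajectories, which is what lets training scale to large robot counts.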