
    Graph Convolutions For Teams Of Robots

    In many robotics applications, teams of robots operate in dynamic environments, which requires the design of complex communication and control schemes. The problem is made easier if one assumes the presence of an oracle that has instantaneous access to the states of all entities in the environment and can communicate with all of them simultaneously without any loss. However, such an assumption is unrealistic, especially when the number of robots is large. More specifically, we are interested in decentralized control policies for teams of robots that use only local communication and sensory information to achieve high-level team objectives. We first make the case for using distributed reinforcement learning to learn local behaviors by optimizing a sparse team-wide reward, as opposed to existing model-based methods. A central caveat of learning policies with model-free reinforcement learning is the lack of scalability. To achieve scalability to large teams, we introduce a novel paradigm in which the policies are parametrized by graph convolutions. Additionally, we develop new methodologies to train these policies and derive technical insights into their behaviors. Building upon these, we design perception-action loops for teams of robots that rely only on noisy visual sensors, a learned history state, and local information from nearby robots to achieve complex team-wide objectives. We demonstrate the effectiveness of our methods on several large-scale multi-robot tasks.
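
    The abstract does not include code, so the following is only a minimal sketch of how a decentralized policy parametrized by graph convolutions might be structured, assuming a PyTorch-style shared layer that aggregates features over one-hop neighbors of the communication graph; the class and variable names are hypothetical, not taken from the paper.

        import torch
        import torch.nn as nn

        class GraphConvPolicy(nn.Module):
            """Hypothetical decentralized policy: each robot aggregates features
            from its one-hop neighbors through a shared graph convolution and
            maps the result to a local action. Not the authors' implementation."""

            def __init__(self, obs_dim, hidden_dim, act_dim):
                super().__init__()
                self.encode = nn.Linear(obs_dim, hidden_dim)
                self.aggregate = nn.Linear(hidden_dim, hidden_dim)
                self.decode = nn.Linear(2 * hidden_dim, act_dim)

            def forward(self, obs, adj):
                # obs: (N, obs_dim) local observations of N robots
                # adj: (N, N) adjacency matrix of the communication graph
                h = torch.relu(self.encode(obs))
                # Degree normalization keeps message magnitude independent of team size.
                deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
                msg = torch.relu(self.aggregate(adj @ h / deg))
                # Each robot's action depends only on its own and its neighbors' features.
                return self.decode(torch.cat([h, msg], dim=-1))

        # Usage: 5 robots on a fully connected communication graph (no self-loops).
        policy = GraphConvPolicy(obs_dim=8, hidden_dim=32, act_dim=2)
        obs = torch.randn(5, 8)
        adj = torch.ones(5, 5) - torch.eye(5)
        actions = policy(obs, adj)   # (5, 2) local actions

    Because the same weights are shared across all nodes, such a policy can in principle be evaluated on teams of different sizes, which is the kind of scalability the abstract argues for.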

    GLAS: Global-to-Local Safe Autonomy Synthesis for Multi-Robot Motion Planning with End-to-End Learning

    We present GLAS: Global-to-Local Safe Autonomy Synthesis, an automated, provably safe, and distributed policy-generation method for multi-robot motion planning. Our approach combines the advantage of centralized planning, avoiding local minima, with the advantages of decentralized controllers, scalability and distributed computation. In particular, our synthesized policies only require relative state information of nearby neighbors and obstacles, and compute a provably safe action. Our approach has three major components: i) we generate demonstration trajectories using a global planner and extract local observations from them, ii) we use deep imitation learning to learn a decentralized policy that can run efficiently online, and iii) we introduce a novel differentiable safety module to ensure collision-free operation, thereby allowing for end-to-end policy training. Our numerical experiments demonstrate that our policies achieve a 20% higher success rate than optimal reciprocal collision avoidance (ORCA) across a wide range of robot and obstacle densities. We demonstrate our method on an aerial swarm, executing the policy on low-end microcontrollers in real time.
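
    The differentiable safety module is described only at a high level in the abstract; the sketch below shows one generic way such a filter could be written, blending the learned action with a repulsive term near obstacles so that gradients still flow end to end. It is an assumption-laden illustration, not the GLAS module itself, and all names and constants are hypothetical.

        import torch

        def safe_action(learned_action, rel_obstacles, r_safe=0.5, gain=2.0):
            """Illustrative differentiable safety filter (not the GLAS module):
            blend the learned action with a repulsion that grows as the nearest
            obstacle approaches the safety radius r_safe. Every operation is
            differentiable, so the filter can sit inside end-to-end training.

            learned_action: (2,) action proposed by the learned policy
            rel_obstacles:  (M, 2) obstacle positions relative to the robot
            """
            dists = rel_obstacles.norm(dim=1)                     # (M,)
            nearest = dists.argmin()
            d = dists[nearest]
            away = -rel_obstacles[nearest] / (d + 1e-6)           # direction away from obstacle
            weight = torch.sigmoid(gain * (r_safe - d) / r_safe)  # ~0 when far, ~1 when close
            return (1.0 - weight) * learned_action + weight * gain * away

        # Usage: the obstacle at (0.4, 0.1) lies inside the safety radius,
        # so the returned action is pushed away from it.
        a = torch.tensor([1.0, 0.0])
        obstacles = torch.tensor([[0.4, 0.1], [3.0, -2.0]])
        print(safe_action(a, obstacles))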

    Graph learning in robotics: a survey

    Deep neural networks for graphs have emerged as a powerful tool for learning on complex non-Euclidean data, which is becoming increasingly common across a variety of applications. Yet, although their potential has been widely recognised in the machine learning community, graph learning remains largely unexplored for downstream tasks such as robotics applications. Hence, to fully unlock their potential, we propose a review of graph neural architectures from a robotics perspective. The paper covers the fundamentals of graph-based models, including their architecture, training procedures, and applications. It also discusses recent advancements and challenges that arise in applied settings, related for example to the integration of perception, decision-making, and control. Finally, the paper provides an extensive review of robotic applications that benefit from learning on graph structures, such as body and contact modelling, robotic manipulation, action recognition, fleet motion planning, and many more. This survey aims to provide readers with a thorough understanding of the capabilities and limitations of graph neural architectures in robotics, and to highlight potential avenues for future research.
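
    As a concrete anchor for the survey's discussion of graph-based models, the short sketch below shows how a robotic scene might be converted into the node-feature and edge-index representation that graph neural architectures typically consume; the function name, radius value, and proximity rule are placeholders chosen for illustration, not taken from the paper.

        import torch

        def proximity_graph(positions, radius=1.5):
            """Build a simple scene graph: each robot/object is a node and two
            nodes are connected whenever they are closer than `radius`."""
            # positions: (N, 2) planar coordinates of the N entities in the scene
            diff = positions.unsqueeze(0) - positions.unsqueeze(1)  # (N, N, 2)
            dist = diff.norm(dim=-1)                                # (N, N)
            mask = (dist < radius) & (dist > 0)                     # exclude self-loops
            edge_index = mask.nonzero().t()                         # (2, num_edges)
            return positions, edge_index

        # Usage: three robots and one object; only nearby entities are connected.
        pos = torch.tensor([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0], [0.5, 1.0]])
        x, edge_index = proximity_graph(pos)
        print(edge_index)  # pairs of node indices that form the graph's edges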

    A Review of Deep Learning Methods and Applications for Unmanned Aerial Vehicles

    Deep learning has recently shown outstanding results for solving a wide variety of robotic tasks in the areas of perception, planning, localization, and control. Its excellent capabilities for learning representations from the complex data acquired in real environments make it extremely suitable for many kinds of autonomous robotic applications. In parallel, Unmanned Aerial Vehicles (UAVs) are being extensively applied to many types of civilian tasks, ranging from security, surveillance, and disaster rescue to parcel delivery and warehouse management. In this paper, a thorough review has been performed of recently reported uses and applications of deep learning for UAVs, including the most relevant developments as well as their performances and limitations. In addition, a detailed explanation of the main deep learning techniques is provided. We conclude with a description of the main challenges for the application of deep learning to UAV-based solutions.