
    Learning for Multi-robot Cooperation in Partially Observable Stochastic Environments with Macro-actions

    This paper presents a data-driven approach for multi-robot coordination in partially observable domains based on Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs) and macro-actions (MAs). Dec-POMDPs provide a general framework for cooperative sequential decision making under uncertainty, and MAs allow temporally extended and asynchronous action execution. To date, most methods assume the underlying Dec-POMDP model is known a priori or that a full simulator is available during planning. Previous methods that aim to address these issues suffer from local optimality and sensitivity to initial conditions. Additionally, few hardware demonstrations involving a large team of heterogeneous robots and long planning horizons exist. This work addresses these gaps by proposing an iterative sampling-based Expectation-Maximization algorithm (iSEM) to learn policies using only trajectory data containing observations, MAs, and rewards. Our experiments show the algorithm achieves better solution quality than state-of-the-art learning-based methods. We implement two variants of a multi-robot Search and Rescue (SAR) domain (with and without obstacles) on hardware to demonstrate that the learned policies can effectively control a team of distributed robots cooperating in a partially observable stochastic environment. Comment: Accepted to the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017).
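    As a rough illustration of learning from trajectory data alone, the sketch below shows a reward-weighted EM-style update of a tabular macro-action policy. The tabular representation, the return-based weighting, and all names are assumptions made for this example; it does not reproduce the paper's iSEM algorithm.

```python
# Illustrative sketch only: a reward-weighted EM-style update of a tabular
# policy P(macro-action | macro-observation) from trajectory data.
import numpy as np

def em_policy_update(trajectories, n_obs, n_macro_actions, n_iters=20):
    """trajectories: list of lists of (obs_id, ma_id, reward) tuples."""
    rng = np.random.default_rng(0)
    policy = rng.dirichlet(np.ones(n_macro_actions), size=n_obs)  # P(ma | obs)

    for _ in range(n_iters):
        # E-step: weight each trajectory by its return and by how likely the
        # current policy is to have generated it.
        weights, counts = [], np.zeros((n_obs, n_macro_actions))
        for traj in trajectories:
            ret = sum(r for _, _, r in traj)
            lik = np.prod([policy[o, a] for o, a, _ in traj])
            weights.append(max(ret, 0.0) * lik)
        weights = np.asarray(weights)
        if weights.sum() == 0:
            break
        weights /= weights.sum()

        # M-step: re-estimate macro-action probabilities from weighted counts.
        for w, traj in zip(weights, trajectories):
            for o, a, _ in traj:
                counts[o, a] += w
        policy = (counts + 1e-6) / (counts + 1e-6).sum(axis=1, keepdims=True)
    return policy
```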

    Task-driven multi-formation control for coordinated UAV/UGV ISR missions

    The report describes the development of a theoretical framework for the coordination and control of combined teams of UAVs and UGVs in coordinated ISR (intelligence, surveillance, and reconnaissance) missions. We consider the mission as a composition of an ordered sequence of subtasks, each to be performed by a different team. We design continuous cooperative controllers that enable each team to perform a given subtask, and we develop a discrete strategy for interleaving the actions of teams on different subtasks. The overall multi-agent coordination architecture is captured by a hybrid automaton, stability is studied using Lyapunov tools, and performance is evaluated through numerical simulations.
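    The interleaving idea can be pictured as a discrete supervisor that switches continuous team controllers when subtasks complete, in the spirit of a hybrid automaton. The sketch below is a minimal, assumed illustration; the controller, completion test, and data structures are hypothetical and are not the report's design or stability analysis.

```python
# Minimal sketch: a discrete supervisor interleaving subtasks over continuous
# team controllers (hybrid-automaton flavour). All interfaces are hypothetical.

def formation_controller(team_state, subtask):
    # Placeholder continuous law: drive the team centroid toward the subtask goal.
    gain = 0.5
    return [gain * (subtask["goal"][i] - x) for i, x in enumerate(team_state["centroid"])]

def subtask_done(team_state, subtask, tol=0.1):
    err = sum((g - x) ** 2 for g, x in zip(subtask["goal"], team_state["centroid"])) ** 0.5
    return err < tol

def run_mission(teams, subtasks, dt=0.1, max_steps=1000):
    """Discrete supervisor: advance each team through its ordered subtasks."""
    progress = {t: 0 for t in teams}          # index of the active subtask per team
    for _ in range(max_steps):
        for t, state in teams.items():
            if progress[t] >= len(subtasks[t]):
                continue                       # this team is finished
            task = subtasks[t][progress[t]]
            u = formation_controller(state, task)
            state["centroid"] = [x + dt * ui for x, ui in zip(state["centroid"], u)]
            if subtask_done(state, task):
                progress[t] += 1               # discrete transition to the next subtask
        if all(progress[t] >= len(subtasks[t]) for t in teams):
            break
    return progress
```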

    Aerial-Ground collaborative sensing: Third-Person view for teleoperation

    Rapid deployment and operation are key requirements in time-critical applications such as Search and Rescue (SaR). Efficiently teleoperated ground robots can support first responders in such situations. However, first-person-view teleoperation is sub-optimal in difficult terrain, while a third-person perspective can drastically increase teleoperation performance. Here, we propose a Micro Aerial Vehicle (MAV)-based system that can autonomously provide a third-person perspective to ground robots. While our approach is based on local visual servoing, it further leverages the global localization of several ground robots to seamlessly transfer between them in GPS-denied environments. Thereby, one MAV can support multiple ground robots on demand. Furthermore, our system enables different visual detection regimes, enhanced operability, and return-home functionality. We evaluate our system in real-world SaR scenarios. Comment: Accepted for publication in the 2018 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR).
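    As a rough sketch of how an MAV might hold a third-person viewpoint over a ground robot, the following computes an offset setpoint behind and above the robot and a proportional velocity and yaw command toward it. The offsets, gains, and simple position-based law are assumptions for illustration, not the paper's visual-servoing pipeline.

```python
# Sketch: keep the MAV behind and above the tracked ground robot and yaw the
# camera toward it. Offsets and gains are assumed values for illustration.
import math

def third_person_setpoint(ugv_xy, ugv_heading, back=2.0, height=1.5):
    """Desired MAV position: `back` metres behind the UGV, `height` metres up."""
    x = ugv_xy[0] - back * math.cos(ugv_heading)
    y = ugv_xy[1] - back * math.sin(ugv_heading)
    return (x, y, height)

def mav_velocity_command(mav_pose, ugv_xy, ugv_heading, k_pos=0.8, k_yaw=1.2):
    """mav_pose = (x, y, z, yaw); returns (vx, vy, vz, yaw_rate)."""
    sp = third_person_setpoint(ugv_xy, ugv_heading)
    vx, vy, vz = (k_pos * (sp[i] - mav_pose[i]) for i in range(3))
    # Yaw so the camera looks at the UGV.
    desired_yaw = math.atan2(ugv_xy[1] - mav_pose[1], ugv_xy[0] - mav_pose[0])
    yaw_err = math.atan2(math.sin(desired_yaw - mav_pose[3]),
                         math.cos(desired_yaw - mav_pose[3]))
    return vx, vy, vz, k_yaw * yaw_err
```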

    Coordinated motion of UGVs and a UAV

    Coordination of autonomous mobile robots has received significant attention during the last two decades. Coordinated motion of heterogeneous robot groups is particularly appealing because the unique advantages of different robots can be combined to increase the overall efficiency of the system. In this paper, a heterogeneous robot group composed of multiple Unmanned Ground Vehicles (UGVs) and an Unmanned Aerial Vehicle (UAV) collaborates to accomplish a predefined goal. The UGVs follow a virtual leader, defined as the projection of the UAV's position onto the horizontal plane. The UAV broadcasts its position at a certain frequency. The position of the virtual leader and the distances from the two closest neighbors are used to create linear and angular velocity references for each UGV. Several coordinated tasks are presented, and the results are verified by simulations in which a certain amount of communication delay between the vehicles is also considered. The results are quite promising.
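    A minimal sketch of the virtual-leader scheme described above might look like the following: the UAV position is projected onto the horizontal plane, and each UGV derives linear and angular velocity references from the range and bearing to that point plus a spacing term from its two closest neighbors. The gains and the exact control law are assumptions, not those derived in the paper.

```python
# Sketch of the virtual-leader idea for a unicycle-type UGV.
import math

def virtual_leader(uav_position):
    x, y, _z = uav_position                 # projection onto the horizontal plane
    return x, y

def ugv_velocity_refs(ugv_pose, uav_position, neighbour_xy,
                      k_v=0.6, k_w=1.5, d_safe=1.0, k_rep=0.4):
    """ugv_pose = (x, y, theta); neighbour_xy = positions of the two closest UGVs."""
    lx, ly = virtual_leader(uav_position)
    dx, dy = lx - ugv_pose[0], ly - ugv_pose[1]

    # Repulsion from the two closest neighbours to maintain spacing.
    for nx, ny in neighbour_xy[:2]:
        d = math.hypot(ugv_pose[0] - nx, ugv_pose[1] - ny)
        if 1e-6 < d < d_safe:
            dx += k_rep * (ugv_pose[0] - nx) / d
            dy += k_rep * (ugv_pose[1] - ny) / d

    rho = math.hypot(dx, dy)                            # distance to the leader
    bearing = math.atan2(dy, dx) - ugv_pose[2]          # heading error
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))
    v_ref = k_v * rho * math.cos(bearing)               # linear velocity reference
    w_ref = k_w * bearing                               # angular velocity reference
    return v_ref, w_ref
```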

    Fault-tolerant formation driving mechanism designed for heterogeneous MAVs-UGVs groups

    A fault-tolerant method for the stabilization and navigation of 3D heterogeneous formations is proposed in this paper. The presented Model Predictive Control (MPC) based approach enables the deployment of compact formations of closely cooperating autonomous aerial and ground robots in surveillance scenarios without the need for precise external localization. Instead, the proposed method relies on top-view visual relative localization provided by the micro aerial vehicles flying above the ground robots, and on a simple yet stable vision-based navigation using images from an onboard monocular camera. The MPC-based scheme, together with a fault detection and recovery mechanism, provides a robust solution applicable in complex environments with static and dynamic obstacles. The core of the proposed leader-follower formation driving method lies in representing the entire 3D formation as a convex hull projected along a desired path that has to be followed by the group. Such an approach provides collision-free solutions and respects the requirement of direct visibility between the team members. Uninterrupted visibility is crucial for the employed top-view localization and therefore for the stabilization of the group. The proposed formation driving method and the fault recovery mechanisms are verified by the simulations and hardware experiments presented in the paper.
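    To illustrate the flavour of a receding-horizon formation controller, the sketch below samples candidate velocity sequences for the formation reference, simulates them over a short horizon, penalizes path deviation and obstacle proximity, and applies only the first command. The cost terms and sampling scheme are assumptions for illustration and are not the paper's MPC formulation.

```python
# Illustrative receding-horizon (MPC-flavoured) loop with sampled candidates.
import numpy as np

def rollout(state, commands, dt=0.2):
    traj, s = [], np.array(state, dtype=float)
    for u in commands:
        s = s + dt * np.asarray(u)
        traj.append(s.copy())
    return np.array(traj)

def cost(traj, path_points, obstacles, d_min=0.8):
    # Deviation from the desired path plus a hard penalty for obstacle proximity.
    dev = sum(np.min(np.linalg.norm(path_points - p, axis=1)) for p in traj)
    clearance = min(np.min(np.linalg.norm(obstacles - p, axis=1)) for p in traj)
    return dev + (1e3 if clearance < d_min else 0.0)

def mpc_step(state, path_points, obstacles, horizon=5, n_samples=50, rng=None):
    rng = rng or np.random.default_rng(0)
    best_u, best_c = None, np.inf
    for _ in range(n_samples):
        seq = rng.uniform(-1.0, 1.0, size=(horizon, 2))    # candidate velocities
        c = cost(rollout(state, seq), path_points, obstacles)
        if c < best_c:
            best_c, best_u = c, seq[0]
    return best_u                                           # apply the first command only
```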

    Epistemic Planning for Heterogeneous Robotic Systems

    In applications such as search and rescue or disaster relief, heterogeneous multi-robot systems (MRS) can provide significant advantages for complex objectives that require a suite of capabilities. However, within these application spaces, communication is often unreliable, causing inefficiencies or outright failures in most MRS algorithms. Many researchers tackle this problem either by requiring all robots to maintain communication using proximity constraints or by assuming that all robots will execute a predetermined plan over long periods of disconnection. The latter approach allows for higher levels of efficiency in an MRS, but failures and environmental uncertainties can have cascading effects across the system, especially when a mission objective is complex or time-sensitive. To address this, we propose an epistemic planning framework that allows robots to reason about the system state, leverage heterogeneous system makeups, and optimize information dissemination to disconnected neighbors. Dynamic epistemic logic formalizes the propagation of belief states, and epistemic task allocation and gossip are accomplished via a mixed integer program that uses the belief states for utility prediction and planning. The proposed framework is validated using simulations and experiments with heterogeneous vehicles.
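    The following sketch conveys the general flavour of such epistemic reasoning during disconnection: beliefs about teammates' progress decay while out of contact, and tasks are allocated using belief-discounted utilities. The data structures, the decay model, and the use of a simple assignment solver in place of the paper's mixed integer program are all assumptions for illustration.

```python
# Sketch: belief propagation while disconnected plus belief-aware task allocation.
import numpy as np
from scipy.optimize import linear_sum_assignment

def propagate_belief(belief, steps, decay=0.9):
    """belief[robot][task] = probability the robot has completed the task.
    Confidence in stale information decays while disconnected."""
    return {r: {t: p * decay ** steps for t, p in tasks.items()}
            for r, tasks in belief.items()}

def allocate(robots, tasks, utility, belief):
    """Assign tasks, discounting a task's utility by the belief that a
    disconnected teammate has already completed it."""
    cost = np.zeros((len(robots), len(tasks)))
    for i, r in enumerate(robots):
        for j, t in enumerate(tasks):
            already_done = max(b.get(t, 0.0) for b in belief.values())
            cost[i, j] = -utility[r][t] * (1.0 - already_done)
    rows, cols = linear_sum_assignment(cost)
    return {robots[i]: tasks[j] for i, j in zip(rows, cols)}
```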

    Cooperative Air and Ground Surveillance

    Unmanned aerial vehicles (UAVs) can be used to cover large areas while searching for targets. However, sensors on UAVs are typically limited in the accuracy with which they can localize targets on the ground. On the other hand, unmanned ground vehicles (UGVs) can be deployed to accurately locate ground targets, but they have the disadvantage of not being able to move rapidly or to see past obstacles such as buildings or fences. In this article, we describe how we can exploit this synergy by creating a seamless network of UAVs and UGVs. The keys to this are our framework and algorithms for search and localization, which scale easily to large numbers of UAVs and UGVs and are agnostic to the specifics of individual platforms. We describe our experimental testbed, the framework and algorithms, and some results.
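    The complementarity described above can be illustrated by fusing a coarse UAV target fix with an accurate UGV measurement as two Gaussian estimates. The information-filter combination below is a standard construction; the noise values and variable names are assumptions rather than the article's specific algorithms.

```python
# Sketch: fuse a coarse UAV detection with an accurate UGV observation of the
# same target via a standard information-filter combination of Gaussians.
import numpy as np

def fuse(mean_a, cov_a, mean_b, cov_b):
    """Fuse two independent Gaussian position estimates of the same target."""
    info_a, info_b = np.linalg.inv(cov_a), np.linalg.inv(cov_b)
    cov = np.linalg.inv(info_a + info_b)
    mean = cov @ (info_a @ mean_a + info_b @ mean_b)
    return mean, cov

# Example: coarse UAV fix (metres of uncertainty) refined by a UGV observation.
uav_mean, uav_cov = np.array([10.0, 4.0]), np.diag([9.0, 9.0])
ugv_mean, ugv_cov = np.array([10.6, 3.7]), np.diag([0.25, 0.25])
fused_mean, fused_cov = fuse(uav_mean, uav_cov, ugv_mean, ugv_cov)
```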

    Navigation, localization and stabilization of formations of unmanned aerial and ground vehicles

    A leader-follower formation driving algorithm developed for the control of heterogeneous groups of unmanned micro aerial and ground vehicles, stabilized under top-view relative localization, is presented in this paper. The core of the proposed method lies in a novel avoidance function in which the entire 3D formation is represented by a convex hull projected along a desired path to be followed by the group. Such a representation of the formation provides collision-free trajectories for the robots and respects the requirement of direct visibility between the team members in environments with static as well as dynamic obstacles, which is crucial for the top-view localization. The algorithm is suited to a simple yet stable vision-based navigation of the group (referred to as GeNav), which, together with the onboard relative localization, enables the deployment of large teams of micro-scale robots in environments without any available global localization system. We formulate a novel Model Predictive Control (MPC) based concept that enables the formation to respond to the changing environment and provides a robust solution that tolerates failures of team members. The performance of the proposed method is verified by numerical and hardware experiments inspired by reconnaissance and surveillance missions.
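    As an illustration of the convex-hull representation, the sketch below sweeps the formation's hull along samples of the desired path and checks that no obstacle point intrudes. The path sampling, the facet-inequality inside test, and the margin are assumptions made for this example; a full avoidance function would also encode the visibility constraints between team members.

```python
# Sketch: sweep the formation's convex hull along a path and test obstacle intrusion.
import numpy as np
from scipy.spatial import ConvexHull

def swept_hull_collision_free(formation_offsets, path_samples, obstacles, margin=0.2):
    """formation_offsets: robot positions relative to the formation reference (N x 2).
    path_samples: reference positions along the desired path (M x 2).
    obstacles: obstacle points (K x 2)."""
    for ref in path_samples:
        pts = np.asarray(formation_offsets) + np.asarray(ref)
        hull = ConvexHull(pts)
        # A point is inside the hull (with margin) if it satisfies every facet
        # inequality A @ x + b <= 0.
        A, b = hull.equations[:, :-1], hull.equations[:, -1]
        for obs in np.asarray(obstacles):
            if np.all(A @ obs + b <= margin):
                return False          # an obstacle intrudes into the swept formation
    return True
```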