
    Multi-agent Time-based Decision-making for the Search and Action Problem

    Many robotic applications, such as search-and-rescue, require multiple agents to search for and perform actions on targets. However, such missions present several challenges, including cooperative exploration, task selection and allocation, time limitations, and computational complexity. To address this, we propose a decentralized multi-agent decision-making framework for the search and action problem with time constraints. The main idea is to treat time as an allocated budget in a setting where each agent action incurs a time cost and yields a certain reward. Our approach leverages probabilistic reasoning to make near-optimal decisions leading to maximized reward. We evaluate our method in the search, pick, and place scenario of the Mohamed Bin Zayed International Robotics Challenge (MBZIRC), by using a probability density map and a reward prediction function to assess actions. Extensive simulations show that our algorithm outperforms benchmark strategies, and we demonstrate system integration in a Gazebo-based environment, validating the framework's readiness for field application. Comment: 8 pages, 7 figures, submitted to the 2017 International Conference on Robotics & Automation.
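
    The "time as an allocated budget" idea can be illustrated with a greedy budget planner. This is a hedged sketch only: the function name, the (name, time_cost, reward) action tuples, and the reward-per-second greedy rule are illustrative assumptions, not the paper's actual probabilistic algorithm.

```python
def plan_actions(actions, budget):
    """Pick actions by reward-per-unit-time until the time budget is spent.

    actions: list of (name, time_cost, reward) tuples (illustrative layout).
    Returns (chosen action names, total time spent).
    """
    chosen, elapsed = [], 0.0
    # Rank actions by how much reward they yield per unit of time spent.
    for name, cost, reward in sorted(actions, key=lambda a: a[2] / a[1], reverse=True):
        if elapsed + cost <= budget:
            chosen.append(name)
            elapsed += cost
    return chosen, elapsed
```

    For example, with a budget of 7 time units and actions ("pick", 2, 10), ("search", 5, 6), ("place", 4, 12), the planner selects "pick" and "place" and leaves "search" unscheduled.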

    Exploiting Heterogeneous Robotic Systems in Cooperative Missions

    In this paper, we consider the problem of coordinating robotic systems with different kinematics, sensing, and vision capabilities to achieve certain mission goals. An approach that makes use of a heterogeneous team of agents has several advantages when cost, integration of capabilities, or large search areas need to be considered. A heterogeneous team allows the robots to become "specialized", accomplish sub-goals more effectively, and thus increase the overall mission efficiency. Two main scenarios are considered in this work. In the first case study, we exploit mobility to implement a power control algorithm that increases the Signal to Interference plus Noise Ratio (SINR) among certain members of the network. We model realistic sensing and manipulation by using the geometric properties of the sensor field-of-view and the manipulability metric, respectively. The control strategy for each agent of the heterogeneous system is governed by an artificial physics law that accounts for the different kinematics of the agents and the environment, in a decentralized fashion. Through simulation results, we show that the network stays connected at all times and covers the environment well. The second scenario studied in this paper is the biologically inspired coordination of heterogeneous physical robotic systems. A team of ground rovers, designed to emulate desert seed-harvester ants, explores an experimental area using behaviors fine-tuned in simulation by a genetic algorithm. Our robots coordinate with a base station and collect clusters of resources scattered within the experimental space. We demonstrate experimentally that, through coordination with an aerial vehicle, our ant-like ground robots collect resources twice as fast as without heterogeneous coordination.
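
    As a minimal sketch of the quantity the first case study optimizes, the SINR at one receiver can be computed as signal power scaled by channel gain, divided by noise plus the summed interference. The function signature and default noise floor below are illustrative assumptions, not the paper's model.

```python
def sinr(signal_power, channel_gain, interference_powers, noise=1e-3):
    """Signal to Interference plus Noise Ratio for a single receiver."""
    # Higher transmit power or gain raises SINR; interference lowers it.
    return signal_power * channel_gain / (noise + sum(interference_powers))
```

    A mobility-based power control scheme like the one described would move or retune agents so that this ratio rises for the links that matter.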

    A Formal Framework for Mobile Robot Patrolling in Arbitrary Environments with Adversaries

    Using mobile robots for autonomous patrolling of environments to prevent intrusions is a topic of increasing practical relevance. One of the most challenging scientific issues is the problem of finding effective patrolling strategies that, at each time point, determine the next moves of the patrollers in order to maximize some objective function. In recent years, this problem has been addressed in a game-theoretic fashion, explicitly considering the presence of an adversarial intruder. The general idea is to model a patrolling situation as a game, played by the patrollers and the intruder, and to study the equilibria of this game to derive effective patrolling strategies. In this paper, we present a game-theoretic formal framework for the determination of effective patrolling strategies that extends previous proposals in the literature by considering environments with arbitrary topology and arbitrary preferences for the agents. The main original contributions of this paper are the formulation of the patrolling game for generic graph environments, an algorithm for finding a deterministic equilibrium strategy, which is a fixed path through the vertices of the graph, and an algorithm for finding a non-deterministic equilibrium strategy, which is a set of probabilities for moving between adjacent vertices of the graph. Both algorithms are analytically studied and experimentally validated to assess their properties and efficiency.
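
    A non-deterministic strategy of the kind described, a set of probabilities for moving between adjacent vertices, can be executed by sampling the next vertex from the current vertex's move distribution. The data layout here (a dict mapping each vertex to (neighbor, probability) pairs) is an illustrative assumption.

```python
import random

def next_vertex(current, move_probs, rng):
    """Sample the patroller's next vertex from its move distribution.

    move_probs: dict vertex -> list of (neighbor, probability), summing to 1.
    """
    r, acc = rng.random(), 0.0
    for neighbor, p in move_probs[current]:
        acc += p
        if r < acc:
            return neighbor
    return move_probs[current][-1][0]  # guard against floating-point rounding
```

    A deterministic equilibrium strategy is the degenerate case where each vertex's distribution puts probability 1 on a single successor, recovering a fixed patrol path.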

    Learning of Coordination Policies for Robotic Swarms

    Inspired by biological swarms, robotic swarms are envisioned to solve real-world problems that are difficult for individual agents. Biological swarms can achieve collective intelligence based on local interactions and simple rules; however, designing effective distributed policies for large-scale robotic swarms to achieve a global objective can be challenging. Although it is often possible to design an optimal centralized strategy for smaller numbers of agents, those methods can fail as the number of agents increases. Motivated by the growing success of machine learning, we develop a deep learning approach that learns distributed coordination policies from centralized policies. In contrast to traditional distributed control approaches, which are usually based on human-designed policies for relatively simple tasks, this learning-based approach can be adapted to more difficult tasks. We demonstrate the efficacy of our proposed approach on two different tasks: the well-known rendezvous problem and a more difficult particle assignment problem, for which no known distributed policy exists. Extensive simulations show that the performance of the learned coordination policies is comparable to that of the centralized policies, surpassing state-of-the-art distributed policies. Our approach thereby provides a promising alternative for real-world coordination problems that would otherwise be computationally expensive to solve or intractable to explore. Comment: 8 pages, 11 figures, submitted to the 2018 IEEE International Conference on Robotics and Automation.
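
    To make the rendezvous setting concrete, here is a toy centralized "expert": each agent steps toward the global centroid, which a distributed policy with only local observations would try to imitate. The gain and the spread measure are illustrative assumptions; the paper learns its distributed policies with deep learning rather than hand-coding them.

```python
def centralized_step(positions, gain=0.2):
    """Centralized rendezvous expert: move every agent toward the centroid."""
    cx = sum(x for x, _ in positions) / len(positions)
    cy = sum(y for _, y in positions) / len(positions)
    return [(x + gain * (cx - x), y + gain * (cy - y)) for x, y in positions]

def spread(positions):
    """Sum of squared distances to the centroid; zero means rendezvous."""
    cx = sum(x for x, _ in positions) / len(positions)
    cy = sum(y for _, y in positions) / len(positions)
    return sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in positions)
```

    Repeatedly applying the expert step shrinks the spread geometrically; a learned distributed policy would aim to reproduce this contraction using only each agent's neighborhood.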

    Area Protection in Adversarial Path-Finding Scenarios with Multiple Mobile Agents on Graphs: A Theoretical and Experimental Study of Target-Allocation Strategies for Defense Coordination

    We address a problem of area protection in graph-based scenarios with multiple agents. The problem consists of two adversarial teams of agents that move in an undirected graph shared by both teams. Agents are placed in vertices of the graph; at most one agent can occupy a vertex; and agents can move into adjacent vertices in a conflict-free way. The teams have asymmetric goals: the aim of one team, the attackers, is to invade a given area, while the aim of the opposing team, the defenders, is to protect the area from being entered by attackers by occupying selected vertices. We study strategies for allocating vertices to be occupied by the team of defenders to block attacking agents. We show that the decision version of the area protection problem is PSPACE-hard under the assumption that agents can allocate their target vertices multiple times. Further, we develop various on-line vertex-allocation strategies for the defender team in a simplified variant of the problem with single-stage vertex allocation, and we evaluate their performance in multiple benchmarks. The success of a strategy depends heavily on the type of instance, so one contribution of this work is identifying suitable vertex-allocation strategies for diverse instance types. In particular, we introduce a simulation-based method that identifies and tries to capture bottlenecks in the graph that are frequently used by the attackers. Our experimental evaluation suggests that this method often allows a successful defense even in instances where the attackers significantly outnumber the defenders.
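
    The bottleneck idea can be sketched as follows: count how often each interior vertex lies on attackers' shortest paths into the protected area, then allocate defenders to the most frequented vertices. The function names and the single-shortest-path simplification are assumptions; the paper's method is simulation-based rather than this one-shot count.

```python
from collections import Counter, deque

def shortest_path(graph, start, goal):
    """BFS shortest path in an unweighted graph (adjacency-list dict)."""
    prev, queue = {start: None}, deque([start])
    while queue:
        v = queue.popleft()
        if v == goal:
            path = []
            while v is not None:
                path.append(v)
                v = prev[v]
            return path[::-1]
        for n in graph[v]:
            if n not in prev:
                prev[n] = v
                queue.append(n)
    return None

def bottleneck_vertices(graph, attackers, area, k=1):
    """Return the k interior vertices most used by attacker shortest paths."""
    counts = Counter()
    for a in attackers:
        for t in area:
            path = shortest_path(graph, a, t)
            if path:
                counts.update(path[1:-1])  # interior vertices only
    return [v for v, _ in counts.most_common(k)]
```

    On a graph where two attackers must funnel through a single cut vertex to reach the area, that vertex is returned as the top bottleneck, which is exactly the position a defender should occupy.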

    Decentralized Ergodic Control: Distribution-Driven Sensing and Exploration for Multi-Agent Systems

    We present a decentralized ergodic control policy for time-varying area coverage problems for multiple agents with nonlinear dynamics. Ergodic control allows us to specify distributions as objectives for area coverage problems for nonlinear robotic systems in the form of a closed-form controller. We derive a variation of the ergodic control policy that can be used with consensus to enable a fully decentralized multi-agent control policy. Examples are presented to illustrate the applicability of our method for multi-agent terrain mapping as well as target localization. An analysis of ergodic policies as a Nash equilibrium is provided for game-theoretic applications. Comment: 8 pages, accepted for publication in IEEE Robotics and Automation Letters.
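
    The quantity ergodic control drives to zero compares a trajectory's time-averaged spatial statistics with a target distribution. A common formulation does this in a Fourier basis; the 1-D sketch below uses that style, but the truncation and weights here are illustrative assumptions rather than the paper's exact formulation.

```python
import math

def ergodic_metric(traj, target_coeffs, length=1.0):
    """Weighted squared gap between trajectory-averaged and target
    Fourier cosine coefficients on the interval [0, length]."""
    metric = 0.0
    for k, phi_k in enumerate(target_coeffs):
        # c_k: time average of the k-th cosine basis along the trajectory.
        c_k = sum(math.cos(k * math.pi * x / length) for x in traj) / len(traj)
        weight = 1.0 / (1.0 + k * k)  # emphasize low spatial frequencies
        metric += weight * (c_k - phi_k) ** 2
    return metric
```

    A trajectory that sweeps the interval uniformly scores far lower against a uniform target than one parked at a single point, which is the behavior the coverage objective rewards.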

    Some comparisons between the Variational rationality, Habitual domain, and DMCS approaches

    The "Habitual domain" (HD) approach and the "Variational rationality" (VR) approach belong to the same strongly interdisciplinary and very dispersed area of research: human stability and change dynamics (see Soubeyran, 2009, 2010, for an extended survey), including physiological, physical, psychological, and strategic aspects, across Psychology, Economics, Management Sciences, Decision theory, Game theory, Sociology, Philosophy, and Artificial Intelligence. These two approaches are complementary. They have strong similarities and strong differences. They focus attention on both similar and different stay-and-change problems, using different concepts and different mathematical tools. When they use similar concepts (which is often), those concepts frequently carry different meanings. We compare the two approaches with respect to the problems and topics they consider, the behavioral principles they use, the concepts they model, the mathematical tools they employ, and their results. Comment: 31 pages.

    Distributed Cohesive Control for Robot Swarms: Maintaining Good Connectivity in the Presence of Exterior Forces

    We present a number of powerful local mechanisms for maintaining a dynamic swarm of robots with limited capabilities and information, in the presence of external forces and permanent node failures. We propose a set of local continuous algorithms that together produce a generalization of a Euclidean Steiner tree. At any stage, the resulting overall shape achieves a good compromise between local thickness, global connectivity, and flexibility to further continuous motion of the terminals. The resulting swarm behavior scales well, is robust against node failures, and performs close to the best known approximation bound for a corresponding centralized static optimization problem.
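
    As a tiny centralized analogue of the structure the swarm approximates, the Steiner (Fermat) point of three terminals can be found by Weiszfeld iteration, which repeatedly re-weights the terminals by inverse distance. This is purely illustrative: the paper's algorithms are local and continuous, not this batch iteration.

```python
import math

def fermat_point(points, iterations=200):
    """Weiszfeld iteration for the point minimizing total distance to the
    given terminals (the Steiner point for three terminals)."""
    # Start at the centroid.
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iterations):
        wx = wy = w = 0.0
        for px, py in points:
            d = math.hypot(x - px, y - py) or 1e-12  # avoid division by zero
            wx += px / d
            wy += py / d
            w += 1.0 / d
        x, y = wx / w, wy / w
    return x, y
```

    For an equilateral triangle the Fermat point coincides with the centroid; for general terminal sets the iteration converges to the point where the three connecting edges meet at 120-degree angles.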

    The Automated Mapping of Plans for Plan Recognition

    To coordinate with other agents in its environment, an agent needs models of what the other agents are trying to do. When communication is impossible or expensive, this information must be acquired indirectly via plan recognition. Typical approaches to plan recognition start with a specification of the possible plans the other agents may be following and develop special techniques for discriminating among the possibilities. Perhaps more desirable would be a uniform procedure for mapping plans to general structures supporting inference based on uncertain and incomplete observations. In this paper, we describe a set of methods for converting plans represented in a flexible procedural language to observation models represented as probabilistic belief networks. Comment: Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI 1994).
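
    The kind of inference such an observation model supports can be reduced, for illustration, to a naive Bayes posterior over candidate plans given observations. The dictionary layout and smoothing constant are assumptions; the paper's belief networks capture far richer structure than the conditional independence assumed here.

```python
def plan_posterior(priors, likelihoods, observations):
    """P(plan | observations), assuming conditionally independent observations.

    priors: dict plan -> prior probability.
    likelihoods: dict plan -> dict observation -> P(obs | plan).
    """
    scores = {}
    for plan, prior in priors.items():
        p = prior
        for obs in observations:
            p *= likelihoods[plan].get(obs, 1e-6)  # smooth unseen observations
        scores[plan] = p
    total = sum(scores.values())
    return {plan: s / total for plan, s in scores.items()}
```

    After two observations that are much more likely under one plan than another, the posterior concentrates sharply on that plan, which is the discrimination step that a compiled belief network would perform over uncertain evidence.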

    A Framework for Learning Multi-Agent Dynamic Formation Strategy in Real-Time Applications

    Formation strategy is one of the most important parts of many multi-agent systems, with applications in many real-world problems. In this paper, a framework for learning this task in a limited domain (restricted environment) is proposed. In this framework, agents learn either directly, by observing an expert's behavior, or indirectly, by observing the behavior of other agents or objects. First, a group of algorithms for learning formation strategy based on a limited set of features is presented. Due to the distributed and complex nature of many multi-agent systems, it is impossible to include all features directly in the learning process; thus, a modular scheme is proposed to reduce the number of features. In this method, some important features have an indirect influence on learning instead of being directly involved as input features. The framework can dynamically assign a group of positions to a group of agents to improve system performance, and it can change the formation strategy when the context changes. Finally, the framework is able to automatically produce many complex and flexible formation-strategy algorithms without directly involving an expert to design and implement them. Comment: 27 pages, 9 figures.
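
    The dynamic position-assignment step described above can be sketched as a greedy matching of agents to formation slots. Greedy nearest-slot matching is a simplifying assumption made here for brevity; an optimal assignment (e.g. the Hungarian algorithm) could be substituted without changing the interface.

```python
def assign_positions(agents, slots):
    """Return {agent_index: slot_index} by greedy nearest-slot matching.

    agents, slots: lists of (x, y) coordinates. Each slot is claimed once.
    """
    remaining = list(range(len(slots)))
    assignment = {}
    for i, (ax, ay) in enumerate(agents):
        # Claim the nearest slot that is still unassigned.
        j = min(remaining, key=lambda s: (slots[s][0] - ax) ** 2 + (slots[s][1] - ay) ** 2)
        assignment[i] = j
        remaining.remove(j)
    return assignment
```

    Re-running the matcher whenever the context changes is one simple way to realize the dynamic reassignment of positions that the framework performs.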