1,720 research outputs found

    A decomposition technique for pursuit evasion games with many pursuers

    Here we present a decomposition technique for a class of differential games. The technique consists of a decomposition of the target set which, for geometrical reasons, produces a decomposition in the dimensionality of the problem. Using elements of the theory of Hamilton-Jacobi equations, we find a relation between the regularity of the solution and the possibility of decomposing the problem. We use this technique to solve a pursuit-evasion game with multiple agents.
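    A schematic of the idea, in notation of my own choosing rather than the paper's: writing the state as the evader position together with the pursuer positions, capture by any pursuer makes the target a union of pairwise capture sets, and each sub-game involves only one pursuer/evader pair.

    ```latex
    % Sketch only; the symbols x_e, x_{p_i}, T_i, V_i are assumptions, not the paper's notation.
    \[
      T \;=\; \bigcup_{i=1}^{m} T_i ,
      \qquad
      T_i \;=\; \{\, x = (x_e, x_{p_1}, \dots, x_{p_m}) \;:\; x_{p_i} = x_e \,\}.
    \]
    % Each sub-game with target T_i depends only on the pair (x_e, x_{p_i}), so its value
    % V_i lives in a much lower-dimensional space; where the solution is regular enough,
    % the full value is recovered as V(x) = \min_i V_i(x).
    ```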

    Decomposition of Differential Games

    This paper provides a decomposition technique for the purpose of simplifying the solution of certain zero-sum differential games. The games considered terminate when the state reaches a target, which can be expressed as the union of a collection of target subsets; the decomposition consists of replacing the original target by each of the target subsets. The value of the original game is then obtained as the lower envelope of the values of the collection of games resulting from the decomposition, which can be much easier to solve than the original game. Criteria are given for the validity of the decomposition. The paper includes examples illustrating the application of the technique to pursuit/evasion games, where the decomposition arises from considering the interaction of individual pursuer/evader pairs. Comment: 10 pages, 2 figures.
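    As a rough numerical illustration of the lower-envelope idea (the dynamics, speeds, and function names below are my own assumptions, not the paper's examples): for simple-motion pursuit in the plane with each pursuer faster than the evader, the single-pair value is a capture time, and the multi-pursuer value over a grid of states is taken as the elementwise minimum of the single-pair values, which is exact only when the paper's validity criteria hold.

    ```python
    import numpy as np

    # Hypothetical illustration of the lower-envelope decomposition for a
    # simple-motion pursuit game: each pursuer i is faster than the evader,
    # so the single-pair value is the capture time |p_i - x_e| / (v_i - v_e).
    def single_pair_value(evader_pos, pursuer_pos, v_pursuer, v_evader):
        """Capture-time value of one pursuer/evader sub-game (simple motion)."""
        dist = np.linalg.norm(evader_pos - pursuer_pos, axis=-1)
        return dist / (v_pursuer - v_evader)

    def decomposed_value(evader_pos, pursuers, v_evader):
        """Lower envelope of the single-pair values: V(x) = min_i V_i(x).

        Exact only when the decomposition criteria hold; in general the
        envelope is merely a bound on the true multi-pursuer value.
        """
        values = [single_pair_value(evader_pos, p, v, v_evader) for p, v in pursuers]
        return np.min(values, axis=0)

    # Evaluate on a grid of evader positions (positions and speeds are made up).
    xs, ys = np.meshgrid(np.linspace(-5, 5, 101), np.linspace(-5, 5, 101))
    grid = np.stack([xs, ys], axis=-1)
    pursuers = [(np.array([-3.0, 0.0]), 2.0), (np.array([3.0, 1.0]), 1.5)]
    V = decomposed_value(grid, pursuers, v_evader=1.0)
    print(V.shape, V.min())
    ```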

    Deep Reinforcement Learning for Swarm Systems

    Recently, deep reinforcement learning (RL) methods have been applied successfully to multi-agent scenarios. Typically, these methods rely on a concatenation of agent states to represent the information content required for decentralized decision making. However, concatenation scales poorly to swarm systems with a large number of homogeneous agents, as it does not exploit the fundamental properties inherent to these systems: (i) the agents in the swarm are interchangeable and (ii) the exact number of agents in the swarm is irrelevant. Therefore, we propose a new state representation for deep multi-agent RL based on mean embeddings of distributions. We treat the agents as samples of a distribution and use the empirical mean embedding as input for a decentralized policy. We define different feature spaces for the mean embedding using histograms, radial basis functions, and a neural network learned end-to-end. We evaluate the representation on two well-known problems from the swarm literature (rendezvous and pursuit evasion), in a globally and locally observable setup. For the local setup we furthermore introduce simple communication protocols. Of all approaches, the mean embedding representation using neural network features enables the richest information exchange between neighboring agents, facilitating the development of more complex collective strategies. Comment: 31 pages, 12 figures, version 3 (published in JMLR Volume 20).
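    A minimal sketch of the mean-embedding representation (the RBF centers, dimensions, and function names are my own assumptions, not the paper's implementation): each neighbor's observed state is mapped through a fixed feature function and the features are averaged, so the policy input has a fixed size regardless of how many neighbors are present and is invariant to their ordering.

    ```python
    import numpy as np

    # Hypothetical mean-embedding state representation for one swarm agent:
    # map each neighbor's relative state through RBF features and average.
    rng = np.random.default_rng(0)
    STATE_DIM = 4            # e.g. relative position and velocity (assumed)
    NUM_CENTERS = 32         # number of RBF centers (assumed)
    centers = rng.normal(size=(NUM_CENTERS, STATE_DIM))
    bandwidth = 1.0

    def rbf_features(state):
        """Radial-basis-function features of a single neighbor state."""
        sq_dist = np.sum((centers - state) ** 2, axis=1)
        return np.exp(-sq_dist / (2.0 * bandwidth ** 2))

    def mean_embedding(neighbor_states):
        """Empirical mean embedding: average feature vector over neighbors.

        The result has fixed size NUM_CENTERS, independent of how many
        neighbors are observed, and is permutation invariant.
        """
        if len(neighbor_states) == 0:
            return np.zeros(NUM_CENTERS)
        return np.mean([rbf_features(s) for s in neighbor_states], axis=0)

    # The embedding (plus the agent's own observation) would then feed a
    # decentralized policy network; here we only show the representation.
    neighbors = [rng.normal(size=STATE_DIM) for _ in range(7)]
    print(mean_embedding(neighbors).shape)   # (32,), regardless of swarm size
    ```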

    Two-Dimensional Pursuit-Evasion in a Compact Domain with Piecewise Analytic Boundary

    In a pursuit-evasion game, a team of pursuers attempts to capture an evader. The players alternate turns, move with equal speed, and have full information about the state of the game. We consider the most restrictive capture condition: a pursuer must become colocated with the evader to win the game. We prove two general results about pursuit-evasion games in topological spaces. First, we show that one pursuer has a winning strategy in any CAT(0) space under this restrictive capture criterion. This complements a result of Alexander, Bishop and Ghrist, who provide a winning strategy for a game with positive capture radius. Second, we consider the game played in a compact domain in Euclidean two-space with piecewise analytic boundary and arbitrary Euler characteristic. We show that three pursuers always have a winning strategy by extending recent work of Bhadauria, Klein, Isler and Suri from polygonal environments to our more general setting. Comment: 21 pages, 6 figures.
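    To make the game model concrete, here is a toy sketch of the turn structure only, using a greedy pursuer and a disk-shaped (hence convex, CAT(0)) domain rather than the strategies or environments constructed in the paper: players move alternately, each move has length at most one (equal speeds), and capture means exact colocation.

    ```python
    import numpy as np

    # Toy illustration of the discrete-turn game model (not the paper's strategy).
    def step_towards(src, dst, step=1.0):
        """Move from src towards dst by at most `step` (equal-speed moves)."""
        d = dst - src
        dist = np.linalg.norm(d)
        if dist <= step:
            return dst.copy()
        return src + (step / dist) * d

    def flee_in_disk(pursuer, evader, step, radius=10.0):
        """Evader runs directly away from the pursuer, clipped to a disk."""
        away = evader - pursuer
        n = np.linalg.norm(away)
        target = evader + (away / n if n > 0 else np.array([1.0, 0.0]))
        new = step_towards(evader, target, step)
        r = np.linalg.norm(new)
        return new if r <= radius else new * (radius / r)

    def play(pursuer, evader, evader_policy, max_turns=1000, step=1.0):
        """Alternate turns until colocation (capture) or the turn budget runs out."""
        for t in range(max_turns):
            pursuer = step_towards(pursuer, evader, step)   # greedy pursuer move
            if np.allclose(pursuer, evader):
                return t + 1                                # capture: colocated
            evader = evader_policy(pursuer, evader, step)   # evader responds
        return None                                          # no capture observed

    print(play(np.array([0.0, 0.0]), np.array([5.0, 0.0]), flee_in_disk))
    ```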