
    Semi-Informed Multi-Agent Patrol Strategies

    The adversarial multi-agent patrol problem is an active research topic with many real-world applications, such as physical robots guarding an area and software agents protecting a computer network. In it, agents patrol a graph looking for so-called critical vertices that are subject to attack by adversaries. The agents do not know in advance which vertices are subject to attack; when they encounter such a vertex, they attempt to protect it from being compromised (an adversary must occupy the vertex it targets for a certain amount of time for the attack to succeed). Although the terms adversary and attack are used, the problem domain extends to patrolling a graph in other, non-competitive contexts such as search and rescue. The problem statement adopted in this work is formulated such that agents obtain knowledge of local graph topology and critical vertices over the course of their travels via an API; there is no global knowledge of the graph and no communication between agents. The challenge is to balance exploration, necessary to discover critical vertices, with exploitation, necessary to protect critical vertices from attack. Four types of adversaries were used in the experiments: three from previous research (waiting, random, and statistical) and a fourth that is a hybrid of those three. Agent strategies for countering each of these adversaries are designed and evaluated, using benchmark graphs and parameter settings from related research. The proposed research culminates in the design and evaluation of agents to counter these various types of adversaries under a range of conditions. The results of this work are agent strategies in which each agent becomes solely responsible for protecting the critical vertices it discovers. The agents use emergent behavior to minimize successful attacks and maximize the discovery of new critical vertices. A set of seven edge-choosing primitives (ECPs) is defined; these are combined in different ways, using the chain-of-responsibility OOP design pattern, to yield a range of agent strategies. Every permutation of the ECPs was tested and measured in order to identify the strategies that perform well. One strategy performed particularly well across all adversaries, graph topologies, and other experimental variables. It combines three ECPs: a hard-deadline return to covered vertices to counter the random adversary, efficient checking of vertices to see whether they are being attacked by the waiting adversary, and random movement to impede the statistical adversary.
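
    The chain-of-responsibility composition described above lends itself to a brief illustration. The following is a minimal sketch only: the class names, the AgentState fields, and the two example primitives are hypothetical stand-ins, not the seven ECPs or the interfaces defined in this work.

```python
# Sketch of composing edge-choosing primitives (ECPs) via chain of responsibility.
# All class names, the AgentState fields, and the fallback rule are illustrative
# assumptions, not the paper's actual interfaces.
import random
from dataclasses import dataclass, field


@dataclass
class AgentState:
    time: int = 0
    deadlines: dict = field(default_factory=dict)  # vertex -> revisit deadline


class ECP:
    """Base handler: each primitive either picks a neighbor or defers onward."""

    def __init__(self, successor=None):
        self.successor = successor

    def choose(self, neighbors, state):
        pick = self.try_choose(neighbors, state)
        if pick is not None:
            return pick
        if self.successor is not None:
            return self.successor.choose(neighbors, state)
        return random.choice(neighbors)  # last-resort fallback

    def try_choose(self, neighbors, state):
        raise NotImplementedError


class HardDeadlineReturn(ECP):
    """Head back to a covered critical vertex whose revisit deadline has arrived."""

    def try_choose(self, neighbors, state):
        overdue = [v for v in neighbors
                   if state.deadlines.get(v, float("inf")) <= state.time]
        return overdue[0] if overdue else None


class RandomMove(ECP):
    """Move randomly to make the patrol harder for a statistical adversary to predict."""

    def try_choose(self, neighbors, state):
        return random.choice(neighbors)


# A strategy is an ordered chain of primitives; reordering or extending the chain
# yields a different strategy, which is how the tested permutations arise.
strategy = HardDeadlineReturn(successor=RandomMove())
next_vertex = strategy.choose(neighbors=["a", "b", "c"],
                              state=AgentState(time=5, deadlines={"b": 3}))
```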

    Adversarial patrolling with spatially uncertain alarm signals

    When securing complex infrastructures or large environments, constant surveillance of every area is not affordable. A common countermeasure is to use cheap but wide-ranged sensors that can detect suspicious events occurring over large areas, supporting patrollers in improving the effectiveness of their strategies. However, such sensors are commonly affected by uncertainty. In the present paper, we focus on spatially uncertain alarm signals: the alarm system is able to detect an attack but is uncertain about the exact position where the attack is taking place. This is common when the area to be secured is wide, as in border patrolling and fair-site surveillance. We propose, to the best of our knowledge, the first Patrolling Security Game in which a Defender is supported by a spatially uncertain alarm system that non-deterministically generates signals once a target is under attack. We show that finding the optimal strategy is FNP-hard even on tree graphs and APX-hard on arbitrary graphs. We provide two (exponential-time) exact algorithms and two (polynomial-time) approximation algorithms. Finally, we show that, without false positives and missed detections, the best patrolling strategy reduces to staying in one place, waiting for a signal, and responding to it as well as possible. This strategy is optimal even with non-negligible missed-detection rates, which, unfortunately, affect every commercial alarm system. We evaluate our methods in simulation, assessing both quantitative and qualitative aspects.
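
    To make the "stay in place, wait for a signal, respond at best" idea concrete, here is a heavily simplified sketch. It assumes the defender inspects a single candidate target per signal and that shortest-path travel times are precomputed; all function and variable names are illustrative rather than taken from the paper.

```python
# Simplified response rule for one spatially uncertain alarm signal.
# Assumption: the defender checks exactly one candidate target per signal.


def best_single_target_response(signal_probs, travel_time, penetration_time, value):
    """Pick the target that maximizes expected protected value for one signal.

    signal_probs:     dict target -> probability the signal came from that target
    travel_time:      dict target -> steps to reach the target from the waiting vertex
    penetration_time: dict target -> steps the attacker needs to complete an attack
    value:            dict target -> value protected if the defender arrives in time
    """
    best_target, best_score = None, float("-inf")
    for t, p in signal_probs.items():
        # Responding only helps if the defender arrives before the attack completes.
        saved = value[t] if travel_time[t] <= penetration_time[t] else 0.0
        score = p * saved
        if score > best_score:
            best_target, best_score = t, score
    return best_target, best_score


# Example: a signal that could have originated at target t1 or t2.
choice = best_single_target_response(
    signal_probs={"t1": 0.7, "t2": 0.3},
    travel_time={"t1": 4, "t2": 2},
    penetration_time={"t1": 5, "t2": 1},
    value={"t1": 10.0, "t2": 8.0},
)
```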

    Patrolling security games: Definition and algorithms for solving large instances with single patroller and single intruder

    Security games are gaining significant interest in artificial intelligence. They are characterized by two players (a defender and an attacker) and by a set of targets the defender tries to protect from the attacker's intrusions by committing to a strategy. To reach their goals, players use resources such as patrollers and intruders. Security games are Stackelberg games in which the appropriate solution concept is the leader–follower equilibrium. Current algorithms for solving these games are applicable when the underlying game is in normal form (i.e., each player has a single decision node). In this paper, we define and study security games whose underlying game is extensive-form and infinite-horizon, with a potentially infinite number of decision nodes. We introduce a novel scenario in which the attacker can undertake actions during the execution of the defender's strategy. We call this new game class patrolling security games (PSGs), since its most prominent application is patrolling environments against intruders. We show that PSGs cannot be reduced to the security games studied so far, and we highlight their generality in tackling adversarial patrolling on arbitrary graphs. We then design algorithms to solve large instances with a single patroller and a single intruder.
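
    One building block of reasoning about PSGs is the intruder's best response to a fixed patrolling strategy. The sketch below assumes a simple Markovian patroller (a transition matrix over vertices) and an intruder that observes the patroller's current vertex; this is an illustrative simplification of the leader-follower setting, not the paper's actual algorithms.

```python
# Hedged sketch: intruder best response against a fixed Markovian patrolling
# strategy. The Markov-strategy assumption and all names here are illustrative.


def attack_success_probability(P, start, target, penetration_time):
    """Probability that a patroller moving by transition matrix P from `start`
    does not visit `target` within `penetration_time` steps."""
    n = len(P)
    avoid = [1.0] * n  # avoid[v] = P(avoid target for 0 more steps | patroller at v)
    for _ in range(penetration_time):
        avoid = [
            sum(P[v][u] * avoid[u] for u in range(n) if u != target)
            for v in range(n)
        ]
    return avoid[start]


def intruder_best_response(P, start, penetration_time, value):
    """Target the intruder attacks, maximizing expected stolen value,
    given that it observes the patroller's current vertex `start`."""
    return max(
        (t for t in range(len(P)) if t != start),
        key=lambda t: value[t]
        * attack_success_probability(P, start, t, penetration_time[t]),
    )


# Tiny 3-vertex example with a uniform random patroller currently at vertex 0.
P = [[0.0, 0.5, 0.5], [0.5, 0.0, 0.5], [0.5, 0.5, 0.0]]
target = intruder_best_response(P, start=0, penetration_time=[2, 2, 2], value=[1.0, 1.0, 2.0])
```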