
    Are Multiagent Systems Resilient to Communication Failures?

    A challenge in multiagent control systems is to ensure that they are appropriately resilient to communication failures between the various agents. In many common game-theoretic formulations of these types of systems, it is implicitly assumed that all agents have access to as much information about other agents' actions as needed. This paper endeavors to augment these game-theoretic methods with policies that would allow agents to react on the fly to losses of this information. Unfortunately, we show that even if a single agent loses communication with one other weakly coupled agent, this can cause arbitrarily bad system states to emerge as various solution concepts of an associated game, regardless of how the agent accounts for the communication failure and regardless of how weakly coupled the agents are. Nonetheless, we show that the harm that communication failures can cause is limited by the structure of the problem: when agents' action spaces are richer, problems are more susceptible to these types of pathologies. Finally, we undertake an initial study into how a system designer might prevent these pathologies, and explore a few limited settings in which communication failures cannot cause harm.
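    The failure mode described above can be made concrete with a toy coordination game (this is an illustrative sketch, not the paper's construction): one agent loses communication and falls back to a fixed default action, and the other agent's rational best response then anchors the system in an outcome whose welfare can be made arbitrarily poor by shrinking `EPS`.

    ```python
    # Toy two-agent coordination game. Both agents share the welfare
    # as their payoff. EPS and the "default to B" fallback policy are
    # illustrative assumptions, not taken from the paper.

    EPS = 0.01  # welfare of the inferior coordination point

    # welfare[(a1, a2)]: shared payoff for each joint action
    welfare = {
        ("A", "A"): 1.0,   # optimal coordination
        ("B", "B"): EPS,   # poor coordination
        ("A", "B"): 0.0,   # miscoordination
        ("B", "A"): 0.0,
    }

    def best_response(opponent_action):
        # With the opponent's action known, match it if profitable.
        return max("AB", key=lambda a: welfare[(a, opponent_action)])

    # Agent 2 has lost communication and cannot observe agent 1;
    # suppose its fallback policy is the "safe" default action B.
    a2 = "B"
    # Agent 1 still observes agent 2 and best-responds.
    a1 = best_response(a2)

    achieved = welfare[(a1, a2)]
    optimal = max(welfare.values())
    print(a1, a2, achieved, optimal)  # B B 0.01 1.0
    ```

    The joint action (B, B) is a Nash equilibrium of the degraded game, yet its welfare ratio to the optimum is EPS, which can be made arbitrarily small — mirroring the "arbitrarily bad system states" in the abstract.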

    The Price of Anarchy is Fragile in Single-Selection Coverage Games

    This paper considers coverage games in which a group of agents is tasked with identifying the highest-value subset of resources; in this context, game-theoretic approaches are known to yield Nash equilibria within a factor of 2 of optimal. We consider the case in which some of the agents suffer a communication failure and cannot observe the actions of other agents; here, recent work has shown that if there are k>0 compromised agents, Nash equilibria are only guaranteed to be within a factor of k+1 of optimal. However, the present paper shows that this worst-case guarantee is fragile; in a sense which we make precise, we show that if a problem instance has a very poor worst-case guarantee, then it is necessarily very "close" to a problem instance with an optimal Nash equilibrium. Conversely, an instance that is far from one with an optimal Nash equilibrium necessarily has relatively good worst-case performance guarantees. To contextualize this fragility, we perform simulations using the log-linear learning algorithm and show that average performance on worst-case instances is considerably better than even our improved analytical guarantees. This suggests that the fragility of the price of anarchy can be exploited algorithmically to compensate for online communication failures.
    Comment: 6 pages, 4 figures
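    The setting above can be sketched in a few lines (a minimal illustration under assumed details: marginal-contribution utilities and a standard log-linear update; the paper's exact instances and parameters are not reproduced here). Each agent selects a single resource, welfare is the total value of distinctly covered resources, and at each step a random agent re-samples its action with probability proportional to exp(utility / temperature).

    ```python
    import math
    import random

    def welfare(values, actions):
        # Welfare = total value of resources covered by at least one agent.
        return sum(values[r] for r in set(actions))

    def marginal_utility(values, actions, i, a):
        # Agent i's marginal contribution: the resource's value if no
        # other agent already covers it, else 0.
        others = {actions[j] for j in range(len(actions)) if j != i}
        return values[a] if a not in others else 0.0

    def log_linear_learning(values, n_agents, steps=5000, tau=0.05, seed=0):
        # Log-linear learning: a random agent updates, choosing action a
        # with probability proportional to exp(utility(a) / tau).
        rng = random.Random(seed)
        actions = [rng.randrange(len(values)) for _ in range(n_agents)]
        for _ in range(steps):
            i = rng.randrange(n_agents)
            utils = [marginal_utility(values, actions, i, a)
                     for a in range(len(values))]
            weights = [math.exp(u / tau) for u in utils]
            # Sample an action from the Boltzmann distribution.
            r = rng.random() * sum(weights)
            acc = 0.0
            for a, w in enumerate(weights):
                acc += w
                if r <= acc:
                    actions[i] = a
                    break
        return actions

    values = [1.0, 0.9, 0.8, 0.1]      # hypothetical resource values
    acts = log_linear_learning(values, n_agents=3)
    # At low temperature the stationary distribution concentrates on
    # welfare maximizers, so the learned profile is typically near the
    # optimum of covering the three highest-value resources.
    print(welfare(values, acts))
    ```

    Because marginal-contribution utilities make this a potential game with welfare as the potential, log-linear learning spends most of its time near welfare-maximizing profiles at low temperature — consistent with the abstract's observation that average simulated performance beats the worst-case bounds.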