A risk-security tradeoff in graphical coordination games
A system relying on the collective behavior of decision-makers can be
vulnerable to a variety of adversarial attacks. How well can a system operator
protect performance in the face of these risks? We frame this question in the
context of graphical coordination games, where the agents in a network choose
between two conventions and derive benefits from coordinating with their neighbors, and
system performance is measured in terms of the agents' welfare. In this paper,
we assess an operator's ability to mitigate two types of adversarial attacks:
(1) broad attacks, where the adversary incentivizes all agents in the network,
and (2) focused attacks, where the adversary can force a selected subset of the
agents to commit to a prescribed convention. As a mitigation strategy, the
system operator can implement a class of distributed algorithms that govern the
agents' decision-making process. Our main contribution characterizes the
operator's fundamental tradeoff between security against worst-case broad
attacks and vulnerability to focused attacks. We show that this tradeoff
significantly improves when the operator selects a decision-making process at
random. Our work highlights the design challenges a system operator faces in
maintaining resilience of networked distributed systems.

Comment: 13 pages, double column, 4 figures. Submitted for journal publication.
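The setup described in the abstract can be sketched concretely. Below is a minimal illustration, not the paper's model: agents on a graph each pick one of two conventions, and welfare is the total payoff earned on edges whose endpoints coordinate. The specific payoff values (1 + alpha for one convention, 1 for the other, 0 for a mismatch) and the `welfare` function are illustrative assumptions, not taken from the paper.

```python
def welfare(edges, choice, alpha=0.1):
    """Sum edge payoffs: coordinated edges earn a payoff, mismatched edges earn 0.

    Assumed (illustrative) payoffs: both endpoints on convention "x" earn
    1 + alpha per edge, both on "y" earn 1, a mismatch earns 0.
    """
    total = 0.0
    for i, j in edges:
        if choice[i] == choice[j]:
            total += (1 + alpha) if choice[i] == "x" else 1.0
    return total

# A 4-agent ring network.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

# Everyone on convention "x": all four edges coordinate.
all_x = {i: "x" for i in range(4)}
print(round(welfare(edges, all_x), 2))  # 4.4

# A split network: edges (0, 1) and (2, 3) coordinate, the other two mismatch.
split = {0: "x", 1: "x", 2: "y", 3: "y"}
print(round(welfare(edges, split), 2))  # 2.1
```

An adversary's broad attack would perturb the payoffs seen by every agent, while a focused attack would pin the `choice` of a selected subset; the operator's algorithm then governs how the remaining agents update their choices.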