A risk-security tradeoff in graphical coordination games
A system relying on the collective behavior of decision-makers can be
vulnerable to a variety of adversarial attacks. How well can a system operator
protect performance in the face of these risks? We frame this question in the
context of graphical coordination games, where agents in a network choose
between two conventions and derive benefits from coordinating with their
neighbors, and system performance is measured in terms of the agents' welfare. In this paper,
we assess an operator's ability to mitigate two types of adversarial attacks:
(1) broad attacks, where the adversary incentivizes all agents in the network,
and (2) focused attacks, where the adversary can force a selected subset of the
agents to commit to a prescribed convention. As a mitigation strategy, the
system operator can implement a class of distributed algorithms that govern the
agents' decision-making process. Our main contribution characterizes the
operator's fundamental trade-off between security against worst-case broad
attacks and vulnerability from focused attacks. We show that this tradeoff
significantly improves when the operator selects a decision-making process at
random. Our work highlights the design challenges a system operator faces in
maintaining the resilience of networked distributed systems.
Comment: 13 pages, double column, 4 figures. Submitted for journal publication.
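The abstract refers to a class of distributed algorithms governing the agents' decision-making without naming one. As a concrete (hypothetical) instance, the sketch below simulates log-linear learning, a standard noisy-best-response dynamic for graphical coordination games. Each agent chooses between conventions 'x' and 'y', earning payoff per coordinating neighbor; the payoff bonus `alpha` for the superior convention, the rationality parameter `beta`, and the function name `simulate` are all illustrative choices, not details taken from the paper.

```python
import math
import random

def simulate(edges, n, alpha=0.1, beta=2.0, steps=2000, seed=0):
    """Log-linear learning on a graphical coordination game.

    Each agent picks convention 'x' or 'y'; matching a neighbor on 'x'
    pays 1 + alpha per match, matching on 'y' pays 1, mismatches pay 0.
    """
    rng = random.Random(seed)
    state = ['y'] * n                     # all agents start on the inferior convention
    nbrs = {i: [] for i in range(n)}
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)

    def payoff(i, choice):
        per_match = 1.0 + alpha if choice == 'x' else 1.0
        return per_match * sum(1 for j in nbrs[i] if state[j] == choice)

    for _ in range(steps):
        i = rng.randrange(n)              # one randomly chosen agent revises per step
        px, py = payoff(i, 'x'), payoff(i, 'y')
        # noisy best response: choose 'x' with softmax probability at rationality beta
        p_x = math.exp(beta * px) / (math.exp(beta * px) + math.exp(beta * py))
        state[i] = 'x' if rng.random() < p_x else 'y'
    return state

# a ring of 6 agents; in the long run this dynamic favors the efficient convention 'x'
ring = [(i, (i + 1) % 6) for i in range(6)]
print(simulate(ring, 6))
```

An adversary's broad attack would perturb the payoffs inside `payoff`, while a focused attack would pin `state[i]` for a chosen subset of agents; the operator's lever is the revision rule itself.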
Cycles in adversarial regularized learning
Regularized learning is a fundamental technique in online optimization,
machine learning and many other fields of computer science. A natural question
that arises in these settings is how regularized learning algorithms behave
when played against one another. We study a natural formulation of this problem
by coupling regularized learning dynamics in zero-sum games. We show that the
system's behavior is Poincaré recurrent, implying that almost every
trajectory revisits any (arbitrarily small) neighborhood of its starting point
infinitely often. This cycling behavior is robust to the agents' choice of
regularization mechanism (each agent could be using a different regularizer),
to positive-affine transformations of the agents' utilities, and it also
persists in the case of networked competition, i.e., for zero-sum polymatrix
games.
Comment: 22 pages, 4 figures.
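The cycling behavior described above can be illustrated with a small simulation. The sketch below runs multiplicative weights (entropy-regularized learning) for both players of Matching Pennies, a canonical zero-sum game; with a small step size the discrete update approximates the continuous-time dynamics the recurrence result concerns, and the strategies orbit the mixed equilibrium (1/2, 1/2) rather than converging to it. The function name, starting point, and step size `eta` are illustrative assumptions.

```python
import math

def mwu_matching_pennies(x1=0.7, y1=0.6, eta=0.05, steps=600):
    """Multiplicative-weights learning in Matching Pennies.

    Player 1 maximizes x^T A y and player 2 minimizes it, with
    A = [[1, -1], [-1, 1]]. Returns the trajectory of (x1, y1),
    the probabilities each player assigns to their first action.
    """
    traj = []
    for _ in range(steps):
        traj.append((x1, y1))
        x2, y2 = 1 - x1, 1 - y1
        # expected payoff of each pure action
        u1 = y1 - y2          # player 1, action 1: (A y)_1
        u2 = y2 - y1          # player 1, action 2: (A y)_2
        v1 = x2 - x1          # player 2, action 1: -(A^T x)_1 (minimizer)
        v2 = x1 - x2          # player 2, action 2: -(A^T x)_2
        # exponentiated-gradient update, then renormalize to a distribution
        w1, w2 = x1 * math.exp(eta * u1), x2 * math.exp(eta * u2)
        z1, z2 = y1 * math.exp(eta * v1), y2 * math.exp(eta * v2)
        x1, y1 = w1 / (w1 + w2), z1 / (z1 + z2)
    return traj

traj = mwu_matching_pennies()
xs = [p[0] for p in traj]
# x1 swings both well above and well below 1/2: a cycle, not convergence
print(min(xs), max(xs))
```

Each agent here uses the entropic regularizer; the paper's point is that the qualitative cycling persists even if the two players use different regularizers, under positive-affine payoff transformations, and in networked (polymatrix) zero-sum settings.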