Equilibrium Computation and Robust Optimization in Zero Sum Games with Submodular Structure
We define a class of zero-sum games with combinatorial structure, where the
best response problem of one player is to maximize a submodular function. For
example, this class includes security games played on networks, as well as the
problem of robustly optimizing a submodular function over the worst case from a
set of scenarios. The challenge in computing equilibria is that both players'
strategy spaces can be exponentially large. Accordingly, previous algorithms
have worst-case exponential runtime and indeed fail to scale up on practical
instances. We provide a pseudopolynomial-time algorithm which obtains a
mixed strategy with a guaranteed approximation ratio for the maximizing player.
Our algorithm only requires access to a weakened version of a best response
oracle for the minimizing player which runs in polynomial time. Experimental
results for network security games and a robust budget allocation problem
confirm that our algorithm delivers near-optimal solutions and scales to much
larger instances than was previously possible.Comment: 20 pages, 8 figures. A shorter version of this paper appears at AAAI
201
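The maximizing player's best response in this class is a submodular maximization problem, for which plain greedy selection already gives the classic (1 - 1/e) guarantee under a cardinality constraint. A minimal sketch on an invented coverage instance of the kind that arises in network security games (this illustrates the best-response subroutine only, not the paper's equilibrium algorithm):

```python
# Greedy best response for a player maximizing a submodular objective.
# Toy coverage instance (the node sets are invented); greedy achieves a
# (1 - 1/e) approximation for monotone submodular functions under a
# cardinality constraint.

def greedy_max(actions, coverage, k):
    """Pick up to k actions greedily by marginal coverage gain."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max((a for a in actions if a not in chosen),
                   key=lambda a: len(coverage[a] - covered),
                   default=None)
        if best is None or not (coverage[best] - covered):
            break                      # no remaining marginal gain
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

coverage = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5}, "d": {1}}
chosen, covered = greedy_max(list(coverage), coverage, 2)
```

Greedy picks "a" (gain 3), then "c" (gain 2 versus 1 for "b"), illustrating how marginal gains, not standalone values, drive the selection.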
Computing Stable Coalitions: Approximation Algorithms for Reward Sharing
Consider a setting where selfish agents are to be assigned to coalitions or
projects from a fixed set P. Each project k is characterized by a valuation
function; v_k(S) is the value generated by a set S of agents working on project
k. We study the following classic problem in this setting: "how should the
agents divide the value that they collectively create?". One traditional
approach in cooperative game theory is to study core stability with the
implicit assumption that there are infinite copies of one project, and agents
can partition themselves into any number of coalitions. In contrast, we
consider a model with a finite number of non-identical projects; this makes
computing both high-welfare solutions and core payments highly non-trivial.
The main contribution of this paper is a black-box mechanism that reduces the
problem of computing a near-optimal core stable solution to the purely
algorithmic problem of welfare maximization; we apply this to compute an
approximately core stable solution that extracts one-fourth of the optimal
social welfare for the class of subadditive valuations. We also show much
stronger results for several popular sub-classes: anonymous, fractionally
subadditive, and submodular valuations, as well as provide new approximation
algorithms for welfare maximization with anonymous functions. Finally, we
establish a connection between our setting and the well-studied simultaneous
auctions with item bidding; we adapt our results to compute approximate pure
Nash equilibria for these auctions.
Comment: Under Review
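On a toy instance, the tension between welfare maximization and core stability is easy to see by brute force. The agents, project valuations, and payment vector below are invented for illustration, and coalitions are allowed to block via any project; the paper's black-box mechanism avoids this exponential enumeration:

```python
from itertools import combinations, product

# Brute-force welfare maximization and core-stability check with a
# finite set of non-identical projects (toy instance, not the paper's
# mechanism).

agents = ["x", "y", "z"]
projects = {
    "p": lambda S: 2 * len(S),        # anonymous: value 2 per agent
    "q": lambda S: 3 if S else 0,     # a single agent already extracts 3
}

def welfare(assign):                  # assign: agent -> project
    return sum(v(frozenset(a for a in agents if assign[a] == k))
               for k, v in projects.items())

best = max((dict(zip(agents, choice))
            for choice in product(projects, repeat=len(agents))),
           key=welfare)

def blocking_coalition(pay):
    """A coalition blocks if some project would pay it more than its payments."""
    for r in range(1, len(agents) + 1):
        for C in combinations(agents, r):
            if any(v(frozenset(C)) > sum(pay[a] for a in C) + 1e-9
                   for v in projects.values()):
                return C
    return None
```

Here the optimal welfare is 7 (two agents on p, one on q), yet an equal split of it is blocked by any single agent pointing to project q, so exact core payments fail and only approximate core stability is achievable on this instance.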
Equilibrium in Labor Markets with Few Firms
We study competition between firms in labor markets, following a
combinatorial model suggested by Kelso and Crawford [1982]. In this model, each
firm is trying to recruit workers by offering a higher salary than its
competitors, and its production function defines the utility generated from any
actual set of recruited workers. We define two natural classes of production
functions for firms, where the first one is based on additive capacities
(weights), and the second on the influence of workers in a social network. We
then analyze the existence of pure subgame perfect equilibrium (PSPE) in the
labor market and its properties. While neither class satisfies the gross
substitutes condition, we show that in both classes the existence of a PSPE is
guaranteed under certain restrictions, and in particular when there are only
two competing firms. As a corollary, there exists a Walrasian equilibrium in a
corresponding combinatorial auction, where bidders' valuation functions belong
to these classes.
While a PSPE may not exist when there are more than two firms, we perform an
empirical study of equilibrium outcomes for the case of weight-based games with
three firms, which extends our analytical results. We then show that stability
can in some cases be extended to coalitional stability, and study the
distribution of profit between firms and their workers in weight-based games.
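For the weight-based (additive) class, the flavor of salary competition can be sketched with naive ascending offers in the spirit of Kelso and Crawford's adjustment process. The two-firm instance and the unit bid increment are invented, and this is not the paper's PSPE construction:

```python
# Naive ascending-salary dynamics for a two-firm labor market with
# additive (weight-based) production functions. Toy instance only.

weights = {                       # weights[firm][worker] = marginal product
    "F1": {"w1": 5, "w2": 3},
    "F2": {"w1": 4, "w2": 6},
}
offer = {"w1": (None, 0), "w2": (None, 0)}   # worker -> (top bidder, salary)

changed = True
while changed:
    changed = False
    for firm, prod in weights.items():
        for worker, mp in prod.items():
            holder, salary = offer[worker]
            # a firm outbids by one unit while the worker stays profitable
            if holder != firm and salary + 1 <= mp:
                offer[worker] = (firm, salary + 1)
                changed = True

employer = {w: f for w, (f, s) in offer.items()}
```

Each worker ends up at the firm where their weight is larger, with the final salary one increment above the losing firm's marginal product, mirroring how competition between few firms pins down wages.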
Defending Elections Against Malicious Spread of Misinformation
The integrity of democratic elections depends on voters' access to accurate
information. However, modern media environments, which are dominated by social
media, provide malicious actors with unprecedented ability to manipulate
elections via misinformation, such as fake news. We study a zero-sum game
between an attacker, who attempts to subvert an election by propagating a fake
news story or other misinformation over a set of advertising channels, and a
defender who attempts to limit the attacker's impact. Computing an equilibrium
in this game is challenging as even the pure strategy sets of players are
exponential. Nevertheless, we give provable polynomial-time approximation
algorithms for computing the defender's minimax optimal strategy across a range
of settings, encompassing different population structures as well as models of
the information available to each player. Experimental results confirm that our
algorithms provide near-optimal defender strategies and showcase variations in
the difficulty of defending elections depending on the resources and knowledge
available to the defender.
Comment: Full version of paper accepted to AAAI 201
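Once both strategy sets are tabulated explicitly, the defender's minimax strategy of any finite zero-sum game can be approximated by standard fictitious play; the 2x2 matrix below is made up, and the paper's whole difficulty is precisely that the real strategy sets are exponential and cannot be enumerated this way:

```python
# Fictitious play on a tiny zero-sum game: A[i][j] is the attacker's
# payoff when the defender (row, minimizing) plays i and the attacker
# (column, maximizing) plays j. Made-up matching-pennies-style matrix.

A = [[0.0, 1.0],
     [1.0, 0.0]]

def fictitious_play(A, rounds=20000):
    m, n = len(A), len(A[0])
    counts = [0] * m                  # defender's empirical play
    ud = [0.0] * m                    # defender's cumulative payoffs vs. history
    ua = [0.0] * n                    # attacker's cumulative payoffs vs. history
    i = j = 0
    for _ in range(rounds):
        counts[i] += 1
        for jj in range(n):           # attacker gains A[i][jj]
            ua[jj] += A[i][jj]
        for ii in range(m):           # defender loses A[ii][j]
            ud[ii] -= A[ii][j]
        i = max(range(m), key=lambda t: ud[t])   # mutual best responses
        j = max(range(n), key=lambda t: ua[t])
    return [c / rounds for c in counts]

x = fictitious_play(A)   # empirical frequencies approach the minimax mix
```

For this symmetric matrix the defender's minimax strategy mixes both rows equally, and the empirical frequencies converge to that mix (Robinson's theorem guarantees convergence in zero-sum games).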
The Evolutionary Logic of Feeling Small
In a generalized symmetric aggregative game, payoffs depend only on a player's own strategy and an aggregate of all strategies. Players behaving as if they were negligible would optimize taking the aggregate as given. We provide evolutionary and dynamic foundations for such behavior when the game satisfies supermodularity conditions. The results obtained are also useful for characterizing evolutionarily stable strategies in a finite population.
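The aggregate-taking behavior in question can be simulated directly: each player re-optimizes against the current aggregate as if her own contribution did not move it. The quadratic payoff and the strategy grid below are invented for illustration, not taken from the paper:

```python
# Aggregate-taking dynamics in a symmetric aggregative game (toy model).
# Each round, every player maximizes x * (10 - 0.1 * agg) - x**2 with the
# aggregate agg held fixed, i.e. "feeling small".

strategies = [i * 0.5 for i in range(21)]    # strategy grid on [0, 10]
n = 4
profile = [0.0] * n

for _ in range(100):
    agg = sum(profile)
    best = max(strategies, key=lambda x: x * (10 - 0.1 * agg) - x ** 2)
    new = [best] * n                         # symmetric simultaneous update
    if new == profile:
        break                                # aggregate-taking fixed point
    profile = new
```

With this concave payoff the dynamics settle in a few rounds at a symmetric profile where each player's choice is optimal against the aggregate it itself generates.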
Informational Substitutes
We propose definitions of substitutes and complements for pieces of
information ("signals") in the context of a decision or optimization problem,
with game-theoretic and algorithmic applications. In a game-theoretic context,
substitutes capture diminishing marginal value of information to a rational
decision maker. We use the definitions to address the question of how and when
information is aggregated in prediction markets. Substitutes characterize
"best-possible" equilibria with immediate information aggregation, while
complements characterize "worst-possible", delayed aggregation. Game-theoretic
applications also include settings such as crowdsourcing contests and Q&A
forums. In an algorithmic context, where substitutes capture diminishing
marginal improvement of information to an optimization problem, substitutes
imply efficient approximation algorithms for a very general class of (adaptive)
information acquisition problems.
In tandem with these broad applications, we examine the structure and design
of informational substitutes and complements. They have equivalent, intuitive
definitions from disparate perspectives: submodularity, geometry, and
information theory. We also consider the design of scoring rules or
optimization problems so as to encourage substitutability or complementarity,
with positive and negative results. Taken as a whole, the results give some
evidence that, in parallel with substitutable items, informational substitutes
play a natural conceptual and formal role in game theory and algorithms.
Comment: Full version of FOCS 2016 paper. Single-column, 61 pages (48 main text, 13 references and appendix)
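Diminishing marginal value of information is exactly the submodularity that makes greedy signal acquisition work. A toy sketch, assuming a Gaussian-style model in which observing signals with precisions p_i leaves posterior variance 1/(1 + sum of p_i); the model and the numbers are illustrative assumptions:

```python
# Greedy signal selection under a submodular value of information.
# Value of a signal set S is the variance reduction 1 - 1/(1 + sum of
# precisions): a concave function of a modular sum, hence monotone
# submodular, so the signals behave as informational substitutes.

precisions = {"s1": 4.0, "s2": 1.0, "s3": 0.5, "s4": 4.0}

def value(S):
    return 1 - 1 / (1 + sum(precisions[s] for s in S))

def greedy(k):
    S = []
    for _ in range(k):
        gains = {s: value(S + [s]) - value(S)
                 for s in precisions if s not in S}
        S.append(max(gains, key=gains.get))   # largest marginal gain
    return S

picked = greedy(2)
```

The substitutes effect is visible in the numbers: s4 is worth 0.8 on its own but adds less than 0.09 once s1 is already held, which is what makes the greedy approach near-optimal.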
Utility Design for Distributed Resource Allocation -- Part I: Characterizing and Optimizing the Exact Price of Anarchy
Game theory has emerged as a fruitful paradigm for the design of networked
multiagent systems. A fundamental component of this approach is the design of
agents' utility functions so that their self-interested maximization results in
a desirable collective behavior. In this work we focus on a well-studied class
of distributed resource allocation problems where each agent is requested to
select a subset of resources with the goal of optimizing a given system-level
objective. Our core contribution is the development of a novel framework to
tightly characterize the worst case performance of any resulting Nash
equilibrium (price of anarchy) as a function of the chosen agents' utility
functions. Leveraging this result, we identify how to design such utilities so
as to optimize the price of anarchy through a tractable linear program. This
provides us with a priori performance certificates applicable to any existing
learning algorithm capable of driving the system to an equilibrium. Part II of
this work specializes these results to submodular and supermodular objectives,
discusses the complexity of computing Nash equilibria, and provides multiple
illustrations of the theoretical findings.
Comment: 15 pages, 5 figures
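For a small enough instance, the price of anarchy induced by a particular utility design can be computed by enumeration. The two-resource, two-agent instance and the "fair share" utility rule below are invented; the paper instead characterizes the exact PoA through a tractable linear program:

```python
from itertools import product

# Brute-force price of anarchy for a tiny resource-selection game under
# a fair-share utility design: agents on a resource split its value.
# Toy instance, not the paper's LP-based characterization.

resources = ["r1", "r2"]
values = {"r1": 2.0, "r2": 1.0}       # a resource's value counts once if used

agents = range(2)

def W(profile):                        # system objective: value covered
    return sum(values[r] for r in set(profile))

def utility(i, profile):               # equal split among co-located agents
    r = profile[i]
    return values[r] / profile.count(r)

def is_nash(profile):
    return all(utility(i, profile) >=
               utility(i, profile[:i] + (r,) + profile[i+1:]) - 1e-9
               for i in agents for r in resources)

profiles = list(product(resources, repeat=len(agents)))
opt = max(W(p) for p in profiles)
nash = [p for p in profiles if is_nash(p)]
poa = min(W(p) for p in nash) / opt    # worst equilibrium vs. optimum
```

Here fair-share utilities admit an equilibrium in which both agents pile onto r1, giving a PoA of 2/3 on this instance; choosing a different utility rule changes the equilibrium set, which is exactly the design lever the paper optimizes.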