Mixture Selection, Mechanism Design, and Signaling
We pose and study a fundamental algorithmic problem which we term mixture
selection, arising as a building block in a number of game-theoretic
applications: given a function g from the n-dimensional hypercube to the
bounded interval [-1,1], and an n x m matrix A with bounded entries,
maximize g(Ax) over x in the m-dimensional simplex. This problem arises
naturally when one seeks to design a lottery over items for sale in an auction,
or to craft the posterior beliefs for agents in a Bayesian game through the
provision of information (a.k.a. signaling).
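The mixture selection objective can be illustrated with a small numeric sketch. The function g, the matrix A, and the coarse grid search below are hypothetical stand-ins chosen for illustration; they are not the paper's algorithm:

```python
import numpy as np

# Hypothetical instance of mixture selection: g maps R^n to a bounded
# interval, A is an n x m matrix, and we maximize g(Ax) over the simplex.

def g(y):
    # Example objective, 1-Lipschitz w.r.t. the L-infinity norm:
    # the mean of the coordinates.
    return float(np.mean(y))

A = np.array([[0.2, 0.9],
              [0.7, 0.1],
              [0.5, 0.5]])  # n = 3 rows, m = 2 columns, entries in [0, 1]

def objective(x):
    return g(A @ x)

# Coarse grid search over the 2-dimensional simplex {(t, 1 - t)}.
best = max(objective(np.array([t, 1.0 - t]))
           for t in np.linspace(0.0, 1.0, 101))
# Since g is linear here, the optimum sits at a vertex of the simplex:
# best == 0.5, putting all mass on the second column of A.
```

For this linear g a vertex is optimal, which is exactly why the interesting regime in the paper is nonlinear g, where smoothness conditions are needed for tractability.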
We present an approximation algorithm for this problem when g
simultaneously satisfies two smoothness properties: Lipschitz continuity with
respect to the L-infinity norm, and noise stability. The latter notion, which
we define and tailor to our setting, controls the degree to which
low-probability errors in the inputs of g can impact its output. When g is
both O(1)-Lipschitz continuous and O(1)-stable, we obtain an (additive)
PTAS for mixture selection. We also show that neither assumption suffices by
itself for an additive PTAS, and that both assumptions together do not suffice
for an additive FPTAS.
We apply our algorithm to different game-theoretic applications from
mechanism design and optimal signaling. We make progress on a number of open
problems suggested in prior work by easily reducing them to mixture selection:
we resolve an important special case of the small-menu lottery design problem
posed by Dughmi, Han, and Nisan; we resolve the problem of revenue-maximizing
signaling in Bayesian second-price auctions posed by Emek et al. and Miltersen
and Sheffet; we design a quasipolynomial-time approximation scheme for the
optimal signaling problem in normal form games suggested by Dughmi; and we
design an approximation algorithm for the optimal signaling problem in the
voting model of Alonso and Câmara.
On Similarities between Inference in Game Theory and Machine Learning
In this paper, we elucidate the equivalence between inference in game theory and machine learning. Our aim in doing so is to establish a shared vocabulary between the two domains so as to facilitate developments at the intersection of both fields; as proof of the usefulness of this approach, we use recent developments in each field to make useful improvements to the other. More specifically, we consider the analogies between smooth best responses in fictitious play and Bayesian inference methods. Initially, we use these insights to develop and demonstrate an improved algorithm for learning in games based on probabilistic moderation. That is, by integrating over the distribution of opponent strategies (a Bayesian approach within machine learning) rather than taking a simple empirical average (the approach used in standard fictitious play), we derive a novel moderated fictitious play algorithm and show that it is more likely than standard fictitious play to converge to a payoff-dominant but risk-dominated Nash equilibrium in a simple coordination game. Furthermore, we consider the converse case, and show how insights from game theory can be used to derive two improved mean field variational learning algorithms. First, we show that the standard update rule of mean field variational learning is analogous to a Cournot adjustment within game theory. By analogy with fictitious play, we then suggest an improved update rule, and show that this results in fictitious variational play, an improved mean field variational learning algorithm that exhibits better convergence in highly or strongly connected graphical models. Second, we use a recent advance in fictitious play, namely dynamic fictitious play, to derive a derivative action variational learning algorithm that exhibits superior convergence properties on a canonical machine learning problem (clustering a mixture distribution).
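For context, standard fictitious play (the baseline that the moderated variant above improves on) can be sketched on a 2x2 coordination game. The payoff matrix, prior counts, and horizon below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Minimal sketch of standard (best-response) fictitious play in a symmetric
# 2x2 coordination game. Action 0 vs 0 pays 2 (payoff-dominant), action 1
# vs 1 pays 1; miscoordination pays 0. All numbers are illustrative.
payoff = np.array([[2.0, 0.0],
                   [0.0, 1.0]])

counts = np.array([1.0, 1.0])  # fictitious prior counts of opponent actions
history = []
for _ in range(200):
    belief = counts / counts.sum()   # empirical average of opponent play
    values = payoff @ belief         # expected payoff of each pure action
    action = int(np.argmax(values))  # best response to the empirical belief
    history.append(action)
    counts[action] += 1.0            # symmetric self-play: opponent mirrors us

# With these payoffs, self-play locks onto the payoff-dominant action 0.
```

The abstract's point is that this hard empirical-average best response can be replaced by an integral over a distribution of opponent strategies, which changes which equilibrium the dynamics favor.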
On the Hardness of Signaling
There has been a recent surge of interest in the role of information in
strategic interactions. Much of this work seeks to understand how the realized
equilibrium of a game is influenced by uncertainty in the environment and the
information available to players in the game. Lurking beneath this literature
is a fundamental, yet largely unexplored, algorithmic question: how should a
"market maker" who is privy to additional information, and equipped with a
specified objective, inform the players in the game? This is an informational
analogue of the mechanism design question, and views the information structure
of a game as a mathematical object to be designed, rather than an exogenous
variable.
We initiate a complexity-theoretic examination of the design of optimal
information structures in general Bayesian games, a task often referred to as
signaling. We focus on one of the simplest instantiations of the signaling
question: Bayesian zero-sum games, and a principal who must choose an
information structure maximizing the equilibrium payoff of one of the players.
In this setting, we show that optimal signaling is computationally intractable,
and in some cases hard to approximate, assuming that it is hard to recover a
planted clique from an Erdős-Rényi random graph. This is despite the fact that
equilibria in these games are computable in polynomial time, and it therefore
suggests that the hardness of optimal signaling is a distinct phenomenon from
the hardness of equilibrium computation. Necessitated by the non-local nature
of information structures, en route to our results we prove an "amplification
lemma" for the planted clique problem which may be of independent interest.
On the Structure of Equilibrium Strategies in Dynamic Gaussian Signaling Games
This paper analyzes a finite horizon dynamic signaling game motivated by the
well-known strategic information transmission problems in economics. The
mathematical model involves information transmission between two agents, a
sender who observes two Gaussian processes, state and bias, and a receiver who
takes an action based on the received message from the sender. The players
incur quadratic instantaneous costs as functions of the state, bias and action
variables. Our particular focus is on the Stackelberg equilibrium, which
corresponds to information disclosure and Bayesian persuasion problems in
economics. Prior work solved the static game, and showed that the Stackelberg
equilibrium is achieved by pure strategies that are linear functions of the
state and the bias variables. The main focus of this work is on the dynamic
(multi-stage) setting, where we show that the existence of a pure strategy
Stackelberg equilibrium, within the set of linear strategies, depends on the
problem parameters. Surprisingly, for most problem parameters, a pure linear
strategy does not achieve the Stackelberg equilibrium, which implies the
existence of a trade-off between exploiting and revealing information, a
phenomenon also encountered in several other asymmetric information games.
Comment: will appear in IEEE Multi-Conference on Systems and Control 201
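A minimal single-stage sketch of the quadratic-cost Gaussian setup can make the sender/receiver tension concrete. The linear strategy coefficient, the independence of state and bias, and the unit variances below are hypothetical illustrations, not the paper's equilibrium:

```python
import numpy as np

# Single-stage illustration: the sender observes Gaussian state x and bias b
# and sends a linear message m = x + c*b; the receiver plays the MMSE
# estimate u = E[x | m]. Both sides incur quadratic costs.
rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)   # state, N(0, 1)
b = rng.normal(size=n)   # bias,  N(0, 1), independent of the state
c = 0.5                  # weight the sender places on the bias (hypothetical)

m = x + c * b
# For zero-mean jointly Gaussian variables, E[x | m] = Cov(x, m)/Var(m) * m.
u = (1.0 / (1.0 + c**2)) * m

receiver_cost = np.mean((u - x) ** 2)       # receiver wants u close to x
sender_cost = np.mean((u - (x + b)) ** 2)   # sender wants u close to x + b
```

Analytically, receiver_cost here is c^2/(1 + c^2) = 0.2 and sender_cost is 0.4 (up to Monte Carlo noise): revealing the bias more (larger c) helps the receiver's estimate of x less while moving the action toward the sender's target, which is the exploit-vs-reveal trade-off the abstract refers to.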