Algorithmic Bayesian Persuasion
Persuasion, defined as the act of exploiting an informational advantage in
order to effect the decisions of others, is ubiquitous. Indeed, persuasive
communication has been estimated to account for almost a third of all economic
activity in the US. This paper examines persuasion through a computational
lens, focusing on what is perhaps the most basic and fundamental model in this
space: the celebrated Bayesian persuasion model of Kamenica and Gentzkow. Here
there are two players, a sender and a receiver. The receiver must take one of a
number of actions with a-priori unknown payoff, and the sender has access to
additional information regarding the payoffs. The sender can commit to
revealing a noisy signal regarding the realization of the payoffs of various
actions, and would like to do so as to maximize her own payoff assuming a
perfectly rational receiver.
We examine the sender's optimization task in three of the most natural input
models for this problem, and essentially pin down its computational complexity
in each. When the payoff distributions of the different actions are i.i.d. and
given explicitly, we exhibit a polynomial-time (exact) algorithm, and a
"simple" -approximation algorithm. Our optimal scheme for the i.i.d.
setting involves an analogy to auction theory, and makes use of Border's
characterization of the space of reduced-forms for single-item auctions. When
action payoffs are independent but non-identical with marginal distributions
given explicitly, we show that it is #P-hard to compute the optimal expected
sender utility. Finally, we consider a general (possibly correlated) joint
distribution of action payoffs presented by a black box sampling oracle, and
exhibit a fully polynomial-time approximation scheme (FPTAS) with a bi-criteria
guarantee. We show that this result is the best possible in the black-box model
for information-theoretic reasons.
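The sender's commitment problem can be made concrete with the classic binary-state, binary-action instance often used to illustrate the Kamenica-Gentzkow model. The sketch below assumes an illustrative prior of 0.3 on the sender-preferred state and a receiver who acts only when the posterior reaches 0.5; the numbers are not taken from the paper.

```python
# Minimal sketch of Bayesian persuasion with a binary state ("guilty" or not)
# and a binary action (convict or acquit). All numbers are illustrative
# assumptions. The sender commits to sending a "convict" signal always when
# guilty, and with probability q when innocent; the receiver obeys only if
# the induced posterior clears the threshold.

PRIOR_GUILTY = 0.3
THRESHOLD = 0.5  # receiver convicts iff posterior P(guilty) >= 0.5

def conviction_prob(q_innocent, q_guilty=1.0, prior=PRIOR_GUILTY):
    """Total probability that the 'convict' signal is sent."""
    return prior * q_guilty + (1 - prior) * q_innocent

def posterior_guilty(q_innocent, q_guilty=1.0, prior=PRIOR_GUILTY):
    """Posterior P(guilty | 'convict' signal) by Bayes' rule."""
    p_signal = conviction_prob(q_innocent, q_guilty, prior)
    return prior * q_guilty / p_signal if p_signal > 0 else 0.0

# Brute-force search over the one free commitment parameter q, keeping the
# receiver willing to obey the 'convict' recommendation.
best_q, best_value = 0.0, PRIOR_GUILTY  # full revelation yields the prior
for step in range(10001):
    q = step / 10000
    if posterior_guilty(q) >= THRESHOLD:
        value = conviction_prob(q)
        if value > best_value:
            best_q, best_value = q, value

print(best_q, best_value)
```

With these illustrative numbers, committing to the noisy signal lifts the sender's payoff from 0.3 under full revelation to roughly 0.6, the concavification value for this instance.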
Bayesian Persuasion for Algorithmic Recourse
When subjected to automated decision-making, decision subjects may
strategically modify their observable features in ways they believe will
maximize their chances of receiving a favorable decision. In many practical
situations, the underlying assessment rule is deliberately kept secret to avoid
gaming and maintain competitive advantage. The resulting opacity forces the
decision subjects to rely on incomplete information when making strategic
feature modifications. We capture such settings as a game of Bayesian
persuasion, in which the decision maker offers a form of recourse to the
decision subject by providing them with an action recommendation (or signal) to
incentivize them to modify their features in desirable ways. We show that when
using persuasion, the decision maker and decision subject are never worse off
in expectation, while the decision maker can be significantly better off. While
the decision maker's problem of finding the optimal Bayesian
incentive-compatible (BIC) signaling policy takes the form of optimization over
infinitely-many variables, we show that this optimization can be cast as a
linear program over finitely-many regions of the space of possible assessment
rules. While this reformulation simplifies the problem dramatically, solving
the linear program requires reasoning about exponentially-many variables, even
in relatively simple cases. Motivated by this observation, we provide a
polynomial-time approximation scheme that recovers a near-optimal signaling
policy. Finally, our numerical simulations on semi-synthetic data empirically
demonstrate the benefits of using persuasion in the algorithmic recourse
setting.
Comment: In the thirty-sixth Conference on Neural Information Processing
Systems (NeurIPS 2022).
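The obedience constraints that define a Bayesian incentive-compatible recommendation policy can be written down directly for finitely many states and actions. The checker below is a hedged sketch on an illustrative toy instance, not the paper's model or algorithm.

```python
# Hedged sketch: verifying the Bayesian incentive-compatibility (obedience)
# constraints of a direct-recommendation signaling policy. All numbers are
# illustrative assumptions, not taken from the paper.

def is_bic(prior, policy, receiver_utility):
    """policy[s][a] = P(recommend action a | state s);
    receiver_utility[s][a] = receiver's payoff from a in state s.
    Obedience: for every recommended a and every deviation a_dev,
    sum_s prior[s] * policy[s][a] * (u(s, a) - u(s, a_dev)) >= 0."""
    n_states = len(prior)
    n_actions = len(policy[0])
    for a in range(n_actions):
        for a_dev in range(n_actions):
            gain = sum(
                prior[s] * policy[s][a]
                * (receiver_utility[s][a] - receiver_utility[s][a_dev])
                for s in range(n_states)
            )
            if gain < -1e-9:
                return False
    return True

# Two states, two actions; the receiver prefers matching the state.
prior = [0.3, 0.7]
u_receiver = [[1.0, 0.0],   # state 0: action 0 is better for the receiver
              [0.0, 1.0]]   # state 1: action 1 is better
full_revelation = [[1.0, 0.0],
                   [0.0, 1.0]]
print(is_bic(prior, full_revelation, u_receiver))
```

Full revelation (recommending the receiver's favorite action in each state) always satisfies obedience, whereas a babbling policy that always recommends the same action here does not; the optimization the abstract describes searches over all policies passing this check.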
Algorithmic Aspects of Private Bayesian Persuasion
We consider a multi-receiver Bayesian persuasion model where an informed sender tries to persuade a group of receivers to take a certain action. The state of nature is known to the sender, but it is unknown to the receivers. The sender is allowed to commit to a signaling policy where she sends a private signal to every receiver. This work studies the computational aspects of finding a signaling policy that maximizes the sender's revenue.
We show that if the sender's utility is a submodular function of the set of receivers that take the desired action, then we can efficiently find a signaling policy whose revenue is at least (1-1/e) times the optimal. We also prove that approximating the sender's optimal revenue by a factor better than (1-1/e) is NP-hard and, hence, the developed approximation guarantee is essentially tight. When the sender's utility is a function of the number of receivers that take the desired action (i.e., the utility function is anonymous), we show that an optimal signaling policy can be computed in polynomial time. Our results are based on an interesting connection between the Bayesian persuasion problem and the evaluation of the concave closure of a set function.
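The (1-1/e) factor is the hallmark guarantee of greedy maximization of a monotone submodular function. The toy coverage instance below illustrates that flavor of algorithm; it is not the paper's signaling construction, and the sets are illustrative assumptions.

```python
# Hedged sketch of greedy maximization of a monotone submodular set function,
# the approximation flavor behind the abstract's (1 - 1/e) guarantee.

def greedy_submodular(ground, f, k):
    """Pick k elements greedily by marginal value of the set function f."""
    chosen = set()
    for _ in range(k):
        best = max((e for e in ground if e not in chosen),
                   key=lambda e: f(chosen | {e}) - f(chosen))
        chosen.add(best)
    return chosen

# Coverage functions are monotone submodular: f(S) = size of the union.
sets = {"r1": {1, 2, 3}, "r2": {3, 4}, "r3": {4, 5, 6}, "r4": {1, 6}}
coverage = lambda S: len(set().union(*[sets[name] for name in S]))

picked = greedy_submodular(sets.keys(), coverage, 2)
print(picked, coverage(picked))
```

On this instance the greedy choice happens to be optimal (covering all six items); in general the classic Nemhauser-Wolsey-Fisher analysis guarantees at least a (1 - 1/e) fraction of the optimum.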
Informational Substitutes
We propose definitions of substitutes and complements for pieces of
information ("signals") in the context of a decision or optimization problem,
with game-theoretic and algorithmic applications. In a game-theoretic context,
substitutes capture diminishing marginal value of information to a rational
decision maker. We use the definitions to address the question of how and when
information is aggregated in prediction markets. Substitutes characterize
"best-possible" equilibria with immediate information aggregation, while
complements characterize "worst-possible", delayed aggregation. Game-theoretic
applications also include settings such as crowdsourcing contests and Q&A
forums. In an algorithmic context, where substitutes capture diminishing
marginal improvement of information to an optimization problem, substitutes
imply efficient approximation algorithms for a very general class of (adaptive)
information acquisition problems.
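The set function underlying these definitions, the value of information from observing a subset of signals, can be computed directly for a tiny decision problem. The sketch below uses illustrative signal accuracies (not from the paper); it shows marginal values are always nonnegative, while substitutability further requires them to be diminishing.

```python
# Hedged sketch: the value-of-information set function for guessing a binary
# state after observing a subset of conditionally independent noisy signals.
# Signal accuracies are illustrative assumptions.
from itertools import product

PRIOR = {0: 0.5, 1: 0.5}
ACCURACY = [0.9, 0.75, 0.6]  # P(observation = state) for each signal
COND = [{s: {o: (p if o == s else 1 - p) for o in (0, 1)} for s in (0, 1)}
        for p in ACCURACY]

def value(signal_set):
    """Expected payoff of guessing the state optimally after observing the
    given subset of signals: sum over realizations of the max joint weight."""
    ids = sorted(signal_set)
    total = 0.0
    for obs in product((0, 1), repeat=len(ids)):
        joint = {s: PRIOR[s] for s in (0, 1)}
        for i, o in zip(ids, obs):
            for s in (0, 1):
                joint[s] *= COND[i][s][o]
        total += max(joint.values())  # guess the likelier state
    return total

for i in range(3):
    alone = value({i}) - value(set())
    last = value({0, 1, 2}) - value({0, 1, 2} - {i})
    print(f"signal {i}: marginal alone {alone:.4f}, marginal last {last:.4f}")
```

Comparing each signal's marginal value when added first versus last is exactly the diminishing-returns question the substitutes definition formalizes.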
In tandem with these broad applications, we examine the structure and design
of informational substitutes and complements. They have equivalent, intuitive
definitions from disparate perspectives: submodularity, geometry, and
information theory. We also consider the design of scoring rules or
optimization problems so as to encourage substitutability or complementarity,
with positive and negative results. Taken as a whole, the results give some
evidence that, in parallel with substitutable items, informational substitutes
play a natural conceptual and formal role in game theory and algorithms.
Comment: Full version of FOCS 2016 paper. Single-column, 61 pages (48 main
text, 13 references and appendix).
Access to Population-Level Signaling as a Source of Inequality
We identify and explore differential access to population-level signaling
(also known as information design) as a source of unequal access to
opportunity. A population-level signaler has potentially noisy observations of
a binary type for each member of a population and, based on this, produces a
signal about each member. A decision-maker infers types from signals and
accepts those individuals whose type is high in expectation. We assume the
signaler of the disadvantaged population reveals her observations to the
decision-maker, whereas the signaler of the advantaged population forms signals
strategically. We study the expected utility of the populations as measured by
the fraction of accepted members, as well as the false positive rates (FPR) and
false negative rates (FNR).
We first show the intuitive results that for a fixed environment, the
advantaged population has higher expected utility, higher FPR, and lower FNR,
than the disadvantaged one (despite having identical population quality), and
that more accurate observations improve the expected utility of the advantaged
population while harming that of the disadvantaged one. We next explore the
introduction of a publicly-observable signal, such as a test score, as a
potential intervention. Our main finding is that this natural intervention,
intended to reduce the inequality between the populations' utilities, may
actually exacerbate it in settings where observations and test scores are
noisy.
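The asymmetry between the two signalers can be sketched in closed form for a binary type and a symmetric noisy observation. The prior, accuracy, and acceptance threshold below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: the disadvantaged population's signaler reveals its noisy
# observation; the advantaged population's signaler pools the largest possible
# fraction of low observations with the high ones while keeping the
# decision-maker's posterior at the acceptance threshold of 1/2.

Q_HIGH = 0.4      # P(type = high), identical in both populations
ACCURACY = 0.8    # P(observation = type)

# Joint probabilities of (type, observation).
p_h_obsh = Q_HIGH * ACCURACY
p_h_obsl = Q_HIGH * (1 - ACCURACY)
p_l_obsh = (1 - Q_HIGH) * (1 - ACCURACY)
p_l_obsl = (1 - Q_HIGH) * ACCURACY

# Truthful (disadvantaged) signaler: with these numbers only the high
# observation induces a posterior P(high) >= 1/2, so only it is accepted.
truthful_accept = p_h_obsh + p_l_obsh

# Strategic (advantaged) signaler: recommend acceptance for every high
# observation, plus the largest fraction x of low observations such that the
# acceptance pool still has at least as much high-type as low-type mass.
x = min(1.0, (p_h_obsh - p_l_obsh) / (p_l_obsl - p_h_obsl))
strategic_accept = truthful_accept + x * (p_h_obsl + p_l_obsl)

print(f"truthful acceptance:  {truthful_accept:.2f}")
print(f"strategic acceptance: {strategic_accept:.2f}")
```

With these numbers the advantaged population's acceptance rate rises from 0.44 to 0.72 despite identical population quality, the gap the abstract identifies as a source of inequality.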
Troubles with Bayesianism: An introduction to the psychological immune system
A Bayesian mind is, at its core, a rational mind. Bayesianism is thus well-suited to predict and explain mental processes that best exemplify our ability to be rational. However, evidence from belief acquisition and change appears to show that we do not acquire and update information in a Bayesian way. Instead, the principles of belief acquisition and updating seem grounded in maintaining a psychological immune system rather than in approximating
a Bayesian processor.
Mixture Selection, Mechanism Design, and Signaling
We pose and study a fundamental algorithmic problem which we term mixture
selection, arising as a building block in a number of game-theoretic
applications: Given a function g from the n-dimensional hypercube to the
bounded interval [-1,1], and an n × m matrix A with bounded entries,
maximize g(Ax) over x in the m-dimensional simplex. This problem arises
naturally when one seeks to design a lottery over items for sale in an auction,
or craft the posterior beliefs for agents in a Bayesian game through the
provision of information (a.k.a. signaling).
We present an approximation algorithm for this problem when g
simultaneously satisfies two smoothness properties: Lipschitz continuity with
respect to the L∞ norm, and noise stability. The latter notion, which
we define and tailor to our setting, controls the degree to which
low-probability errors in the inputs of g can impact its output. When g is
both O(1)-Lipschitz continuous and O(1)-stable, we obtain an (additive)
PTAS for mixture selection. We also show that neither assumption suffices by
itself for an additive PTAS, and both assumptions together do not suffice for
an additive FPTAS.
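The objective's shape can be illustrated by brute force on a tiny instance; the paper's PTAS is far cleverer, and the function g, matrix A, and grid step below are illustrative assumptions only.

```python
# Hedged sketch of mixture selection: maximize g(Ax) over the probability
# simplex by grid search. Here g = min over coordinates, which is 1-Lipschitz
# with respect to the L-infinity norm; A and the grid are illustrative.
from itertools import product

def simplex_grid(m, steps):
    """All points of the m-dimensional probability simplex on a 1/steps grid."""
    for parts in product(range(steps + 1), repeat=m - 1):
        if sum(parts) <= steps:
            last = steps - sum(parts)
            yield tuple(p / steps for p in parts) + (last / steps,)

# n = 2 rows, m = 3 columns.
A = [[1.0, 0.0, 0.5],
     [0.0, 1.0, 0.5]]
g = min  # maps the rows' values to their minimum

def objective(x):
    y = [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]
    return g(y)

best_x = max(simplex_grid(3, 20), key=objective)
print(best_x, objective(best_x))
```

Because each row of A sums against a probability vector, the two coordinates of Ax sum to 1 here, so the best achievable min is 0.5; the grid search recovers it. Noise stability is what licenses replacing the full simplex with such a coarse, low-support grid.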
We apply our algorithm to different game-theoretic applications from
mechanism design and optimal signaling. We make progress on a number of open
problems suggested in prior work by easily reducing them to mixture selection:
we resolve an important special case of the small-menu lottery design problem
posed by Dughmi, Han, and Nisan; we resolve the problem of revenue-maximizing
signaling in Bayesian second-price auctions posed by Emek et al. and Miltersen
and Sheffet; we design a quasipolynomial-time approximation scheme for the
optimal signaling problem in normal form games suggested by Dughmi; and we
design an approximation algorithm for the optimal signaling problem in the
voting model of Alonso and Câmara.