Imitative Follower Deception in Stackelberg Games
Information uncertainty is one of the major challenges facing applications of
game theory. In the context of Stackelberg games, various approaches have been
proposed to deal with the leader's incomplete knowledge about the follower's
payoffs, typically by gathering information from the leader's interaction with
the follower. Unfortunately, these approaches rely crucially on the assumption
that the follower will not strategically exploit this information asymmetry,
i.e., the follower behaves truthfully during the interaction according to their
actual payoffs. As we show in this paper, the follower may have strong
incentives to deceitfully imitate the behavior of a different follower type
and, in doing so, benefit significantly by inducing the leader to choose a
highly suboptimal strategy. This raises a fundamental question: how
to design a leader strategy in the presence of a deceitful follower? To answer
this question, we put forward a basic model of Stackelberg games with
(imitative) follower deception and show that the leader is indeed able to
reduce the loss due to follower deception with carefully designed policies. We
then provide a systematic study of the problem of computing the optimal leader
policy and draw a relatively complete picture of the complexity landscape;
essentially matching positive and negative complexity results are provided for
natural variants of the model. Our intractability results are in sharp contrast
to the situation with no deception, where the leader's optimal strategy can be
computed in polynomial time, and thus illustrate the intrinsic difficulty of
handling follower deception. Through simulations we also examine the benefit of
considering follower deception in randomly generated games.
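The imitation incentive described above can be illustrated with a minimal sketch. All payoff matrices below are hypothetical and chosen only to exhibit the effect; the leader is restricted to pure commitments for simplicity, and this is not the paper's model or algorithm.

```python
# Minimal sketch of imitative follower deception in a Stackelberg game:
# two leader actions, two follower actions, two follower types. The
# leader learns the follower's type from observed best responses and
# then commits to the strategy that is optimal against that type.
# All payoffs are hypothetical.

U_LEADER = [[3, 0],   # leader utility: rows = leader action, cols = follower action
            [1, 2]]
U_TYPE = {
    "theta1": [[0, 5], [2, 0]],   # the follower's true type
    "theta2": [[1, 0], [0, 1]],   # the type the follower can imitate
}

def best_response(u_follower, leader_action):
    """Follower action maximizing its utility given the leader's action."""
    row = u_follower[leader_action]
    return max(range(len(row)), key=lambda f: row[f])

def optimal_commitment(u_follower):
    """Leader's optimal pure commitment against a follower of this type."""
    return max(range(len(U_LEADER)),
               key=lambda l: U_LEADER[l][best_response(u_follower, l)])

true_type = U_TYPE["theta1"]

# Truthful follower: the leader optimizes against the true type theta1.
l_truth = optimal_commitment(true_type)
f_truth = best_response(true_type, l_truth)

# Deceptive follower: imitates theta2 during the learning phase, so the
# leader commits to the strategy optimal against theta2; the follower
# then plays its *true* best response to that commitment.
l_deceived = optimal_commitment(U_TYPE["theta2"])
f_deceived = best_response(true_type, l_deceived)

print("truthful:  follower payoff", true_type[l_truth][f_truth],
      "| leader payoff", U_LEADER[l_truth][f_truth])
print("deceptive: follower payoff", true_type[l_deceived][f_deceived],
      "| leader payoff", U_LEADER[l_deceived][f_deceived])
```

In this instance, truthful play gives the follower payoff 2 and the leader payoff 1, while imitating theta2 raises the follower's payoff to 5 and drops the leader's to 0, which is the deception gap the abstract describes.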
Access to Population-Level Signaling as a Source of Inequality
We identify and explore differential access to population-level signaling
(also known as information design) as a source of unequal access to
opportunity. A population-level signaler has potentially noisy observations of
a binary type for each member of a population and, based on this, produces a
signal about each member. A decision-maker infers types from signals and
accepts those individuals whose type is high in expectation. We assume the
signaler of the disadvantaged population reveals her observations to the
decision-maker, whereas the signaler of the advantaged population forms signals
strategically. We study the expected utility of the populations as measured by
the fraction of accepted members, as well as the false positive rates (FPR) and
false negative rates (FNR).
We first show the intuitive results that for a fixed environment, the
advantaged population has higher expected utility, higher FPR, and lower FNR,
than the disadvantaged one (despite having identical population quality), and
that more accurate observations improve the expected utility of the advantaged
population while harming that of the disadvantaged one. We next explore the
introduction of a publicly-observable signal, such as a test score, as a
potential intervention. Our main finding is that this natural intervention,
intended to reduce the inequality between the populations' utilities, may
actually exacerbate it in settings where observations and test scores are
noisy.
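The gap between the two populations can be seen in a toy instantiation of the model. All numbers below (prior, observation accuracy, acceptance threshold) are hypothetical, and the strategic signaler is sketched as a standard persuasion scheme over its noisy observation, not as the paper's construction.

```python
# Toy two-signaler model with hypothetical numbers: each member is
# high-type with prior p, the signaler's observation matches the true
# type with probability a, and the decision-maker accepts a member iff
# the posterior probability of a high type is at least 1/2. The
# disadvantaged population's signaler reveals its observation; the
# advantaged population's signaler garbles it strategically.

p, a = 0.4, 0.8

# Joint probabilities of (true type, observation).
hh = p * a              # high type, observed high
hl = p * (1 - a)        # high type, observed low
lh = (1 - p) * (1 - a)  # low type, observed high
ll = (1 - p) * a        # low type, observed low

# Truthful signaler: signal = observation. Here P(high | obs high) is
# about 0.73, so the decision-maker accepts on "high", rejects on "low".
truthful_util = hh + lh          # fraction of the population accepted
truthful_fpr = lh / (1 - p)
truthful_fnr = hl / p

# Strategic signaler: recommend acceptance always on a high observation
# and with probability q on a low one, where q makes the posterior on an
# "accept" recommendation exactly 1/2 (the obedience constraint):
#   hh + q * hl = 0.5 * ((hh + lh) + q * (hl + ll))
q = (hh - lh) / (ll - hl)   # assumes hh >= lh and ll > hl; clamp to [0, 1] in general
strategic_util = (hh + lh) + q * (hl + ll)
strategic_fpr = (lh + q * ll) / (1 - p)
strategic_fnr = (1 - q) * hl / p

print(f"truthful:  util={truthful_util:.2f} FPR={truthful_fpr:.2f} FNR={truthful_fnr:.2f}")
print(f"strategic: util={strategic_util:.2f} FPR={strategic_fpr:.2f} FNR={strategic_fnr:.2f}")
```

With these numbers the strategic (advantaged) population has 72% of its members accepted versus 44% for the truthful one, with higher FPR (0.60 vs 0.20) and lower FNR (0.10 vs 0.20), matching the qualitative comparison stated above.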
Algorithmic Bayesian Persuasion
Persuasion, defined as the act of exploiting an informational advantage in
order to effect the decisions of others, is ubiquitous. Indeed, persuasive
communication has been estimated to account for almost a third of all economic
activity in the US. This paper examines persuasion through a computational
lens, focusing on what is perhaps the most basic and fundamental model in this
space: the celebrated Bayesian persuasion model of Kamenica and Gentzkow. Here
there are two players, a sender and a receiver. The receiver must take one of a
number of actions with a priori unknown payoff, and the sender has access to
additional information regarding the payoffs. The sender can commit to
revealing a noisy signal regarding the realization of the payoffs of various
actions, and would like to do so in a way that maximizes her own payoff, assuming a
perfectly rational receiver.
We examine the sender's optimization task in three of the most natural input
models for this problem, and essentially pin down its computational complexity
in each. When the payoff distributions of the different actions are i.i.d. and
given explicitly, we exhibit a polynomial-time (exact) algorithm, and a
"simple" (1 - 1/e)-approximation algorithm. Our optimal scheme for the i.i.d.
setting involves an analogy to auction theory, and makes use of Border's
characterization of the space of reduced-forms for single-item auctions. When
action payoffs are independent but non-identical with marginal distributions
given explicitly, we show that it is #P-hard to compute the optimal expected
sender utility. Finally, we consider a general (possibly correlated) joint
distribution of action payoffs presented by a black box sampling oracle, and
exhibit a fully polynomial-time approximation scheme (FPTAS) with a bi-criteria
guarantee. We show that this result is the best possible in the black-box model
for information-theoretic reasons.
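The sender-receiver interaction described above can be made concrete in its simplest binary form. The numbers below are hypothetical and this is not the paper's algorithm: the state is "high" with prior p, the receiver accepts iff the posterior probability of "high" is at least 1/2, and the sender wants acceptance in every state.

```python
# Minimal binary Bayesian persuasion sketch (hypothetical numbers).

p = 0.3  # prior probability that the state is high

# Optimal scheme: always recommend acceptance when the state is high,
# and with probability q when it is low, with q chosen so the posterior
# on an "accept" recommendation is exactly 1/2 (receiver obedience):
#   p / (p + (1 - p) * q) = 1/2   =>   q = p / (1 - p)
q = min(1.0, p / (1 - p))

pr_accept = p + (1 - p) * q   # sender's expected payoff: min(1, 2p)
posterior = p / pr_accept     # posterior P(high | "accept")

print(f"Pr[accept] = {pr_accept:.2f}, posterior on accept = {posterior:.2f}")
```

The sender's commitment power is what makes this work: by garbling her information just enough to keep the "accept" recommendation credible, she doubles her acceptance probability relative to full revelation (0.6 instead of 0.3 here).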
Deception through Half-Truths
Deception is a fundamental issue across a diverse array of settings, from
cybersecurity, where decoys (e.g., honeypots) are an important tool, to
politics, which can feature politically motivated "leaks" and fake news about
candidates. Typical considerations of deception view it as providing false
information. However, just as important but less frequently studied is a more
tacit form in which information is strategically hidden or leaked. We consider the
problem of how much an adversary can affect a principal's decision by
"half-truths", that is, by masking or hiding bits of information, when the
principal is oblivious to the presence of the adversary. The principal's
problem can be modeled as one of predicting future states of variables in a
dynamic Bayes network, and we show that, while theoretically the principal's
decisions can be made arbitrarily bad, the optimal attack is NP-hard to
approximate, even under strong assumptions favoring the attacker. However, we
also describe an important special case where the dependency of future states
on past states is additive, in which we can efficiently compute an
approximately optimal attack. Moreover, in networks with a linear transition
function, we can solve the problem optimally in polynomial time.
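The efficiently solvable additive case can be sketched with a toy example. All numbers below are hypothetical: the principal's prediction is additive in observed binary variables, and a masked variable is imputed by its prior mean, so an adversary who wants to lower the prediction simply hides the bits whose imputation reduces it the most.

```python
# Toy "half-truth" masking attack against an additive predictor
# (hypothetical numbers, not the paper's dynamic Bayes network model).

weights = [2.0, -1.0, 3.0, 0.5]   # prediction = sum of w_i * x_i
priors  = [0.5, 0.5, 0.5, 0.5]    # principal's prior mean for each bit
x       = [1,   0,   1,   1]      # true realization
k = 2                              # adversary may mask at most k bits

# Shift in the prediction if bit i is masked (imputed by its prior mean).
shift = [w * (pr - xi) for w, pr, xi in zip(weights, priors, x)]

# In the additive case the optimal attack decomposes: mask the k bits
# with the most negative shifts, skipping bits whose masking would help.
masked = sorted(range(len(x)), key=lambda i: shift[i])[:k]
masked = [i for i in masked if shift[i] < 0]

honest = sum(w * xi for w, xi in zip(weights, x))
attacked = honest + sum(shift[i] for i in masked)
print(f"honest prediction = {honest}, after masking bits {masked} = {attacked}")
```

Because each masked bit contributes an independent additive shift, this greedy choice is exactly optimal here, which is the structure that makes the additive special case tractable while the general problem is NP-hard to approximate.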