Asymptotically Truthful Equilibrium Selection in Large Congestion Games
Studying games in the complete information model makes them analytically
tractable. However, large player interactions are more realistically
modeled as games of incomplete information, where players may know little to
nothing about the types of other players. Unfortunately, games in incomplete
information settings lose many of the nice properties of complete information
games: the quality of equilibria can become worse, the equilibria lose their
ex-post properties, and coordinating on an equilibrium becomes even more
difficult. Because of these problems, we would like to study games of
incomplete information, but still implement equilibria of the complete
information game induced by the (unknown) realized player types.
This problem was recently studied by Kearns et al. and solved in large games
by means of introducing a weak mediator: their mediator took as input reported
types of players, and output suggested actions which formed a correlated
equilibrium of the underlying game. Players had the option to play
independently of the mediator, or ignore its suggestions, but crucially, if
they decided to opt-in to the mediator, they did not have the power to lie
about their type. In this paper, we rectify this deficiency in the setting of
large congestion games. We give, in a sense, the weakest possible mediator: it
cannot enforce participation, verify types, or enforce its suggestions.
Moreover, our mediator implements a Nash equilibrium of the complete
information game. We show that it is an (asymptotic) ex-post equilibrium of the
incomplete information game for all players to use the mediator honestly, and
that when they do so, they end up playing an approximate Nash equilibrium of
the induced complete information game. In particular, truthful use of the
mediator is a Bayes-Nash equilibrium in any Bayesian game for any prior.
Comment: The conference version of this paper appeared in EC 2014. This
manuscript has been merged and subsumed by the preprint "Robust Mediators in
Large Games": http://arxiv.org/abs/1512.0269
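The complete-information benchmark these mediators target can be made concrete with a toy sketch: atomic congestion games admit pure Nash equilibria via Rosenthal's potential function, so plain best-response dynamics converge to one. This is an illustrative baseline only, not the paper's mediator; the function name and the two-resource example are invented for illustration.

```python
def best_response_dynamics(n_players, cost_fns, choices):
    """Run best-response dynamics in an atomic congestion game.
    cost_fns[r](load) is the per-player cost of resource r at a given
    load. Rosenthal's potential function guarantees these dynamics
    terminate at a pure Nash equilibrium in any finite congestion game.
    (Illustrative baseline only -- not the paper's mediator.)
    """
    resources = range(len(cost_fns))
    improved = True
    while improved:
        improved = False
        for i in range(n_players):
            loads = [choices.count(r) for r in resources]

            def cost_if(r):
                # Load on r if player i deviates there (or stays put).
                load = loads[r] + (0 if choices[i] == r else 1)
                return cost_fns[r](load)

            best = min(resources, key=cost_if)
            if cost_if(best) < cost_if(choices[i]):
                choices[i] = best
                improved = True
    return choices

# Three players, two resources with costs load and 2*load; all start on 0.
eq = best_response_dynamics(3, [lambda x: x, lambda x: 2 * x], [0, 0, 0])
```

In the equilibrium reached here, one player moves to the more expensive resource because congestion on the cheap one outweighs its lower price; no player can then improve by deviating.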
Approximately Stable, School Optimal, and Student-Truthful Many-to-One Matchings (via Differential Privacy)
We present a mechanism for computing asymptotically stable school optimal
matchings, while guaranteeing that it is an asymptotic dominant strategy for
every student to report their true preferences to the mechanism. Our main tool
in this endeavor is differential privacy: we give an algorithm that coordinates
a stable matching using differentially private signals, which lead to our
truthfulness guarantee. This is the first setting in which it is known how to
achieve nontrivial truthfulness guarantees for students when computing school
optimal matchings, assuming worst-case preferences (for schools and students)
in large markets.
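As background, the non-private baseline here is school-proposing deferred acceptance (Gale-Shapley), which computes the school-optimal stable matching in a many-to-one market. The paper's contribution is to coordinate such a matching through differentially private signals, which this sketch does not attempt; the function name and the two-school example are invented for illustration.

```python
def school_propose_da(school_prefs, student_prefs, capacity):
    """School-proposing deferred acceptance (many-to-one).
    school_prefs[s] : students ranked by school s, most preferred first
    student_prefs[t]: schools ranked by student t, most preferred first
    capacity[s]     : number of seats at school s
    Returns a student -> school map (the school-optimal stable matching).
    Plain non-private baseline, not the paper's mechanism.
    """
    rank = {t: {s: i for i, s in enumerate(prefs)}
            for t, prefs in student_prefs.items()}
    next_proposal = {s: 0 for s in school_prefs}   # pointer into pref list
    held = {t: None for t in student_prefs}        # tentative assignments
    free_seats = dict(capacity)
    active = True
    while active:
        active = False
        for s, prefs in school_prefs.items():
            # A school with spare seats proposes down its list; each
            # student is proposed to at most once by each school.
            while free_seats[s] > 0 and next_proposal[s] < len(prefs):
                t = prefs[next_proposal[s]]
                next_proposal[s] += 1
                cur = held[t]
                if cur is None:                    # student was unmatched
                    held[t] = s
                    free_seats[s] -= 1
                    active = True
                elif rank[t][s] < rank[t][cur]:    # student prefers s: bump
                    held[t] = s
                    free_seats[s] -= 1
                    free_seats[cur] += 1           # bumped school retries
                    active = True
    return held

# Two one-seat schools and two students with conflicting first choices.
match = school_propose_da(
    {'A': ['x', 'y'], 'B': ['x', 'y']},   # schools rank students
    {'x': ['B', 'A'], 'y': ['A', 'B']},   # students rank schools
    {'A': 1, 'B': 1})
```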
Buying Private Data without Verification
We consider the problem of designing a survey to aggregate non-verifiable
information from a privacy-sensitive population: an analyst wants to compute
some aggregate statistic from the private bits held by each member of a
population, but cannot verify the correctness of the bits reported by
participants in his survey. Individuals in the population are strategic agents
with a cost for privacy, i.e., they not only account for the payments they
expect to receive from the mechanism, but also their privacy costs from any
information revealed about them by the mechanism's outcome---the computed
statistic as well as the payments---to determine their utilities. How can the
analyst design payments to obtain an accurate estimate of the population
statistic when individuals strategically decide both whether to participate and
whether to truthfully report their sensitive information?
We design a differentially private peer-prediction mechanism that supports
accurate estimation of the population statistic as a Bayes-Nash equilibrium in
settings where agents have explicit preferences for privacy. The mechanism
requires knowledge of the marginal prior distribution on bits, but does
not need full knowledge of the marginal distribution on the costs,
instead requiring only an approximate upper bound. Our mechanism guarantees
differential privacy to each agent against any adversary who can
observe the statistical estimate output by the mechanism, as well as the
payments made to the other agents. Finally, we show that with
slightly more structured assumptions on the privacy cost functions of each
agent, the cost of running the survey goes to 0 as the number of agents
diverges.
Comment: Appears in EC 201
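The privacy half of such a mechanism can be illustrated with the standard Laplace mechanism: a counting query over reported bits has sensitivity 1, so adding Laplace(1/epsilon) noise to the count yields an epsilon-differentially private estimate. The sketch below shows only that estimation step under those standard assumptions; the peer-prediction payments, which drive truthful reporting in the paper, are not modeled, and the function names are invented.

```python
import math
import random

def laplace_noise(scale, rng=random):
    # Inverse-CDF sample from a centered Laplace distribution.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_fraction_of_ones(bits, epsilon, rng=random):
    """epsilon-DP estimate of the fraction of 1-bits.
    Changing one agent's bit changes the count by at most 1, so
    Laplace(1/epsilon) noise on the count suffices (Laplace mechanism).
    The paper couples such an estimate with peer-prediction payments,
    which are not modeled here.
    """
    noisy_count = sum(bits) + laplace_noise(1.0 / epsilon, rng)
    return noisy_count / len(bits)

# 1000 agents, half holding a 1-bit, with privacy parameter epsilon = 1.
est = private_fraction_of_ones([1, 0] * 500, 1.0, random.Random(0))
```

Because the noise scale is fixed while the count grows linearly in the population, the relative error of the estimate vanishes as the number of agents grows, mirroring the large-population regime the paper works in.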
Privacy-Preserving Public Information for Sequential Games
In settings with incomplete information, players can find it difficult to
coordinate to find states with good social welfare. For example, in financial
settings, if a collection of financial firms have limited information about
each other's strategies, some large number of them may choose the same
high-risk investment in hopes of high returns. While this might be acceptable
in some cases, the economy can be hurt badly if many firms make investments in
the same risky market segment and it fails. One reason why many firms might end
up choosing the same segment is that they do not have information about other
firms' investments (imperfect information may lead to `bad' game states).
Directly reporting all players' investments, however, raises confidentiality
concerns for both individuals and institutions.
In this paper, we explore whether information about the game-state can be
publicly announced in a manner that maintains the privacy of the actions of the
players, and still suffices to deter players from reaching bad game-states. We
show that in many games of interest, it is possible for players to avoid these
bad states with the help of privacy-preserving, publicly-announced information.
We model the behavior of players in this imperfect information setting in two
ways -- greedy and undominated strategic behavior -- and we prove guarantees on
social welfare that certain kinds of privacy-preserving information can help
attain. Furthermore, we design a counter with improved privacy guarantees under
continual observation.
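For context, the classical counter under continual observation is the binary-tree mechanism of Chan, Shi, and Song and of Dwork et al.: each dyadic interval of the input stream receives one cached Laplace noise draw, and each released running count sums O(log T) noisy intervals. The sketch below implements that classical baseline under those standard assumptions, not the improved counter of this paper.

```python
import math
import random

def _laplace(scale, rng):
    # Inverse-CDF sample from a centered Laplace distribution.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def tree_counter(stream, epsilon, rng=None):
    """Binary-tree counter for counting under continual observation
    (classical baseline; the paper's improved counter is not reproduced
    here). Each element of the stream contributes to at most log2(T)
    dyadic blocks, and each block gets one cached Laplace(levels/epsilon)
    draw, so releasing every prefix count is epsilon-DP by composition.
    """
    rng = rng or random.Random()
    T = len(stream)
    levels = T.bit_length()
    scale = levels / epsilon            # privacy budget split across levels
    prefix = [0]                        # exact prefix sums, internal only
    for x in stream:
        prefix.append(prefix[-1] + x)
    node_noise, released = {}, []
    for t in range(1, T + 1):
        est, start = 0.0, 0
        # Decompose [0, t) into dyadic blocks using t's binary digits.
        for j in range(levels - 1, -1, -1):
            if t & (1 << j):
                key = (j, start >> j)   # block [start, start + 2^j)
                if key not in node_noise:
                    node_noise[key] = _laplace(scale, rng)
                est += prefix[start + (1 << j)] - prefix[start] + node_noise[key]
                start += 1 << j
        released.append(est)
    return released

# A stream of 100 events, one per step; releases 100 noisy running counts.
counts = tree_counter([1] * 100, epsilon=1.0, rng=random.Random(0))
```

Caching one noise draw per dyadic block is the key design choice: each release reuses the same noisy block values, so error per release grows only polylogarithmically in T rather than linearly.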
Private Pareto Optimal Exchange
We consider the problem of implementing an individually rational,
asymptotically Pareto optimal allocation in a barter-exchange economy where
agents are endowed with goods and have preferences over the goods of others,
but may not use money as a medium of exchange. Because one of the most
important instantiations of such economies is kidney exchange -- where the
"input" to the problem consists of sensitive patient medical records -- we ask
to what extent such exchanges can be carried out while providing formal privacy
guarantees to the participants. We show that individually rational allocations
cannot achieve any non-trivial approximation to Pareto optimality if carried
out under the constraint of differential privacy -- or even the relaxation of
joint differential privacy, under which it is known that asymptotically
optimal allocations can be computed in two-sided markets, where there is a
distinction between buyers and sellers and we are concerned only with privacy
of the buyers [Matching]. We therefore consider a further relaxation that
we call marginal differential privacy -- which promises, informally,
that the privacy of every agent is protected from every other agent so long
as that other agent does not collude or share allocation information with
other agents. We show that, under marginal differential privacy, it is
possible to compute an individually rational and asymptotically Pareto
optimal allocation in such exchange economies.
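As a point of reference, the classical money-free exchange mechanism in the Shapley-Scarf model is Gale's top trading cycles, which is individually rational and Pareto optimal absent any privacy constraint; the paper's question is how much of this survives under (marginal) differential privacy. The sketch below is that classical baseline, with a hypothetical three-agent example.

```python
def top_trading_cycles(prefs):
    """Gale's top trading cycles for a Shapley--Scarf barter economy.
    prefs[i] lists the agents whose endowed good agent i prefers, most
    preferred first (listing oneself guarantees individual rationality).
    Returns allocation: agent -> agent whose good they receive.
    Classical non-private baseline, not the paper's mechanism.
    """
    allocation, remaining = {}, set(prefs)
    while remaining:
        # Each remaining agent points at the owner of their favorite
        # remaining good; following pointers must enter a cycle.
        point = {i: next(j for j in prefs[i] if j in remaining)
                 for i in remaining}
        seen, i = set(), next(iter(remaining))
        while i not in seen:            # walk until a node repeats;
            seen.add(i)                 # that node lies on a cycle
            i = point[i]
        j = i
        while True:                     # trade around the cycle, remove it
            allocation[j] = point[j]
            remaining.discard(j)
            j = point[j]
            if j == i:
                break
    return allocation

# Agents 0 and 1 want each other's goods; agent 2 keeps their own.
swap = top_trading_cycles({0: [1, 0, 2], 1: [0, 1, 2], 2: [2, 0, 1]})
```

The allocation-information caveat in marginal differential privacy is visible even here: each agent's final assignment directly reflects where others pointed, which is exactly why the paper must weaken the privacy guarantee for colluding agents.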