Truthful Mechanisms for Agents that Value Privacy
Recent work has constructed economic mechanisms that are both truthful and
differentially private. In these mechanisms, privacy is treated separately from
truthfulness; it is not incorporated in players' utility functions (and
doing so has been shown to lead to non-truthfulness in some cases). In this
work, we propose a new, general way of modelling privacy in players' utility
functions. Specifically, we only assume that if an outcome o has the property
that any report of player i would have led to o with approximately the same
probability, then o has small privacy cost to player i. We give three
mechanisms that are truthful with respect to our modelling of privacy: for an
election between two candidates, for a discrete version of the facility
location problem, and for a general social choice problem with discrete
utilities (via a VCG-like mechanism). As the number n of players increases,
the social welfare achieved by our mechanisms approaches optimal (as a fraction
of n).
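The abstract describes the election mechanism only at a high level. As an illustration of the standard randomized-response primitive that differentially private two-candidate elections can build on (the function name and interface below are ours, not the paper's), a minimal Python sketch:

```python
import math
import random

def private_two_candidate_election(votes, epsilon):
    """Illustrative sketch (names are ours): elect the majority of
    binary ballots while giving each voter epsilon-differential
    privacy via randomized response."""
    # Each voter reports truthfully with probability e^eps / (e^eps + 1)
    # and reports the flipped ballot otherwise, so any single report
    # changes the output distribution by at most a factor of e^eps.
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    reports = [v if random.random() < p_truth else 1 - v for v in votes]
    # Taking the majority is post-processing, which preserves privacy;
    # with many voters the noisy majority matches the true majority
    # with high probability, so welfare approaches optimal as n grows.
    return int(2 * sum(reports) > len(reports))
```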
Selling Privacy at Auction
We initiate the study of markets for private data, through the lens of
differential privacy. Although the purchase and sale of private data has
already begun on a large scale, a theory of privacy as a commodity is missing.
In this paper, we propose to build such a theory. Specifically, we consider a
setting in which a data analyst wishes to buy information from a population
from which he can estimate some statistic. The analyst wishes to obtain an
accurate estimate cheaply. On the other hand, the owners of the private data
experience some cost for their loss of privacy, and must be compensated for
this loss. Agents are selfish, and wish to maximize their profit, so our goal
is to design truthful mechanisms. Our main result is that such auctions can
naturally be viewed and optimally solved as variants of multi-unit procurement
auctions. Based on this result, we derive auctions for two natural settings
which are optimal up to small constant factors:
1. In the setting in which the data analyst has a fixed accuracy goal, we
show that an application of the classic Vickrey auction achieves the analyst's
accuracy goal while minimizing his total payment.
2. In the setting in which the data analyst has a fixed budget, we give a
mechanism which maximizes the accuracy of the resulting estimate while
guaranteeing that the resulting sum of payments does not exceed the analyst's budget.
In both cases, our comparison class is the set of envy-free mechanisms, which
correspond to the natural class of fixed-price mechanisms in our setting.
In both of these results, we ignore the privacy cost due to possible
correlations between an individual's private data and his valuation for privacy
itself. We then show that generically, no individually rational mechanism can
compensate individuals for the privacy loss incurred due to their reported
valuations for privacy.
Comment: Extended Abstract appeared in the proceedings of EC 2011.
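The fixed-accuracy setting in item 1 is described as an application of the classic Vickrey auction, i.e., multi-unit procurement: buy from the k cheapest sellers and pay each the (k+1)-st lowest bid. A minimal sketch of that primitive, with our own simplified interface (the paper additionally derives k from the accuracy goal):

```python
def vickrey_procurement(reported_costs, k):
    """Sketch of k-unit reverse Vickrey procurement: buy from the k
    lowest-cost sellers and pay each of them the (k+1)-st lowest
    reported cost, a threshold price that makes truthful cost
    reporting a dominant strategy."""
    if k >= len(reported_costs):
        raise ValueError("need at least k + 1 sellers to set a price")
    order = sorted(range(len(reported_costs)), key=reported_costs.__getitem__)
    winners = order[:k]
    price = reported_costs[order[k]]  # threshold payment per winner
    return winners, price
```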
Buying Private Data without Verification
We consider the problem of designing a survey to aggregate non-verifiable
information from a privacy-sensitive population: an analyst wants to compute
some aggregate statistic from the private bits held by each member of a
population, but cannot verify the correctness of the bits reported by
participants in his survey. Individuals in the population are strategic agents
with a cost for privacy, i.e., they not only account for the payments they
expect to receive from the mechanism, but also for their privacy costs from any
information revealed about them by the mechanism's outcome (the computed
statistic as well as the payments) to determine their utilities. How can the
analyst design payments to obtain an accurate estimate of the population
statistic when individuals strategically decide both whether to participate and
whether to truthfully report their sensitive information?
We design a differentially private peer-prediction mechanism that supports
accurate estimation of the population statistic as a Bayes-Nash equilibrium in
settings where agents have explicit preferences for privacy. The mechanism
requires knowledge of the marginal prior distribution on the bits b_i, but does
not need full knowledge of the marginal distribution on the costs c_i,
instead requiring only an approximate upper bound. Our mechanism guarantees
{\epsilon}-differential privacy to each agent i against any adversary who can
observe the statistical estimate output by the mechanism, as well as the
payments made to the other agents j ≠ i. Finally, we show that with
slightly more structured assumptions on the privacy cost functions of each
agent, the cost of running the survey goes to 0 as the number of agents
diverges.
Comment: Appears in EC 2014.
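The full peer-prediction mechanism is involved; as a sketch of just its privacy layer, the published statistic can be protected with the standard Laplace mechanism. The names and interface below are ours, under the assumption that the statistic is the fraction of 1-bits:

```python
import numpy as np

def private_bit_fraction(bits, epsilon):
    """Estimate the fraction of 1-bits with epsilon-differential
    privacy. Changing any single agent's bit moves the sum by at most
    1 (sensitivity 1), so Laplace noise of scale 1/epsilon suffices."""
    noisy_sum = sum(bits) + np.random.laplace(0.0, 1.0 / epsilon)
    return noisy_sum / len(bits)
```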
Truthful Linear Regression
We consider the problem of fitting a linear model to data held by individuals
who are concerned about their privacy. Incentivizing most players to truthfully
report their data to the analyst constrains our design to mechanisms that
provide a privacy guarantee to the participants; we use differential privacy to
model individuals' privacy losses. This immediately poses a problem, as
differentially private computation of a linear model necessarily produces a
biased estimate, and existing approaches to designing mechanisms that elicit
data from privacy-sensitive individuals do not generalize well to biased
estimators.
We overcome this challenge through an appropriate design of the computation and
payment scheme.
Comment: To appear in Proceedings of the 28th Annual Conference on Learning
Theory (COLT 2015).
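The paper's payment scheme is its main contribution and is not reproduced here; the sketch below only illustrates the estimation side, computing a differentially private (and, as the abstract notes, necessarily biased) linear model by perturbing the sufficient statistics. It is a generic technique under our own bounded-data assumptions, not the paper's exact construction:

```python
import numpy as np

def dp_linear_regression(X, y, epsilon):
    """Differentially private least squares via noisy sufficient
    statistics. Assumes every feature and label lies in [-1, 1], so
    replacing one individual's row changes each entry of X^T X and
    X^T y by at most 2."""
    n, d = X.shape
    # L1 sensitivity of the d*d + d released entries, hence the scale
    # needed for epsilon-DP under the Laplace mechanism.
    scale = 2.0 * (d * d + d) / epsilon
    xtx = X.T @ X + np.random.laplace(0.0, scale, size=(d, d))
    xty = X.T @ y + np.random.laplace(0.0, scale, size=d)
    # Solving the perturbed normal equations yields a biased estimate,
    # which is precisely the difficulty the abstract highlights.
    return np.linalg.solve(xtx, xty)
```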
Redrawing the Boundaries on Purchasing Data from Privacy-Sensitive Individuals
We prove new positive and negative results concerning the existence of
truthful and individually rational mechanisms for purchasing private data from
individuals with unbounded and sensitive privacy preferences. We strengthen the
impossibility results of Ghosh and Roth (EC 2011) by extending them to a much
wider class of privacy valuations. In particular, these include privacy
valuations that are based on ({\epsilon}, {\delta})-differentially private
mechanisms for non-zero {\delta}, ones where the privacy costs are measured in
a per-database manner (rather than taking the worst case), and ones that do not
depend on the payments made to players (which might not be observable to an
adversary). To bypass this impossibility result, we study a natural special
setting where individuals have monotonic privacy valuations, which captures
common contexts where certain values for private data are expected to lead to
higher valuations for privacy (e.g. having a particular disease). We give new
mechanisms that are individually rational for all players with monotonic
privacy valuations, truthful for all players whose privacy valuations are not
too large, and accurate if there are not too many players with too-large
privacy valuations. We also prove matching lower bounds showing that in some
respects our mechanism cannot be improved significantly.