Selling Privacy at Auction
We initiate the study of markets for private data through the lens of
differential privacy. Although the purchase and sale of private data has
already begun on a large scale, a theory of privacy as a commodity is missing.
In this paper, we propose to build such a theory. Specifically, we consider a
setting in which a data analyst wishes to buy information from a population
from which he can estimate some statistic. The analyst wishes to obtain an
accurate estimate cheaply. On the other hand, the owners of the private data
experience some cost for their loss of privacy, and must be compensated for
this loss. Agents are selfish, and wish to maximize their profit, so our goal
is to design truthful mechanisms. Our main result is that such auctions can
naturally be viewed and optimally solved as variants of multi-unit procurement
auctions. Based on this result, we derive auctions for two natural settings
which are optimal up to small constant factors:
1. In the setting in which the data analyst has a fixed accuracy goal, we
show that an application of the classic Vickrey auction achieves the analyst's
accuracy goal while minimizing his total payment.
2. In the setting in which the data analyst has a fixed budget, we give a
mechanism which maximizes the accuracy of the resulting estimate while
guaranteeing that the sum of the payments does not exceed the analyst's budget.
In both cases, our comparison class is the set of envy-free mechanisms, which
correspond to the natural class of fixed-price mechanisms in our setting.
In both of these results, we ignore the privacy cost due to possible
correlations between an individual's private data and his valuation for privacy
itself. We then show that, generically, no individually rational mechanism can
compensate individuals for the privacy loss incurred due to their reported
valuations for privacy.
Comment: Extended Abstract appeared in the proceedings of EC 201
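The fixed-accuracy setting above reduces to a multi-unit Vickrey procurement auction: buy reports from the k lowest-cost owners and pay each of them the (k+1)-th lowest bid. A minimal sketch (function name and interface are illustrative, not from the paper):

```python
def vickrey_procurement(bids, k):
    """Multi-unit Vickrey procurement auction: buy from the k
    lowest-cost sellers, paying each the (k+1)-th lowest bid.
    Truthful: no seller can profit by misreporting their cost."""
    if k >= len(bids):
        raise ValueError("need more than k bidders to set a price")
    order = sorted(range(len(bids)), key=lambda i: bids[i])
    winners = order[:k]            # indices of the k cheapest sellers
    price = bids[order[k]]        # (k+1)-th lowest bid sets the uniform payment
    return winners, price
```

With reported privacy costs `[3, 1, 2, 5, 4]` and accuracy goal requiring `k = 2` reports, the mechanism buys from the two cheapest owners and pays each of them 3, the third-lowest bid; this uniform price is also what makes the outcome envy-free.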
Privacy Games: Optimal User-Centric Data Obfuscation
In this paper, we design user-centric obfuscation mechanisms that impose the
minimum utility loss for guaranteeing users' privacy. We optimize utility
subject to a joint guarantee of differential privacy (indistinguishability) and
distortion privacy (inference error). This double shield of protection limits
the information leakage through the obfuscation mechanism as well as the posterior
inference. We show that the privacy achieved through joint
differential-distortion mechanisms against optimal attacks is as large as the
maximum privacy that can be achieved by either of these mechanisms separately.
Their utility cost is also not larger than what either of the differential or
distortion mechanisms imposes. We model the optimization problem as a
leader-follower game between the designer of the obfuscation mechanism and the
potential adversary, and design adaptive mechanisms that anticipate and protect
against optimal inference algorithms. Thus, the obfuscation mechanism is
optimal against any inference algorithm.
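The two guarantees above can be checked for any concrete obfuscation channel: differential privacy bounds the worst-case log-ratio of output probabilities across secrets, while distortion privacy is the error of the adversary's optimal Bayesian (MAP) inference. A sketch with hypothetical helper names, for a channel given as a row-stochastic matrix `M[x][y]` = P(output y | secret x):

```python
import numpy as np

def dp_level(M):
    """Worst-case log-ratio of obfuscation probabilities across secrets:
    the (local) differential-privacy parameter of channel M."""
    eps = 0.0
    for y in range(M.shape[1]):
        col = M[:, y]
        pos = col[col > 0]
        if len(pos) == len(col):
            eps = max(eps, float(np.log(pos.max() / pos.min())))
        elif len(pos) > 0:
            return float("inf")   # some secret never produces y: no finite bound
    return eps

def inference_error(prior, M):
    """Expected error of the adversary's optimal MAP inference:
    the distortion-privacy level against the best posterior attack."""
    joint = prior[:, None] * M            # P(secret x, output y)
    return 1.0 - float(joint.max(axis=0).sum())  # Bayes error of the MAP guess
```

For binary randomized response that reports the truth with probability 0.75, `dp_level` gives log 3 and, under a uniform prior, `inference_error` gives 0.25: the two privacy notions the joint mechanism must guarantee simultaneously.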
The Optimal Mechanism in Differential Privacy
We derive the optimal $\epsilon$-differentially private mechanism for a single
real-valued query function under a very general utility-maximization (or
cost-minimization) framework. The class of noise probability distributions in
the optimal mechanism has {\em staircase-shaped} probability density functions
which are symmetric (around the origin), monotonically decreasing and
geometrically decaying. The staircase mechanism can be viewed as a {\em
geometric mixture of uniform probability distributions}, providing a simple
algorithmic description for the mechanism. Furthermore, the staircase mechanism
naturally generalizes to discrete query output settings as well as more
abstract settings. We explicitly derive the optimal noise probability
distributions with minimum expectation of noise amplitude and power. Comparing
the optimal performances with those of the Laplacian mechanism, we show that in
the high privacy regime ($\epsilon$ is small), the Laplacian mechanism is
asymptotically optimal as $\epsilon \to 0$; in the low privacy regime
($\epsilon$ is large), the minimum expectation of noise amplitude and minimum
noise power are $\Theta(\Delta e^{-\epsilon/2})$ and $\Theta(\Delta^2 e^{-2\epsilon/3})$ as $\epsilon \to +\infty$, while the expectation of
noise amplitude and power using the Laplacian mechanism are
$\frac{\Delta}{\epsilon}$ and $\frac{2\Delta^2}{\epsilon^2}$, where $\Delta$ is
the sensitivity of the query function. We conclude that the gains are more
pronounced in the low privacy regime.
Comment: 40 pages, 5 figures. Part of this work was presented in DIMACS
Workshop on Recent Work on Differential Privacy across Computer Science,
October 24 - 26, 201
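The "geometric mixture of uniform distributions" view yields a direct sampling procedure for staircase noise: draw a geometric step index, then a uniform offset within either the inner or outer part of that step. A minimal sketch after the paper's construction (the default `gamma` is the choice minimizing expected noise amplitude; treat the interface as illustrative):

```python
import math
import random

def staircase_noise(eps, delta, gamma=None):
    """Sample staircase-distributed noise for an eps-DP mechanism with
    query sensitivity delta. gamma in (0, 1) sets the step split;
    gamma = 1 / (1 + e^{eps/2}) minimizes the expected |noise|."""
    if gamma is None:
        gamma = 1.0 / (1.0 + math.exp(eps / 2.0))
    b = math.exp(-eps)
    s = random.choice((-1, 1))       # uniform random sign (symmetric density)
    g = 0                            # geometric step index, P(G=g) = (1-b) b^g
    while random.random() < b:
        g += 1
    u = random.random()              # uniform offset within the chosen sub-step
    # pick the inner (width gamma) or outer (width 1-gamma) part of the step
    if random.random() < gamma / (gamma + (1.0 - gamma) * b):
        x = (g + gamma * u) * delta
    else:
        x = (g + gamma + (1.0 - gamma) * u) * delta
    return s * x
```

The output density is symmetric around the origin, constant on each sub-interval, and decays geometrically by a factor $e^{-\epsilon}$ per step of width $\Delta$, matching the staircase shape described above.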