Inferential Privacy Guarantees for Differentially Private Mechanisms
The correlations and network structure amongst individuals in datasets
today---whether explicitly articulated, or deduced from biological or
behavioral connections---pose new issues around privacy guarantees, because of
inferences that can be made about one individual from another's data. This
motivates quantifying privacy in networked contexts in terms of "inferential
privacy"---which measures the change in beliefs about an individual's data from
the result of a computation---as originally proposed by Dalenius in the 1970s.
Inferential privacy is implied by differential privacy when data are
independent, but can be much worse when data are correlated; indeed, simple
examples, as well as a general impossibility theorem of Dwork and Naor,
preclude the possibility of achieving non-trivial inferential privacy when the
adversary can have arbitrary auxiliary information. In this paper, we ask how
differential privacy guarantees translate to guarantees on inferential privacy
in networked contexts: specifically, under what limitations on the adversary's
information about correlations, modeled as a prior distribution over datasets,
can we deduce an inferential guarantee from a differential one?
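A toy calculation, not taken from the paper, makes concrete why correlation degrades inferential privacy: if n individuals' bits are perfectly correlated (all equal) and each bit is released through eps-differentially private randomized response, an adversary who pools all n reports can shift its beliefs about any one individual by up to n*eps, even though each release alone guarantees eps. The function name and setup below are illustrative:

```python
from itertools import product
from math import exp, log

def inferential_leakage(eps, n):
    """Worst-case log posterior-odds shift about individual 1's bit when all
    n bits are perfectly correlated (equal) and each is released through
    eps-DP randomized response (report the true bit w.p. e^eps/(1+e^eps))."""
    p = exp(eps) / (1 + exp(eps))  # probability of reporting the true bit
    worst = 0.0
    for y in product([0, 1], repeat=n):  # enumerate all output vectors
        pr_if_ones = 1.0   # P(y | all true bits are 1)
        pr_if_zeros = 1.0  # P(y | all true bits are 0)
        for bit in y:
            pr_if_ones *= p if bit == 1 else 1 - p
            pr_if_zeros *= p if bit == 0 else 1 - p
        worst = max(worst, abs(log(pr_if_ones / pr_if_zeros)))
    return worst

leak = inferential_leakage(0.5, 3)  # each release is 0.5-DP, leakage is 1.5
```

Each report satisfies eps-DP on its own, yet the worst-case belief shift about one individual grows linearly in the number of correlated records, which is exactly the gap between differential and inferential guarantees the paper studies.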
We prove two main results. The first result pertains to distributions that
satisfy a natural positive-affiliation condition, and gives an upper bound on
the inferential privacy guarantee for any differentially private mechanism.
This upper bound is matched by a simple mechanism that adds Laplace noise to
the sum of the data. The second result pertains to distributions that have weak
correlations, defined in terms of a suitable "influence matrix". The result
provides an upper bound for inferential privacy in terms of the differential
privacy parameter and the spectral norm of this matrix.
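The mechanism matching the first result's bound, Laplace noise added to the sum of the data, is the standard Laplace mechanism. A minimal sketch, assuming binary data in {0, 1} so the sum has sensitivity 1 (the dataset, parameter values, and function names here are illustrative, not from the paper):

```python
import math
import random

def laplace_noise(scale, rng):
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_sum(data, epsilon, rng):
    """epsilon-differentially private sum of {0,1} data.

    Changing one individual's bit changes the sum by at most 1
    (sensitivity 1), so Laplace noise with scale 1/epsilon suffices.
    """
    return sum(data) + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(0)
data = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # illustrative bits
release = noisy_sum(data, epsilon=0.5, rng=rng)
```

The noise is unbiased, so repeated releases average toward the true sum; a single release at privacy level epsilon has expected absolute error 1/epsilon.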
Truthful Assignment without Money
We study the design of truthful mechanisms that do not use payments for the
generalized assignment problem (GAP) and its variants. An instance of the GAP
consists of a bipartite graph with jobs on one side and machines on the other.
Machines have capacities and edges have values and sizes; the goal is to
construct a welfare maximizing feasible assignment. In our model of private
valuations, motivated by impossibility results, the values and sizes of all
job-machine pairs are public information; however, whether an edge exists or
not in the bipartite graph is a job's private information.
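To make the GAP objective concrete, here is a brute-force sketch that enumerates all feasible assignments of a tiny hypothetical instance and returns the welfare-maximizing one (the instance data and names are illustrative, not from the paper):

```python
from itertools import product

# edges[job][machine] = (value, size); a missing key means no edge exists.
edges = {
    "j1": {"A": (4, 3), "B": (3, 2)},
    "j2": {"A": (5, 4)},
    "j3": {"B": (2, 2)},
}
capacity = {"A": 5, "B": 4}

def best_assignment(edges, capacity):
    """Exhaustively search assignments (job -> machine, or unassigned)."""
    jobs = list(edges)
    best_welfare, best = 0, {}
    # Each job may go to any machine it has an edge to, or stay unassigned.
    for choice in product(*[list(edges[j]) + [None] for j in jobs]):
        load = {m: 0 for m in capacity}
        welfare = 0
        feasible = True
        for job, machine in zip(jobs, choice):
            if machine is None:
                continue
            value, size = edges[job][machine]
            load[machine] += size
            welfare += value
            if load[machine] > capacity[machine]:
                feasible = False
                break
        if feasible and welfare > best_welfare:
            best_welfare, best = welfare, dict(zip(jobs, choice))
    return best_welfare, best

welfare, assignment = best_assignment(edges, capacity)
```

Exhaustive search is exponential and only pins down the feasibility constraints and objective; the abstract's LP-based technique is what makes the problem tractable at scale.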
We study several variants of the GAP starting with matching. For the
unweighted version, we give an optimal strategyproof mechanism; for maximum
weight bipartite matching, however, we give a 2-approximate strategyproof
mechanism and show, via a matching lower bound, that this is optimal. Next we study
knapsack-like problems, which are APX-hard. For these problems, we develop a
general LP-based technique that extends the ideas of Lavi and Swamy to reduce
designing a truthful mechanism without money to designing such a mechanism for
the fractional version of the problem, at a loss of a factor equal to the
integrality gap in the approximation ratio. We use this technique to obtain
strategyproof mechanisms with constant approximation ratios for these problems.
We then design an O(log n)-approximate strategyproof mechanism for the GAP by
reducing, with logarithmic loss in the approximation, to our solution for the
value-invariant GAP. Our technique may be of independent interest for designing
truthful mechanisms without money for other LP-based problems.
Comment: Extended abstract appears in the 11th ACM Conference on Electronic Commerce (EC), 201
Selling Privacy at Auction
We initiate the study of markets for private data, through the lens of
differential privacy. Although the purchase and sale of private data has
already begun on a large scale, a theory of privacy as a commodity is missing.
In this paper, we propose to build such a theory. Specifically, we consider a
setting in which a data analyst wishes to buy information from a population
from which he can estimate some statistic. The analyst wishes to obtain an
accurate estimate cheaply. On the other hand, the owners of the private data
experience some cost for their loss of privacy, and must be compensated for
this loss. Agents are selfish, and wish to maximize their profit, so our goal
is to design truthful mechanisms. Our main result is that such auctions can
naturally be viewed and optimally solved as variants of multi-unit procurement
auctions. Based on this result, we derive auctions for two natural settings
which are optimal up to small constant factors:
1. In the setting in which the data analyst has a fixed accuracy goal, we
show that an application of the classic Vickrey auction achieves the analyst's
accuracy goal while minimizing his total payment.
2. In the setting in which the data analyst has a fixed budget, we give a
mechanism which maximizes the accuracy of the resulting estimate while
guaranteeing that the resulting sum of payments does not exceed the analyst's budget.
In both cases, our comparison class is the set of envy-free mechanisms, which
correspond to the natural class of fixed-price mechanisms in our setting.
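The fixed-accuracy-goal result applies the classic Vickrey auction in its multi-unit procurement (reverse) form: buy reports from the k lowest-cost individuals and pay each of them the (k+1)-st lowest reported cost, which makes truthful cost reporting a dominant strategy. A minimal sketch under these assumptions (variable names and the instance are illustrative, not from the paper):

```python
def k_unit_reverse_vickrey(reported_costs, k):
    """Select the k lowest-cost sellers; pay each the (k+1)-st lowest cost.

    Returns (winner indices, per-winner payment). Truthful because a
    winner's payment does not depend on her own report.
    """
    if not 0 < k < len(reported_costs):
        raise ValueError("need 0 < k < number of sellers")
    order = sorted(range(len(reported_costs)), key=lambda i: reported_costs[i])
    winners = order[:k]
    payment = reported_costs[order[k]]  # (k+1)-st lowest reported cost
    return winners, payment

winners, payment = k_unit_reverse_vickrey([1.0, 3.0, 2.0, 5.0, 4.0], k=2)
```

Note that a winner who shades her report changes neither her selection nor the price she is paid, which is the standard Vickrey truthfulness argument.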
In both of these results, we ignore the privacy cost due to possible
correlations between an individual's private data and his valuation for privacy
itself. We then show that generically, no individually rational mechanism can
compensate individuals for the privacy loss incurred due to their reported
valuations for privacy.
Comment: Extended abstract appeared in the proceedings of EC 201
Behavioral Mechanism Design: Optimal Contests for Simple Agents
Incentives are more likely to elicit desired outcomes when they are designed
based on accurate models of agents' strategic behavior. A growing literature,
however, suggests that people do not quite behave like standard economic agents
in a variety of environments, both online and offline. What consequences might
such differences have for the optimal design of mechanisms in these
environments? In this paper, we explore this question in the context of optimal
contest design for simple agents---agents who strategically reason about
whether or not to participate in a system, but not about the input they provide
to it. Specifically, consider a contest where potential contestants, each
with a privately known type, choose between participating and producing a
submission of some quality at some cost, versus not participating at all, to
maximize their utilities. How should a principal distribute a total prize
amongst the ranks to maximize some increasing function of the qualities of
elicited submissions in a contest with such simple agents?
We first solve the optimal contest design problem for settings with
homogeneous participation costs. Here, the optimal contest is always a
simple contest, awarding equal prizes to a suitably chosen number of
top-ranked contestants. (In comparable models with strategic effort choices,
the optimal contest is either a winner-take-all contest or awards possibly
unequal prizes, depending on the curvature of agents' effort cost functions.)
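The "simple agents" here reason only about whether to enter. Under illustrative assumptions not taken from the paper (n symmetric agents, qualities i.i.d. uniform on [0, 1], a common participation cost, and a prize split equally among the top j entrants), the symmetric equilibrium is a quality threshold t at which the marginal entrant's expected prize just covers the cost; a sketch that finds t by bisection:

```python
from math import comb

def binom_cdf(k, n, p):
    """P[X <= k] for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def entry_threshold(n, prize, cost, j, iters=60):
    """Equilibrium quality threshold for a simple contest splitting `prize`
    equally among the top j of n agents, qualities i.i.d. Uniform[0, 1].

    The marginal type t enters iff her expected prize,
    (prize/j) * P[at most j-1 of the other n-1 agents beat her],
    covers the common participation cost.
    """
    def surplus(t):
        p_beat = 1.0 - t  # chance another agent's quality exceeds t
        return (prize / j) * binom_cdf(j - 1, n - 1, p_beat) - cost

    if surplus(0.0) >= 0:
        return 0.0  # cost so low that everyone enters
    lo, hi = 0.0, 1.0  # surplus(lo) < 0 <= surplus(hi); surplus increases in t
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if surplus(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

t_wta = entry_threshold(n=10, prize=1.0, cost=0.05, j=1)  # winner-take-all
t_two = entry_threshold(n=10, prize=1.0, cost=0.05, j=2)  # top-2 split
```

This toy model only fixes intuition for the homogeneous-cost case; the paper's optimal number of equal prizes comes from its own equilibrium characterization, not from this uniform-quality sketch.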
We next address the general case with heterogeneous costs where agents' types
are inherently two-dimensional, significantly complicating equilibrium
analysis. Our main result here is that the winner-take-all contest is a
3-approximation of the optimal contest when the principal's objective is to
maximize the quality of the best elicited contribution.
Comment: This is the full version of a paper in the ACM Conference on Economics and Computation (ACM-EC), 201