The Empirical Implications of Privacy-Aware Choice
This paper initiates the study of the testable implications of choice data in
settings where agents have privacy preferences. We adapt the standard
conceptualization of consumer choice theory to a situation where the consumer
is aware of, and has preferences over, the information revealed by her choices.
The main message of the paper is that little can be inferred about consumers'
preferences once we introduce the possibility that the consumer has concerns
about privacy. This holds even when consumers' privacy preferences are assumed
to be monotonic and separable. This motivates the consideration of stronger
assumptions and, to that end, we introduce an additive model for privacy
preferences that does have testable implications.
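The additive model described above can be sketched as follows. This is an illustrative reading, not the paper's formal model: the consumer's utility is her consumption value minus a separate privacy cost for each attribute her choice reveals, and all names here are hypothetical.

```python
# Illustrative sketch (not the paper's formal model): additively
# separable privacy preferences. Utility = value of the chosen bundle
# minus the sum of per-attribute costs for the information revealed.

def utility(bundle_value, revealed_attributes, privacy_costs):
    """Additive model: consumption value minus the summed privacy
    costs of each attribute the choice reveals."""
    return bundle_value - sum(privacy_costs[a] for a in revealed_attributes)

# Hypothetical example: a bundle worth 10 whose purchase reveals
# the consumer's location and income, but not her health status.
costs = {"location": 1.5, "income": 3.0, "health": 5.0}
print(utility(10.0, ["location", "income"], costs))  # prints 5.5
```

Additivity is what restores testable implications: each attribute's cost enters independently, so observed choices pin down linear restrictions on the cost terms.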
Redrawing the Boundaries on Purchasing Data from Privacy-Sensitive Individuals
We prove new positive and negative results concerning the existence of
truthful and individually rational mechanisms for purchasing private data from
individuals with unbounded and sensitive privacy preferences. We strengthen the
impossibility result of Ghosh and Roth (EC 2011) by extending it to a much
wider class of privacy valuations. In particular, these include privacy
valuations that are based on (ε, δ)-differentially private
mechanisms for nonzero δ, ones where the privacy costs are measured in
a per-database manner (rather than taking the worst case), and ones that do not
depend on the payments made to players (which might not be observable to an
adversary). To bypass this impossibility result, we study a natural special
setting where individuals have monotonic privacy valuations, which captures
common contexts where certain values for private data are expected to lead to
higher valuations for privacy (e.g., having a particular disease). We give new
mechanisms that are individually rational for all players with monotonic
privacy valuations, truthful for all players whose privacy valuations are not
too large, and accurate if there are not too many players with too-large
privacy valuations. We also prove matching lower bounds showing that in some
respects our mechanism cannot be improved significantly.
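The qualitative guarantee above (truthful for players whose valuations are not too large, accurate when few valuations are too large) can be illustrated with a toy posted-price survey. This is a simplification for intuition, not the paper's mechanism: a fixed payment p is offered, and a player sells her data bit iff her privacy valuation is at most p.

```python
# Toy posted-price sketch (NOT the paper's mechanism): offer a fixed
# payment p; each player sells her bit iff her privacy valuation is
# at most p. Under a posted price, truthful reporting is a dominant
# strategy, and the estimate is accurate when few players have
# valuations above p.

def posted_price_survey(bits, valuations, p):
    """Return (estimated mean of sold bits, total payment)."""
    sold = [b for b, v in zip(bits, valuations) if v <= p]
    if not sold:
        return None, 0.0
    return sum(sold) / len(sold), p * len(sold)

# Hypothetical data: two of six players value privacy above the price.
bits = [1, 0, 1, 1, 0, 1]
vals = [0.5, 0.2, 3.0, 0.8, 0.4, 10.0]
est, paid = posted_price_survey(bits, vals, p=1.0)
print(est, paid)  # prints 0.5 4.0
```

The toy also shows the failure mode the paper addresses: if high valuations correlate with the data itself (e.g., having a disease), the sellers are a biased sample, which is why the paper's mechanisms need to do more than post a price.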
Selling Privacy at Auction
We initiate the study of markets for private data, through the lens of
differential privacy. Although the purchase and sale of private data has
already begun on a large scale, a theory of privacy as a commodity is missing.
In this paper, we propose to build such a theory. Specifically, we consider a
setting in which a data analyst wishes to buy information from a population
from which he can estimate some statistic. The analyst wishes to obtain an
accurate estimate cheaply. On the other hand, the owners of the private data
experience some cost for their loss of privacy, and must be compensated for
this loss. Agents are selfish, and wish to maximize their profit, so our goal
is to design truthful mechanisms. Our main result is that such auctions can
naturally be viewed and optimally solved as variants of multi-unit procurement
auctions. Based on this result, we derive auctions for two natural settings
which are optimal up to small constant factors:
1. In the setting in which the data analyst has a fixed accuracy goal, we
show that an application of the classic Vickrey auction achieves the analyst's
accuracy goal while minimizing his total payment.
2. In the setting in which the data analyst has a fixed budget, we give a
mechanism which maximizes the accuracy of the resulting estimate while
guaranteeing that the resulting sum of payments does not exceed the analyst's budget.
In both cases, our comparison class is the set of envy-free mechanisms, which
correspond to the natural class of fixed-price mechanisms in our setting.
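The first result's "application of the classic Vickrey auction" is, in this procurement setting, the k-unit Vickrey auction: to meet an accuracy goal requiring k data points, buy from the k lowest bidders and pay each the (k+1)-st lowest bid. A minimal sketch (details such as how k is derived from the accuracy goal are abstracted away):

```python
# Minimal sketch of a k-unit Vickrey procurement auction: select the
# k lowest bids and pay each winner the (k+1)-st lowest bid. Bidding
# one's true privacy cost is a dominant strategy, since the payment
# does not depend on the winner's own bid.

def vickrey_procurement(bids, k):
    """Return (indices of winning bidders, uniform payment per winner).
    Requires k < len(bids) so a price-setting bid exists."""
    order = sorted(range(len(bids)), key=lambda i: bids[i])
    winners = order[:k]
    payment = bids[order[k]]  # the (k+1)-st lowest bid sets the price
    return winners, payment

# Hypothetical bids (privacy costs) from five data owners, k = 3.
winners, price = vickrey_procurement([4.0, 1.0, 3.0, 2.0, 5.0], k=3)
print(sorted(winners), price)  # prints [1, 2, 3] 4.0
```

Truthfulness follows from the usual Vickrey argument: a winner's payment is set by the first excluded bid, so shading one's bid cannot raise it and can only risk losing a profitable sale.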
In both of these results, we ignore the privacy cost due to possible
correlations between an individual's private data and his valuation for privacy
itself. We then show that generically, no individually rational mechanism can
compensate individuals for the privacy loss incurred due to their reported
valuations for privacy.
Comment: An extended abstract appeared in the proceedings of EC 201
The Strange Case of Privacy in Equilibrium Models
We study how privacy technologies affect user and advertiser behavior in a
simple economic model of targeted advertising. In our model, a consumer first
decides whether or not to buy a good, and then an advertiser chooses an
advertisement to show the consumer. The consumer's value for the good is
correlated with her type, which determines which ad the advertiser would prefer
to show to her---and hence, the advertiser would like to use information about
the consumer's purchase decision to target the ad that he shows.
In our model, the advertiser is given only a differentially private signal
about the consumer's behavior---which can range from no signal at all to a
perfect signal, as we vary the differential privacy parameter. This allows us
to study equilibrium behavior as a function of the level of privacy provided to
the consumer. We show that this behavior can be highly counter-intuitive, and
that the effect of adding privacy in equilibrium can be completely different
from what we would expect if we ignored equilibrium incentives. Specifically,
we show that increasing the level of privacy can actually increase the amount
of information about the consumer's type contained in the signal the advertiser
receives, decrease the consumer's utility, and increase the advertiser's
profit, and that in general these quantities can be non-monotonic and
even discontinuous in the privacy level of the signal.
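A differentially private signal of the kind this model varies can be sketched with randomized response. This is one standard construction, chosen here for concreteness; the paper's model need not use this exact channel. The advertiser observes the consumer's binary purchase decision flipped with probability 1/(1 + e^ε): at ε = 0 the signal is a fair coin carrying no information, and as ε grows it converges to the true decision.

```python
import math
import random

# Randomized-response sketch of an eps-differentially private signal
# (illustrative; not necessarily the paper's construction). The true
# purchase bit is reported with probability e^eps / (1 + e^eps) and
# flipped otherwise, which satisfies eps-differential privacy.

def private_signal(purchase_bit, eps, rng=random.random):
    """Return the purchase bit, flipped with prob. 1/(1 + e^eps)."""
    p_truth = math.exp(eps) / (1.0 + math.exp(eps))
    return purchase_bit if rng() < p_truth else 1 - purchase_bit

# eps = 0: p_truth = 1/2, so the signal is independent of the bit.
# Large eps: p_truth is close to 1, so the signal is nearly perfect.
```

Varying ε thus traces out the full range from "no signal at all" to "a perfect signal" that the equilibrium analysis studies.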
Truthful Mechanisms for Agents that Value Privacy
Recent work has constructed economic mechanisms that are both truthful and
differentially private. In these mechanisms, privacy is treated separately from
the truthfulness; it is not incorporated in players' utility functions (and
doing so has been shown to lead to non-truthfulness in some cases). In this
work, we propose a new, general way of modelling privacy in players' utility
functions. Specifically, we only assume that if an outcome o has the property
that any report of player i would have led to o with approximately the same
probability, then o has a small privacy cost to player i. We give three
mechanisms that are truthful with respect to our modelling of privacy: for an
election between two candidates, for a discrete version of the facility
location problem, and for a general social choice problem with discrete
utilities (via a VCG-like mechanism). As the number of players n increases,
the social welfare achieved by our mechanisms approaches optimal (as a fraction
of n).
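A mechanism in the spirit of the two-candidate election can be sketched by adding Laplace noise to the vote margin before declaring a winner, so that any single report changes the outcome distribution only slightly. This sketch is for intuition only; the paper's exact construction differs, and all parameter choices below are illustrative.

```python
import math
import random

# Noisy-majority sketch for a two-candidate election (in the spirit
# of, but not identical to, the paper's mechanism): add Laplace noise
# to the margin, then elect the noisy majority.

def laplace_noise(scale, rng=random.random):
    """Sample Laplace(0, scale) by inverse CDF from one uniform draw."""
    u = rng() - 0.5  # uniform on (-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_election(votes_for_a, votes_for_b, eps, rng=random.random):
    # Changing one vote shifts the margin by at most 2, so Laplace
    # noise of scale 2/eps makes the noisy margin eps-differentially
    # private in each individual vote.
    noisy_margin = (votes_for_a - votes_for_b) + laplace_noise(2.0 / eps, rng)
    return "A" if noisy_margin > 0 else "B"
```

The connection to the utility model above: because each individual report changes the outcome distribution by at most a factor governed by ε, each outcome carries only a small privacy cost to each player, which is what allows truthfulness and near-optimal social welfare to coexist as n grows.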