The Optimal Mechanism in Differential Privacy
We derive the optimal $\epsilon$-differentially private mechanism for a single
real-valued query function under a very general utility-maximization (or
cost-minimization) framework. The class of noise probability distributions in
the optimal mechanism has {\em staircase-shaped} probability density functions
which are symmetric (around the origin), monotonically decreasing and
geometrically decaying. The staircase mechanism can be viewed as a {\em
geometric mixture of uniform probability distributions}, providing a simple
algorithmic description for the mechanism. Furthermore, the staircase mechanism
naturally generalizes to discrete query output settings as well as more
abstract settings. We explicitly derive the optimal noise probability
distributions with minimum expectation of noise amplitude and power. Comparing
the optimal performances with those of the Laplacian mechanism, we show that in
the high privacy regime ($\epsilon$ is small), the Laplacian mechanism is
asymptotically optimal as $\epsilon \to 0$; in the low privacy regime
($\epsilon$ is large), the minimum expectation of noise amplitude and minimum
noise power are $\Theta(\Delta e^{-\epsilon/2})$ and
$\Theta(\Delta^2 e^{-2\epsilon/3})$ as $\epsilon \to +\infty$, while the
expectation of noise amplitude and power using the Laplacian mechanism are
$\frac{\Delta}{\epsilon}$ and $\frac{2\Delta^2}{\epsilon^2}$, where $\Delta$ is
the sensitivity of the query function. We conclude that the gains are more
pronounced in the low privacy regime.

Comment: 40 pages, 5 figures. Part of this work was presented in DIMACS
Workshop on Recent Work on Differential Privacy across Computer Science,
October 24 - 26, 201
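The abstract's description of the staircase mechanism as a geometric mixture of uniform distributions yields a direct sampling procedure. The sketch below is one way to realize it; the sampling steps and the choice $\gamma = 1/(1+e^{\epsilon/2})$ for the step split are assumptions drawn from the mechanism's published form, not stated in the abstract.

```python
import numpy as np

def staircase_noise(eps, delta, gamma, size, rng):
    """Sample staircase-mechanism noise as a geometric mixture of uniforms.

    eps   : privacy parameter epsilon
    delta : query sensitivity Delta
    gamma : step-split parameter in [0, 1] (assumed parameterization)
    """
    b = np.exp(-eps)                         # geometric decay ratio e^{-eps}
    sign = rng.choice([-1.0, 1.0], size)     # symmetric around the origin
    stair = rng.geometric(1.0 - b, size) - 1 # which stair: 0, 1, 2, ...
    unif = rng.random(size)
    # Choose the inner (high) or outer (low) part of each stair so that the
    # resulting density drops by a factor e^{-eps} at gamma * delta.
    p_inner = gamma / (gamma + (1.0 - gamma) * b)
    inner = rng.random(size) < p_inner
    x = np.where(inner,
                 (stair + gamma * unif) * delta,
                 (stair + gamma + (1.0 - gamma) * unif) * delta)
    return sign * x

rng = np.random.default_rng(0)
eps, delta = 5.0, 1.0                        # a low-privacy setting
gamma = 1.0 / (1.0 + np.exp(eps / 2))        # amplitude-optimal split (assumed)
x = staircase_noise(eps, delta, gamma, 200_000, rng)
print(np.mean(np.abs(x)), delta / eps)       # staircase vs Laplace amplitude
```

In the low privacy regime the empirical mean amplitude of the staircase noise falls well below the Laplacian's $\Delta/\epsilon$, matching the asymptotic comparison in the abstract.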
Algorithms for Differentially Private Multi-Armed Bandits
We present differentially private algorithms for the stochastic Multi-Armed
Bandit (MAB) problem. This problem is relevant to applications such as adaptive
clinical trials, experiment design, and user-targeted advertising, where
private information is connected to individual rewards. Our major contribution is to
show that there exist differentially private variants of
Upper Confidence Bound algorithms which have optimal regret. This is a
significant improvement over previous results, which only achieve
poly-logarithmic regret, because of our use of a novel interval-based
mechanism. We also substantially improve the bounds of the previous family of
algorithms, which use a continual release mechanism. Experiments clearly
validate our theoretical bounds.
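The abstract does not spell out the interval-based mechanism, so the sketch below shows only the general pattern of a differentially private UCB variant: per-arm Laplace noise added to empirical means, a deliberately naive stand-in for the paper's mechanism. The function name and parameters are hypothetical.

```python
import numpy as np

def dp_ucb(means, T, eps, rng):
    """Naive DP-UCB sketch: Laplace noise on each arm's empirical mean.

    means : true Bernoulli reward probabilities, one per arm
    NOTE: this is NOT the paper's interval-based mechanism -- just an
    illustrative private baseline for the stochastic MAB problem.
    """
    k = len(means)
    counts = np.zeros(k, dtype=int)
    sums = np.zeros(k)
    for t in range(T):
        if t < k:                            # pull each arm once to initialize
            arm = t
        else:
            # Laplace noise with scale 1/(eps * n_i) privatizes each mean.
            noisy_mean = sums / counts + rng.laplace(0.0, 1.0 / (eps * counts))
            bonus = np.sqrt(2.0 * np.log(t + 1) / counts)  # UCB exploration
            arm = int(np.argmax(noisy_mean + bonus))
        reward = float(rng.random() < means[arm])
        counts[arm] += 1
        sums[arm] += reward
    return counts

rng = np.random.default_rng(1)
pulls = dp_ucb([0.9, 0.1], T=5000, eps=1.0, rng=rng)
print(pulls)  # the better arm should dominate the pull counts
```

Even with the added noise, the private index concentrates on the better arm; the paper's contribution is achieving this with optimal, rather than poly-logarithmic, regret.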
Context-Aware Generative Adversarial Privacy
Preserving the utility of published datasets while simultaneously providing
provable privacy guarantees is a well-known challenge. On the one hand,
context-free privacy solutions, such as differential privacy, provide strong
privacy guarantees, but often lead to a significant reduction in utility. On
the other hand, context-aware privacy solutions, such as information theoretic
privacy, achieve an improved privacy-utility tradeoff, but assume that the data
holder has access to dataset statistics. We circumvent these limitations by
introducing a novel context-aware privacy framework called generative
adversarial privacy (GAP). GAP leverages recent advancements in generative
adversarial networks (GANs) to allow the data holder to learn privatization
schemes from the dataset itself. Under GAP, learning the privacy mechanism is
formulated as a constrained minimax game between two players: a privatizer that
sanitizes the dataset in a way that limits the risk of inference attacks on the
individuals' private variables, and an adversary that tries to infer the
private variables from the sanitized dataset. To evaluate GAP's performance, we
investigate two simple (yet canonical) statistical dataset models: (a) the
binary data model, and (b) the binary Gaussian mixture model. For both models,
we derive game-theoretically optimal minimax privacy mechanisms, and show that
the privacy mechanisms learned from data (in a generative adversarial fashion)
match the theoretically optimal ones. This demonstrates that our framework can
be easily applied in practice, even in the absence of dataset statistics.

Comment: Improved version of a paper accepted by Entropy Journal, Special
Issue on Information Theory in Machine Learning and Data Science
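The minimax game between privatizer and adversary can be illustrated in closed form for a binary model. The toy below uses a randomized-response privatizer (flip the private bit with probability p) against the Bayes-optimal adversary; it is an assumed stand-in for the paper's GAN-based formulation, and all names are illustrative.

```python
import numpy as np

def adversary_accuracy(q, p):
    """Bayes-optimal adversary's probability of recovering the private bit.

    q : prior P(X = 1) for the private bit X
    p : privatizer's flip probability (randomized response)
    """
    # Joint distribution P(X, X_hat) under the flip channel; the adversary
    # guesses argmax_x P(X = x | X_hat), so its success probability is the
    # sum over X_hat of the larger of the two joint masses.
    joint = np.array([[(1 - q) * (1 - p), (1 - q) * p],
                      [q * p,             q * (1 - p)]])
    return float(joint.max(axis=0).sum())

q = 0.5                              # uniform prior on the private bit
for p in (0.0, 0.1, 0.3, 0.5):       # distortion budget = flip probability
    print(p, adversary_accuracy(q, p))
```

As the distortion p grows toward 1/2, the adversary's accuracy falls from 1 to the prior guess 1/2, tracing the privacy-utility tradeoff that GAP's privatizer and adversary negotiate adaptively when the data distribution is unknown.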