
    The Optimal Mechanism in Differential Privacy

    We derive the optimal $\epsilon$-differentially private mechanism for a single real-valued query function under a very general utility-maximization (or cost-minimization) framework. The class of noise probability distributions in the optimal mechanism has \emph{staircase-shaped} probability density functions which are symmetric around the origin, monotonically decreasing, and geometrically decaying. The staircase mechanism can be viewed as a \emph{geometric mixture of uniform probability distributions}, providing a simple algorithmic description of the mechanism. Furthermore, the staircase mechanism naturally generalizes to discrete query output settings as well as more abstract settings. We explicitly derive the optimal noise probability distributions with minimum expectation of noise amplitude and power. Comparing the optimal performances with those of the Laplacian mechanism, we show that in the high privacy regime ($\epsilon$ is small), the Laplacian mechanism is asymptotically optimal as $\epsilon \to 0$; in the low privacy regime ($\epsilon$ is large), the minimum expectation of noise amplitude and the minimum noise power are $\Theta(\Delta e^{-\frac{\epsilon}{2}})$ and $\Theta(\Delta^2 e^{-\frac{2\epsilon}{3}})$ as $\epsilon \to +\infty$, while the expectation of noise amplitude and power under the Laplacian mechanism are $\frac{\Delta}{\epsilon}$ and $\frac{2\Delta^2}{\epsilon^2}$, where $\Delta$ is the sensitivity of the query function. We conclude that the gains are more pronounced in the low privacy regime.
    Comment: 40 pages, 5 figures. Part of this work was presented at the DIMACS Workshop on Recent Work on Differential Privacy across Computer Science, October 24-26, 2012.
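    The "geometric mixture of uniforms" view translates directly into a sampling routine: pick a random sign, a geometrically distributed "rung" of the staircase, then a uniform position on either the short or the long step of that rung. Below is a minimal Python sketch of this view; the function name is ours, the amplitude-optimal default $\gamma = 1/(1 + e^{\epsilon/2})$ is an assumption matching the abstract's amplitude result, and the whole thing should be read as illustrative rather than the paper's verbatim algorithm.

```python
import numpy as np

def staircase_noise(eps, sensitivity, gamma=None, rng=None):
    """One draw of staircase-distributed noise via the geometric-mixture-
    of-uniforms view: random sign, geometric 'rung' index, then a uniform
    position on the short or long step of that rung. Illustrative sketch."""
    rng = np.random.default_rng() if rng is None else rng
    b = np.exp(-eps)
    if gamma is None:
        # assumed amplitude-optimal step width, gamma = 1 / (1 + e^{eps/2})
        gamma = 1.0 / (1.0 + np.exp(eps / 2.0))
    sign = rng.choice([-1.0, 1.0])           # symmetric around the origin
    rung = rng.geometric(1.0 - b) - 1        # P(rung = i) proportional to b**i
    u = rng.random()                         # uniform position within a step
    # short step (width gamma*Delta) vs long step, with odds gamma : (1-gamma)*b
    if rng.random() < gamma / (gamma + (1.0 - gamma) * b):
        x = (rung + gamma * u) * sensitivity
    else:
        x = (rung + gamma + (1.0 - gamma) * u) * sensitivity
    return sign * x

# e.g. privatizing a sensitivity-1 count query at eps = 2:
# noisy_answer = true_count + staircase_noise(eps=2.0, sensitivity=1.0)
```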

    Algorithms for Differentially Private Multi-Armed Bandits

    We present differentially private algorithms for the stochastic Multi-Armed Bandit (MAB) problem. This is a problem for applications such as adaptive clinical trials, experiment design, and user-targeted advertising where private information is connected to individual rewards. Our major contribution is to show that there exist (ϵ,δ)(\epsilon, \delta) differentially private variants of Upper Confidence Bound algorithms which have optimal regret, O(ϵ1+logT)O(\epsilon^{-1} + \log T). This is a significant improvement over previous results, which only achieve poly-log regret O(ϵ2log2T)O(\epsilon^{-2} \log^{2} T), because of our use of a novel interval-based mechanism. We also substantially improve the bounds of previous family of algorithms which use a continual release mechanism. Experiments clearly validate our theoretical bounds
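    For contrast, the sketch below shows the kind of naive baseline such results improve on: a UCB selection step that privatizes each arm's empirical mean with fresh Laplace noise at every release. This is a generic illustration under our own assumptions (rewards in $[0, 1]$, per-release privacy only), not the paper's interval-based mechanism, whose details are in the paper itself.

```python
import numpy as np

def dp_ucb_step(counts, reward_sums, t, eps, rng):
    """One arm selection of a naive Laplace-noised UCB baseline.

    Each release of an arm's empirical mean (rewards assumed in [0, 1],
    so the mean of n pulls has sensitivity 1/n) is eps-DP on its own;
    re-releasing every round degrades the overall guarantee under
    composition, which is what smarter release schedules avoid."""
    for arm, n in enumerate(counts):
        if n == 0:
            return arm                      # pull every arm once first
    ucb = np.empty(len(counts))
    for arm, n in enumerate(counts):
        mean = reward_sums[arm] / n
        noise = rng.laplace(0.0, 1.0 / (eps * n))   # privatize the mean
        bonus = np.sqrt(2.0 * np.log(t) / n)        # UCB exploration term
        ucb[arm] = mean + noise + bonus
    return int(np.argmax(ucb))
```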

    Context-Aware Generative Adversarial Privacy

    Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. On the one hand, context-free privacy solutions, such as differential privacy, provide strong privacy guarantees but often lead to a significant reduction in utility. On the other hand, context-aware privacy solutions, such as information-theoretic privacy, achieve an improved privacy-utility tradeoff but assume that the data holder has access to dataset statistics. We circumvent these limitations by introducing a novel context-aware privacy framework called generative adversarial privacy (GAP). GAP leverages recent advancements in generative adversarial networks (GANs) to allow the data holder to learn privatization schemes from the dataset itself. Under GAP, learning the privacy mechanism is formulated as a constrained minimax game between two players: a privatizer that sanitizes the dataset in a way that limits the risk of inference attacks on the individuals' private variables, and an adversary that tries to infer the private variables from the sanitized dataset. To evaluate GAP's performance, we investigate two simple (yet canonical) statistical dataset models: (a) the binary data model, and (b) the binary Gaussian mixture model. For both models, we derive game-theoretically optimal minimax privacy mechanisms and show that the privacy mechanisms learned from data (in a generative adversarial fashion) match the theoretically optimal ones. This demonstrates that our framework can be easily applied in practice, even in the absence of dataset statistics.
    Comment: Improved version of a paper accepted by Entropy Journal, Special Issue on Information Theory in Machine Learning and Data Science.
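    To make the minimax formulation concrete, here is a minimal GAP-style alternating training loop in Python/PyTorch. It is a sketch under simplifying assumptions that are ours, not the paper's: a scalar dataset with one binary private variable, a deterministic privatizer (GAP's mechanisms are randomized), and the distortion constraint relaxed to a Lagrangian penalty with a hypothetical weight `lam`.

```python
import torch
import torch.nn as nn

# Toy data model: binary private variable s, observation x correlated with s.
priv = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))  # privatizer
adv = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))   # adversary
opt_p = torch.optim.Adam(priv.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adv.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0   # distortion penalty weight (assumption, not from the paper)

for step in range(2000):
    s = torch.randint(0, 2, (64, 1)).float()   # private variable
    x = s + 0.5 * torch.randn(64, 1)           # data correlated with s
    # adversary step: get better at inferring s from the sanitized data
    opt_a.zero_grad()
    bce(adv(priv(x).detach()), s).backward()
    opt_a.step()
    # privatizer step: make inference hard while staying close to x
    opt_p.zero_grad()
    x_hat = priv(x)
    loss_p = -bce(adv(x_hat), s) + lam * ((x_hat - x) ** 2).mean()
    loss_p.backward()
    opt_p.step()
```

    The alternation mirrors the two-player game from the abstract: the adversary's update maximizes inference accuracy on the sanitized output, while the privatizer's update minimizes it subject to the (here, penalized) distortion budget.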