20,950 research outputs found

    The Geometry of Differential Privacy: the Sparse and Approximate Cases

    Full text link
    In this work, we study trade-offs between accuracy and privacy in the context of linear queries over histograms. This is a rich class of queries that includes contingency tables and range queries, and has been a focus of a long line of work. For a set of $d$ linear queries over a database $x \in \mathbb{R}^N$, we seek to find the differentially private mechanism that has the minimum mean squared error. For pure differential privacy, an $O(\log^2 d)$ approximation to the optimal mechanism is known. Our first contribution is to give an $O(\log^2 d)$ approximation guarantee for the case of $(\varepsilon,\delta)$-differential privacy. Our mechanism is simple, efficient, and adds correlated Gaussian noise to the answers. We prove its approximation guarantee relative to the hereditary discrepancy lower bound of Muthukrishnan and Nikolov, using tools from convex geometry. We next consider this question in the case when the number of queries exceeds the number of individuals in the database, i.e. when $d > n \triangleq \|x\|_1$. It is known that better mechanisms exist in this setting. Our second main contribution is to give an $(\varepsilon,\delta)$-differentially private mechanism that is optimal up to a $\mathrm{polylog}(d,N)$ factor for any given query set $A$ and any given upper bound $n$ on $\|x\|_1$. This approximation is achieved by coupling the Gaussian noise addition approach with a linear regression step. We give an analogous result for the $\varepsilon$-differential privacy setting. We also improve on the mean squared error upper bound for answering counting queries on a database of size $n$ by Blum, Ligett, and Roth, and match the lower bound implied by the work of Dinur and Nissim up to logarithmic factors. The connection between hereditary discrepancy and the privacy mechanism enables us to derive the first polylogarithmic approximation to the hereditary discrepancy of a matrix $A$.
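
    As a rough illustration of the recipe above (answer the linear queries, add correlated Gaussian noise, then optionally post-process with a regression step in the sparse regime), here is a minimal NumPy sketch. The toy query matrix, the isotropic covariance Sigma, the noise scale sigma, and the unconstrained least-squares step are all illustrative assumptions; the paper's near-optimal covariance from convex geometry and its constrained regression are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)

        def correlated_gaussian_mechanism(A, x, Sigma, sigma):
            """Answer linear queries y = A @ x with correlated Gaussian noise.

            Sigma is a (d x d) positive semidefinite covariance chosen for the
            query matrix A; the paper derives a near-optimal Sigma, which is
            not reproduced here.
            """
            y = A @ x
            noise = rng.multivariate_normal(np.zeros(A.shape[0]), (sigma ** 2) * Sigma)
            return y + noise

        # Toy example: d = 3 range queries over an N = 5 bucket histogram.
        A = np.array([[1, 1, 0, 0, 0],
                      [0, 1, 1, 1, 0],
                      [0, 0, 0, 1, 1]], dtype=float)
        x = np.array([4, 0, 2, 1, 3], dtype=float)

        # Illustrative (not optimal) choice: isotropic noise on the d answers.
        Sigma = np.eye(A.shape[0])
        noisy = correlated_gaussian_mechanism(A, x, Sigma, sigma=1.0)

        # Sparse regime (d > ||x||_1): fit a synthetic histogram to the noisy
        # answers and re-answer the queries.  The paper couples the noise with
        # a constrained regression; plain least squares is a simplification.
        x_hat, *_ = np.linalg.lstsq(A, noisy, rcond=None)
        print(noisy, A @ x_hat)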

    The fundamental limits of statistical data privacy

    Get PDF
    The Internet is shaping our daily lives. On the one hand, social networks like Facebook and Twitter allow people to share their precious moments and opinions with virtually anyone around the world. On the other, services like Google, Netflix, and Amazon allow people to look up information, watch movies, and shop online anytime, anywhere. However, with this unprecedented level of connectivity comes the danger of being monitored. There is an increasing tension between the need to share data and the need to preserve the privacy of Internet users. The need for privacy appears in three main contexts: (1) the global privacy context, as when private companies and public institutions release personal information about individuals to the public; (2) the local privacy context, as when individuals disclose their personal information to potentially malicious service providers; (3) the multi-party privacy context, as when different parties cooperate to interactively compute a function that is defined over all the parties' data. Differential privacy has recently surfaced as a strong measure of privacy in all three contexts. Under differential privacy, privacy is achieved by randomizing the data before releasing it. This leads to a fundamental tradeoff between privacy and utility. In this thesis, we take a concrete step towards understanding the fundamental structure of privacy mechanisms that achieve the best privacy-utility tradeoff. This tradeoff is formulated as a constrained optimization problem: maximize utility subject to differential privacy constraints. We show, perhaps surprisingly, that in all three privacy contexts, the optimal privacy mechanisms have the same combinatorial staircase structure. This deep result is a direct consequence of the geometry of the constraints imposed by differential privacy on the privatization mechanisms.
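
    The staircase structure mentioned above can be made concrete with a small sampler for one-dimensional staircase noise. The sketch below follows the commonly cited Geng-Viswanath sampling procedure and the gamma value often quoted as optimal for l1 cost; both are assumptions for illustration and are not taken from the thesis itself.

        import numpy as np

        rng = np.random.default_rng(0)

        def staircase_noise(eps, delta_sens=1.0, gamma=None, size=1):
            """Sample from a one-dimensional staircase distribution.

            Follows the commonly cited Geng-Viswanath construction; treated
            here as an illustrative assumption, not the thesis's derivation.
            """
            b = np.exp(-eps)
            if gamma is None:
                # gamma often quoted as minimizing expected |noise| (assumed).
                gamma = 1.0 / (1.0 + np.exp(eps / 2.0))
            s = rng.choice([-1.0, 1.0], size=size)        # random sign
            g = rng.geometric(1.0 - b, size=size) - 1      # P(G=i) = (1-b) b^i
            u = rng.uniform(size=size)                     # position within a step
            p_b1 = (1.0 - gamma) * b / (gamma + (1.0 - gamma) * b)
            bern = rng.uniform(size=size) < p_b1           # which half of the step
            inner = np.where(bern, g + gamma + (1.0 - gamma) * u, g + gamma * u)
            return s * inner * delta_sens

        # Privatize a counting query with sensitivity 1 at eps = 1.
        true_count = 42
        print(true_count + staircase_noise(eps=1.0)[0])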

    Differentially Private Multivariate Statistics with an Application to Contingency Table Analysis

    Full text link
    Differential privacy (DP) has become a rigorous central concept in privacy protection over the past decade. Among various notions of DP, $f$-DP is an easily interpretable and informative concept that tightly captures the privacy level by comparing trade-off functions obtained from the hypothetical test of how well the mechanism recognizes individual information in the dataset. We adopt Gaussian differential privacy (GDP), a canonical parametric family of $f$-DP. The Gaussian mechanism is a natural and fundamental mechanism that tightly achieves GDP. However, the ordinary multivariate Gaussian mechanism is not optimal with respect to statistical utility. To improve the utility, we develop the rank-deficient and James-Stein Gaussian mechanisms for releasing private multivariate statistics based on the geometry of the multivariate Gaussian distribution. We show that our proposals satisfy GDP and dominate the ordinary Gaussian mechanism with respect to $L_2$-cost. We also show that the Laplace mechanism, a prime mechanism in the $\varepsilon$-DP framework, is sub-optimal compared to Gaussian-type mechanisms under the framework of GDP. For a fair comparison, we calibrate the Laplace mechanism to the global sensitivity of the statistic with the exact approach to the trade-off function. We also develop the optimal parameter for the Laplace mechanism when applied to contingency tables. Indeed, we show that the Gaussian-type mechanisms dominate the Laplace mechanism in contingency table analysis. In addition, we apply our findings to propose differentially private $\chi^2$-tests on contingency tables. Numerical results demonstrate that differentially private parametric bootstrap tests control the type I error rates and show higher power than other natural competitors.
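
    For context, the two baseline mechanisms being compared can be sketched as follows: the ordinary Gaussian mechanism calibrated to mu-GDP (sigma = Delta_2 / mu, per Dong, Roth and Su) and the Laplace mechanism calibrated to eps-DP (scale = Delta_1 / eps), applied to a toy contingency table. The table, the sensitivities, and the parameter values are assumptions for illustration; the paper's rank-deficient and James-Stein refinements are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)

        def gaussian_mechanism_gdp(stat, l2_sensitivity, mu):
            """Standard Gaussian mechanism: sigma = Delta_2 / mu gives mu-GDP.

            The paper's rank-deficient / James-Stein variants refine this
            baseline and are not reproduced here.
            """
            sigma = l2_sensitivity / mu
            return stat + rng.normal(0.0, sigma, size=stat.shape)

        def laplace_mechanism(stat, l1_sensitivity, eps):
            """Standard Laplace mechanism: scale = Delta_1 / eps gives eps-DP."""
            scale = l1_sensitivity / eps
            return stat + rng.laplace(0.0, scale, size=stat.shape)

        # Toy 2x2 contingency table of counts; adding or removing one record
        # changes exactly one cell by 1, so Delta_2 = Delta_1 = 1 (assumed).
        table = np.array([[30.0, 12.0],
                          [ 8.0, 25.0]])

        private_gauss = gaussian_mechanism_gdp(table, l2_sensitivity=1.0, mu=1.0)
        private_lap = laplace_mechanism(table, l1_sensitivity=1.0, eps=1.0)
        print(private_gauss, private_lap, sep="\n")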

    Geometry of Sensitivity: Twice Sampling and Hybrid Clipping in Differential Privacy with Optimal Gaussian Noise and Application to Deep Learning

    Full text link
    We study the fundamental problem of constructing optimal randomization in Differential Privacy. Depending on the clipping strategy or additional properties of the processing function, the corresponding sensitivity set theoretically determines the randomization necessary to produce the required security parameters. Towards the optimal utility-privacy tradeoff, finding the minimal perturbation for properly selected sensitivity sets stands as a central problem in DP research. In practice, $\ell_2/\ell_1$-norm clipping with Gaussian/Laplace noise mechanisms is among the most common setups. However, these setups also suffer from the curse of dimensionality. For more generic clipping strategies, the understanding of the optimal noise for a high-dimensional sensitivity set remains limited. In this paper, we revisit the geometry of high-dimensional sensitivity sets and present a series of results to characterize the non-asymptotically optimal Gaussian noise for Rényi DP (RDP). Our results are both negative and positive: on the one hand, we show the curse of dimensionality is tight for a broad class of sensitivity sets satisfying certain symmetry properties; but if, fortunately, the representation of the sensitivity set is asymmetric on some group of orthogonal bases, we show the optimal noise bounds need not depend explicitly on either dimension or rank. We also revisit sampling in the high-dimensional scenario, which is the key to both privacy amplification and computational efficiency in large-scale data processing. We propose a novel method, termed twice sampling, which implements both sample-wise and coordinate-wise sampling, to enable Gaussian noise to fit the sensitivity geometry more closely. With a closed-form RDP analysis, we prove that twice sampling produces an asymptotic improvement of the privacy amplification given an additional infinity-norm restriction, especially for small sampling rates.
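
    The clipping-plus-noise setup discussed above, extended with both sample-wise and coordinate-wise subsampling, can be sketched schematically as follows. The function names, the Poisson-style sampling, and the noise calibration are illustrative assumptions only; the paper's exact twice-sampling procedure and its closed-form RDP accounting are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)

        def l2_clip(v, c):
            """Clip vector v to l2 norm at most c (the common clipping strategy)."""
            norm = np.linalg.norm(v)
            return v if norm <= c else v * (c / norm)

        def twice_sampled_gaussian_sum(per_record_grads, q_sample, q_coord, clip_c, sigma):
            """Schematic 'twice sampling' step: subsample records (sample-wise)
            and coordinates (coordinate-wise) before clipping and adding
            Gaussian noise.  Illustration only; not the paper's exact algorithm.
            """
            n, dim = per_record_grads.shape
            rec_mask = rng.uniform(size=n) < q_sample      # sample-wise sampling
            coord_mask = rng.uniform(size=dim) < q_coord   # coordinate-wise sampling
            total = np.zeros(dim)
            for g in per_record_grads[rec_mask]:
                total += l2_clip(g * coord_mask, clip_c)   # clip selected coordinates
            noise = rng.normal(0.0, sigma * clip_c, size=dim) * coord_mask
            return total + noise

        # Toy batch of 100 per-record gradients in 10 dimensions.
        grads = rng.normal(size=(100, 10))
        print(twice_sampled_gaussian_sum(grads, q_sample=0.1, q_coord=0.5,
                                         clip_c=1.0, sigma=2.0))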
    • …