Differentially Private Multivariate Statistics with an Application to Contingency Table Analysis
Differential privacy (DP) has become a central rigorous concept in privacy protection over the past decade. Among the various notions of DP, f-DP is an easily interpretable and informative concept that tightly captures the privacy level by comparing trade-off functions obtained from the hypothesis test of how well an adversary can recognize individual information in the dataset. We adopt Gaussian differential privacy (GDP), a canonical parametric family of f-DP. The Gaussian mechanism is a natural and fundamental mechanism that
tightly achieves GDP. However, the ordinary multivariate Gaussian mechanism is
not optimal with respect to statistical utility. To improve the utility, we
develop the rank-deficient and James-Stein Gaussian mechanisms for releasing
private multivariate statistics based on the geometry of the multivariate Gaussian distribution. We show that our proposals satisfy GDP and dominate the ordinary Gaussian mechanism with respect to ℓ2-cost. We also show that the Laplace mechanism, a primary mechanism in the ε-DP framework, is sub-optimal compared with Gaussian-type mechanisms under the GDP framework. For a fair comparison, we
calibrate the Laplace mechanism to the global sensitivity of the statistic via an exact computation of its trade-off function. We also derive the optimal
parameter for the Laplace mechanism when applied to contingency tables. Indeed,
we show that the Gaussian-type mechanisms dominate the Laplace mechanism in
contingency table analysis. In addition, we apply our findings to propose
differentially private χ²-tests on contingency tables. Numerical results demonstrate that differentially private parametric bootstrap tests control the type I error rates and show higher power than other natural competitors.
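As a concrete illustration of the baseline the abstract refers to (a minimal sketch, not the authors' code): under μ-GDP, the ordinary Gaussian mechanism adds N(0, (Δ₂/μ)²) noise to a statistic with ℓ2 global sensitivity Δ₂. The function name, the μ value, and the add/remove-one sensitivity Δ₂ = 1 below are illustrative assumptions.

```python
import numpy as np

def gaussian_mechanism_gdp(stat, l2_sensitivity, mu, rng=None):
    """Release `stat` under mu-GDP by adding N(0, (Delta_2/mu)^2) noise.

    The calibration sigma = Delta_2 / mu is the standard choice under which
    the Gaussian mechanism satisfies mu-GDP (Dong, Roth, and Su).
    """
    rng = rng or np.random.default_rng()
    sigma = l2_sensitivity / mu
    return stat + rng.normal(0.0, sigma, size=np.shape(stat))

# Example: privatize a 2x2 contingency table of counts. Under add/remove-one
# neighboring datasets, one individual shifts a single cell by 1, so Delta_2 = 1.
table = np.array([[30.0, 12.0], [7.0, 51.0]])
private_table = gaussian_mechanism_gdp(table, l2_sensitivity=1.0, mu=0.5)
```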
The fundamental limits of statistical data privacy
The Internet is shaping our daily lives. On the one hand, social networks like Facebook and Twitter allow people to share their precious moments and opinions with virtually anyone around the world. On the other, services like Google, Netflix, and Amazon allow people to look up information, watch movies, and shop online anytime, anywhere. However, with this unprecedented level of connectivity comes the danger of being monitored. There is an increasing tension between the need to share data and the need to preserve the privacy of Internet users. The need for privacy appears in three main contexts: (1) the global privacy context, as when private companies and public institutions release personal information about individuals to the public; (2) the local privacy context, as when individuals disclose their personal information to potentially malicious service providers; and (3) the multi-party privacy context, as when different parties cooperate to interactively compute a function defined over all the parties' data.
Differential privacy has recently surfaced as a strong measure of privacy in all three contexts. Under differential privacy, privacy is achieved by randomizing the data before releasing it. This leads to a fundamental tradeoff between privacy and utility. In this thesis, we take a concrete step towards understanding the fundamental structure of privacy mechanisms that achieve the best privacy-utility tradeoff. This tradeoff is formulated as a constrained optimization problem: maximize utility subject to differential privacy constraints. We show, perhaps surprisingly, that in all three privacy contexts, the optimal privacy mechanisms have the same combinatorial staircase structure. This deep result is a direct consequence of the geometry of the constraints imposed by differential privacy on the privatization mechanisms.
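To make the staircase structure concrete, here is a minimal sketch of sampling staircase-shaped noise for ε-DP, following the sampling representation used in Geng and Viswanath's work on optimal noise-adding mechanisms; the ℓ1-cost-optimal step parameter γ = 1/(1 + e^{ε/2}) and all names below are assumptions for illustration, not code from the thesis.

```python
import numpy as np

def staircase_noise(eps, delta_sens=1.0, gamma=None, rng=None):
    """Draw one sample of staircase-distributed noise for eps-DP.

    The density is a geometrically decaying staircase with period equal to
    the sensitivity; gamma = 1/(1 + e^{eps/2}) is the l1-optimal step width.
    """
    rng = rng or np.random.default_rng()
    b = np.exp(-eps)
    if gamma is None:
        gamma = 1.0 / (1.0 + np.exp(eps / 2.0))
    sign = rng.choice([-1.0, 1.0])               # density is symmetric about 0
    period = rng.geometric(1.0 - b) - 1          # P(k) = (1-b) b^k, k = 0, 1, ...
    unif = rng.random()
    # Within a period, land on the "high" step with prob gamma/(gamma + (1-gamma)b).
    on_high_step = rng.random() < gamma / (gamma + (1.0 - gamma) * b)
    if on_high_step:
        x = (period + gamma * unif) * delta_sens
    else:
        x = (period + gamma + (1.0 - gamma) * unif) * delta_sens
    return sign * x
```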
On the Measurement of Privacy as an Attacker's Estimation Error
A wide variety of privacy metrics have been proposed in the literature to
evaluate the level of protection offered by privacy-enhancing technologies.
Most of these metrics are specific to concrete systems and adversarial models,
and are difficult to generalize or translate to other contexts. Furthermore, a
better understanding of the relationships between the different privacy metrics
is needed to enable a more grounded and systematic approach to measuring privacy,
as well as to assist systems designers in selecting the most appropriate metric
for a given application.
In this work we propose a theoretical framework for privacy-preserving
systems, endowed with a general definition of privacy in terms of the
estimation error incurred by an attacker who aims to disclose the private
information that the system is designed to conceal. We show that our framework
permits interpreting and comparing a number of well-known metrics under a
common perspective. The arguments behind these interpretations are based on
fundamental results from information theory, probability theory, and Bayes decision theory.
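As a toy illustration of privacy measured as an attacker's estimation error (a sketch under assumptions, not the paper's framework): for binary randomized response, the strongest attacker is the Bayes decision rule, and its minimum error probability quantifies the protection. All names below are hypothetical.

```python
import numpy as np

def randomized_response(x, eps, rng=None):
    """eps-DP randomized response: report the true bit w.p. e^eps/(1+e^eps)."""
    rng = rng or np.random.default_rng()
    p_truth = np.exp(eps) / (1.0 + np.exp(eps))
    return x if rng.random() < p_truth else 1 - x

def attacker_bayes_error(eps, prior1=0.5):
    """Minimum error probability of a Bayes-optimal attacker guessing the bit."""
    p = np.exp(eps) / (1.0 + np.exp(eps))        # P(report equals the true bit)
    err = 0.0
    for y in (0, 1):                             # sum over observable reports
        joint0 = (1.0 - prior1) * (p if y == 0 else 1.0 - p)
        joint1 = prior1 * (p if y == 1 else 1.0 - p)
        err += min(joint0, joint1)               # Bayes rule errs on the smaller mass
    return err

# eps -> 0 gives error 0.5 (full privacy); large eps drives the error to 0.
print(attacker_bayes_error(1.0))                 # ~0.269
```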
Efficient Private ERM for Smooth Objectives
In this paper, we consider efficient differentially private empirical risk
minimization from the viewpoint of optimization algorithms. For strongly convex
and smooth objectives, we prove that gradient descent with output perturbation
not only achieves nearly optimal utility, but also significantly improves the
running time of previous state-of-the-art private optimization algorithms, for
both ε-DP and (ε, δ)-DP. For non-convex but smooth
objectives, we propose an RRPSGD (Random Round Private Stochastic Gradient
Descent) algorithm, which provably converges to a stationary point with a privacy guarantee. Besides the expected utility bounds, we also provide guarantees in
high probability form. Experiments demonstrate that our algorithm consistently
outperforms existing methods in both utility and running time.
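A minimal sketch of output-perturbed gradient descent for strongly convex, smooth ERM (under assumptions, not the paper's exact algorithm): for a λ-strongly convex empirical risk with L-Lipschitz per-example losses, the ℓ2 sensitivity of the minimizer is at most 2L/(λn), and Gaussian noise is calibrated to (ε, δ)-DP in the standard way. All names and constants below are illustrative.

```python
import numpy as np

def private_gd_output_perturbation(grad_fn, n, dim, lam, lip, eps, delta,
                                   steps=200, lr=0.1, rng=None):
    """Full-batch gradient descent followed by output perturbation.

    grad_fn(w) : gradient of the empirical risk at w
    n          : number of training examples
    lam, lip   : strong-convexity and per-example Lipschitz constants
    """
    rng = rng or np.random.default_rng()
    w = np.zeros(dim)
    for _ in range(steps):                       # non-private optimization phase
        w -= lr * grad_fn(w)
    # L2 sensitivity bound on the minimizer for strongly convex ERM.
    sensitivity = 2.0 * lip / (lam * n)
    # Standard Gaussian-mechanism calibration for (eps, delta)-DP.
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return w + rng.normal(0.0, sigma, size=dim)
```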
Privacy-Compatibility For General Utility Metrics
In this note, we present a complete characterization of the utility metrics
that allow for non-trivial differential privacy guarantees.