525 research outputs found
Wishart Mechanism for Differentially Private Principal Components Analysis
We propose a new input perturbation mechanism for publishing a covariance
matrix to achieve $(\epsilon, 0)$-differential privacy. Our mechanism uses a
Wishart distribution to generate matrix noise. In particular, we apply this
mechanism to principal component analysis. Our mechanism preserves the
positive semi-definiteness of the published covariance matrix. Thus, our
approach gives rise to a general publishing framework for input perturbation of
a symmetric positive semidefinite matrix. Moreover, compared with the classic
Laplace mechanism, our method has a better utility guarantee. To the best of our
knowledge, the Wishart mechanism is the best input perturbation approach for
$(\epsilon, 0)$-differentially private PCA. We also compare our work with
previous exponential mechanism algorithms in the literature and provide a
near-optimal bound while offering more flexibility and lower computational
cost.Comment: A full version with technical proofs. Accepted to AAAI-16
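To make the input-perturbation idea concrete, here is a minimal sketch (not the paper's exact calibration): sample a Wishart matrix and add it to the empirical covariance before eigendecomposition. The degrees of freedom `df` and the scale `scale_factor` are illustrative placeholders; the paper derives the values required for $(\epsilon, 0)$-differential privacy.

```python
import numpy as np
from scipy.stats import wishart

def wishart_mechanism(X, df, scale_factor, rng=None):
    """Publish a noisy covariance matrix by adding Wishart noise.

    X: (n, d) data matrix. df and scale_factor are illustrative
    placeholders -- the paper calibrates them to the privacy budget.
    """
    n, d = X.shape
    cov = X.T @ X / n  # empirical covariance
    # Wishart samples are symmetric PSD by construction,
    # so cov + noise remains a valid covariance matrix.
    noise = wishart.rvs(df=df, scale=scale_factor * np.eye(d),
                        random_state=rng)
    return cov + noise

# Private PCA: eigendecompose the published matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
noisy_cov = wishart_mechanism(X, df=6, scale_factor=0.01, rng=0)
eigvals, eigvecs = np.linalg.eigh(noisy_cov)
top_k = eigvecs[:, -2:]  # top-2 principal components
```

Because the Wishart sample is positive semidefinite, the published matrix stays a valid covariance matrix, which is exactly the property the abstract highlights over element-wise Laplace noise.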
Detecting Communities under Differential Privacy
Complex networks usually exhibit community structure, with groups of nodes
sharing many links with other nodes in the same group and relatively few
with the rest of the network. This feature captures valuable information about
the organization and even the evolution of the network. Over the last decade, a
great number of community detection algorithms have been proposed to deal
with increasingly complex networks. However, the problem of doing this in a
private manner is rarely considered. In this paper, we solve this problem under
differential privacy, a prominent privacy concept for releasing private data.
We analyze the major challenges behind the problem and propose several schemes
to tackle them from two perspectives: input perturbation and algorithm
perturbation. We choose the Louvain method as the back-end community detection
for input perturbation schemes and propose LouvainDP, which runs the Louvain
algorithm on a noisy super-graph. For algorithm perturbation, we design
ModDivisive, which uses the exponential mechanism with modularity as the score.
We have thoroughly evaluated our techniques on real graphs of different sizes
and verified that they outperform the state of the art.
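As a rough illustration of the algorithm-perturbation side, the sketch below applies a generic exponential mechanism with modularity as the score over a fixed list of candidate partitions. This is a deliberate simplification: ModDivisive in the paper explores a tree of partitions rather than a fixed candidate list, and the sensitivity bound used here is an assumption, not the paper's.

```python
import math
import random
import networkx as nx
from networkx.algorithms.community import modularity

def exponential_mechanism(G, candidate_partitions, eps, sensitivity=1.0):
    """Pick a partition with probability proportional to
    exp(eps * modularity / (2 * sensitivity)).

    Generic exponential-mechanism sketch; the paper's ModDivisive
    searches a partition tree and bounds modularity sensitivity tightly.
    """
    scores = [modularity(G, p) for p in candidate_partitions]
    max_s = max(scores)  # subtract the max for numerical stability
    weights = [math.exp(eps * (s - max_s) / (2.0 * sensitivity))
               for s in scores]
    return random.choices(candidate_partitions, weights=weights, k=1)[0]

G = nx.karate_club_graph()  # 34 nodes, labeled 0..33
candidates = [
    [set(range(17)), set(range(17, 34))],                       # 2-way split
    [set(range(12)), set(range(12, 24)), set(range(24, 34))],   # 3-way split
]
chosen = exponential_mechanism(G, candidates, eps=1.0)
```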
MVG Mechanism: Differential Privacy under Matrix-Valued Query
Differential privacy mechanism design has traditionally been tailored for
scalar-valued query functions. Although many mechanisms, such as the Laplace and
Gaussian mechanisms, can be extended to a matrix-valued query function by adding
i.i.d. noise to each element of the matrix, this method is often suboptimal, as
it forfeits an opportunity to exploit the structural characteristics typically
associated with matrix analysis. To address this challenge, we propose a novel
differential privacy mechanism called the Matrix-Variate Gaussian (MVG)
mechanism, which adds matrix-valued noise drawn from a matrix-variate
Gaussian distribution, and we rigorously prove that the MVG mechanism preserves
$(\epsilon, \delta)$-differential privacy. Furthermore, we introduce the concept
of directional noise, made possible by the design of the MVG mechanism.
Directional noise allows the impact of the noise on the utility of the
matrix-valued query function to be moderated. Finally, we experimentally
demonstrate the performance of our mechanism using three matrix-valued queries
on three privacy-sensitive datasets. We find that the MVG mechanism notably
outperforms four previous state-of-the-art approaches, and provides comparable
utility to the non-private baseline.Comment: Appeared in CCS'18
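A minimal sketch of how matrix-variate Gaussian noise can be sampled, using the standard construction $M + A Z B^\top$, where $Z$ has i.i.d. standard normal entries and $AA^\top$, $BB^\top$ are the row and column covariances. The covariances below are illustrative; the paper calibrates them to the privacy budget and the desired noise directions.

```python
import numpy as np

def sample_mvg(mean, row_cov, col_cov, rng):
    """Draw one sample from a matrix-variate Gaussian.

    If Z has i.i.d. N(0, 1) entries and A A^T = row_cov,
    B B^T = col_cov, then mean + A Z B^T has the given
    row and column covariances.
    """
    m, n = mean.shape
    A = np.linalg.cholesky(row_cov)   # (m, m)
    B = np.linalg.cholesky(col_cov)   # (n, n)
    Z = rng.standard_normal((m, n))
    return mean + A @ Z @ B.T

rng = np.random.default_rng(0)
# Anisotropic row covariance: more variance along the first row direction.
row_cov = np.diag([4.0, 1.0, 1.0])
col_cov = np.eye(5)
query_answer = rng.standard_normal((3, 5))  # stand-in for a matrix-valued f(X)
noisy_answer = query_answer + sample_mvg(np.zeros((3, 5)), row_cov, col_cov, rng)
```

Choosing a non-isotropic `row_cov`, as above, is the informal analogue of the paper's directional noise: more noise is placed along directions where the query can tolerate it, and less along the directions that matter for utility.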
Efficient Private ERM for Smooth Objectives
In this paper, we consider efficient differentially private empirical risk
minimization from the viewpoint of optimization algorithms. For strongly convex
and smooth objectives, we prove that gradient descent with output perturbation
not only achieves nearly optimal utility, but also significantly improves the
running time of previous state-of-the-art private optimization algorithms, for
both $\epsilon$-DP and $(\epsilon, \delta)$-DP. For non-convex but smooth
objectives, we propose an RRPSGD (Random Round Private Stochastic Gradient
Descent) algorithm, which provably converges to a stationary point with a
privacy guarantee. In addition to expected utility bounds, we also provide
guarantees in high-probability form. Experiments demonstrate that our algorithm
consistently outperforms existing methods in both utility and running time.
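The output-perturbation recipe for the strongly convex case can be sketched as follows: run plain gradient descent to near-optimality, then add noise scaled to the $\ell_2$ sensitivity of the exact minimizer. The sensitivity bound $2L/(n\lambda)$ for an $L$-Lipschitz loss with $\lambda$-strong convexity and the Gaussian calibration shown are the standard generic choices, not the paper's optimized constants.

```python
import numpy as np

def private_erm_output_perturbation(X, y, lam, eps, delta,
                                    steps=500, lr=0.1, rng=None):
    """Gradient descent on L2-regularized squared loss, then output perturbation.

    Generic recipe for lambda-strongly-convex ERM with an (assumed)
    L-Lipschitz loss; the constants are illustrative, not the paper's.
    """
    rng = rng or np.random.default_rng()
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n + lam * w  # gradient of the objective
        w -= lr * grad
    L = 1.0                                # assumed Lipschitz bound on the loss
    sensitivity = 2.0 * L / (n * lam)      # L2 sensitivity of the exact minimizer
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return w + rng.normal(0.0, sigma, size=d)  # (eps, delta)-DP output

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
y = X @ rng.standard_normal(10) + 0.1 * rng.standard_normal(200)
w_priv = private_erm_output_perturbation(X, y, lam=0.1, eps=1.0,
                                         delta=1e-5, rng=rng)
```

Because the noise is added only once, after optimization finishes, the optimizer itself runs at full non-private speed, which is the source of the running-time improvement the abstract claims.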