135 research outputs found
Gaussian Differential Privacy on Riemannian Manifolds
We develop an advanced approach for extending Gaussian Differential Privacy
(GDP) to general Riemannian manifolds. The concept of GDP stands out as a
prominent privacy definition that strongly warrants extension to manifold
settings, due to its central limit properties. By harnessing the power of the
renowned Bishop-Gromov theorem in geometric analysis, we propose a Riemannian
Gaussian distribution that integrates the Riemannian distance, allowing us to
achieve GDP in Riemannian manifolds with bounded Ricci curvature. To the best
of our knowledge, this work marks the first instance of extending the GDP
framework to accommodate general Riemannian manifolds, encompassing curved
spaces, and circumventing the reliance on tangent space summaries. We provide a
simple algorithm to evaluate the privacy budget on any one-dimensional
manifold and introduce a versatile Markov Chain Monte Carlo (MCMC)-based
algorithm to calculate it on any Riemannian manifold with constant
curvature. Through simulations on one of the most prevalent manifolds in
statistics, the unit sphere, we demonstrate the superior utility of our
Riemannian Gaussian mechanism in comparison to the previously proposed
Riemannian Laplace mechanism for implementing GDP.
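To make the central object concrete, here is a minimal sketch (not the paper's algorithm; all function names are ours) of sampling from a Riemannian Gaussian on the simplest one-dimensional manifold, the unit circle, where the geodesic arc-length distance replaces the Euclidean one in the exponent:

```python
import math
import random

def geodesic_dist(x, y):
    """Geodesic (arc-length) distance between two angles on the unit circle."""
    d = abs(x - y) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def riemannian_gaussian_sample(eta, sigma, n_steps=2000, seed=0):
    """Metropolis-Hastings sampler for the Riemannian Gaussian density
    p(x) ∝ exp(-d(x, eta)^2 / (2 sigma^2)) on the circle."""
    rng = random.Random(seed)
    log_p = lambda z: -geodesic_dist(z, eta) ** 2 / (2 * sigma ** 2)
    x = eta
    for _ in range(n_steps):
        prop = (x + rng.gauss(0, sigma)) % (2 * math.pi)  # wrap proposal onto the circle
        if math.log(rng.random()) < log_p(prop) - log_p(x):
            x = prop
    return x

# Draw independent samples centred at eta = pi with spread sigma = 0.3
samples = [riemannian_gaussian_sample(math.pi, 0.3, seed=s) for s in range(100)]
```

On a general manifold the same recipe applies with the Riemannian distance in place of the arc length, which is what makes the MCMC-based budget evaluation in the abstract necessary.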
Individual Privacy Accounting with Gaussian Differential Privacy
Individual privacy accounting enables bounding differential privacy (DP) loss
individually for each participant involved in the analysis. This can be
informative as often the individual privacy losses are considerably smaller
than those indicated by the DP bounds that are based on considering worst-case
bounds at each data access. In order to account for the individual privacy
losses in a principled manner, we need a privacy accountant for adaptive
compositions of randomised mechanisms, where the loss incurred at a given data
access is allowed to be smaller than the worst-case loss. This kind of analysis
has been carried out for Rényi differential privacy (RDP) by Feldman and
Zrnic (2021), but not yet for the so-called optimal privacy accountants. We
make first steps in this direction by providing a careful analysis using the
Gaussian differential privacy which gives optimal bounds for the Gaussian
mechanism, one of the most versatile DP mechanisms. This approach is based on
determining a certain supermartingale for the hockey-stick divergence and on
extending the Rényi divergence-based fully adaptive composition results by
Feldman and Zrnic (2021). We also consider measuring the individual
(ε, δ)-privacy losses using the so-called privacy loss
distributions. With the help of the Blackwell theorem, we can then make use of
the RDP analysis to construct an approximate individual
(ε, δ)-accountant.

Comment: 27 pages, 10 figures
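The hockey-stick divergence mentioned above is the quantity behind (ε, δ) curves: H_{e^ε}(P‖Q) = ∫ (p − e^ε q)₊ dx. For the Gaussian mechanism it admits a closed form, which is exactly the (ε, δ) curve of μ-GDP. A small sketch (function names ours) checks the closed form against direct numerical integration:

```python
import math
from statistics import NormalDist

def hockey_stick_gaussian(mu, eps, grid=100_000, lo=-12.0, hi=12.0):
    """Numerically integrate H_{e^eps}(N(mu,1) || N(0,1)) = ∫ (p - e^eps q)_+ dx
    by the midpoint rule."""
    p, q = NormalDist(mu, 1).pdf, NormalDist(0, 1).pdf
    dx = (hi - lo) / grid
    total = 0.0
    for i in range(grid):
        x = lo + (i + 0.5) * dx
        total += max(p(x) - math.exp(eps) * q(x), 0.0) * dx
    return total

def delta_gdp(mu, eps):
    """Closed-form (eps, delta) curve of mu-GDP (Gaussian mechanism with
    sensitivity-to-noise ratio mu), from Dong, Roth and Su."""
    Phi = NormalDist().cdf
    return Phi(mu / 2 - eps / mu) - math.exp(eps) * Phi(-mu / 2 - eps / mu)
```

The individual accounting in the abstract tracks how this divergence evolves under adaptive composition, where the per-step μ may shrink below its worst case.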
Gaussian Differential Privacy And Related Techniques
Differential privacy has seen remarkable success as a rigorous and practical
formalization of data privacy in the past decade. But it also has some well-known
weaknesses: it lacks a comprehensible interpretation and an accessible, precise
toolkit. This is due to the inappropriate (ε, δ) parametrization and the frequent
approximations in the analysis. We overcome these difficulties by
1. relaxing the traditional (ε, δ) notion to the so-called f-differential privacy
from a decision-theoretic viewpoint, hence giving it a strong interpretation,
and
2. with the relaxed notion, performing exact analysis without unnecessary
approximation.
Miraculously, with the relaxation and exact analysis, the theory is endowed with
various algebraic structures and enjoys a central limit theorem. The central limit
theorem highlights the role of a specific family of DP notions called Gaussian
Differential Privacy. We demonstrate the use of the tools we develop by giving an
improved analysis of the privacy guarantees of noisy stochastic gradient descent.
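The trade-off function is the easiest place to see the structure the abstract refers to: μ-GDP corresponds to f_μ(α) = Φ(Φ⁻¹(1 − α) − μ), and composition acts algebraically as μ ↦ √(Σ μᵢ²). A small sketch of both, with our own function names:

```python
from math import sqrt
from statistics import NormalDist

_N = NormalDist()

def gdp_tradeoff(mu, alpha):
    """Trade-off function of mu-GDP: the smallest achievable type-II error of
    any test distinguishing neighbouring datasets at type-I error alpha."""
    return _N.cdf(_N.inv_cdf(1 - alpha) - mu)

def compose(mus):
    """Composition of mu_i-GDP mechanisms is again GDP, with parameter
    sqrt(sum of mu_i^2) -- the algebraic rule behind the central limit theorem."""
    return sqrt(sum(m * m for m in mus))
```

The central limit theorem then says that composing many individually weak f-DP mechanisms drives the overall trade-off curve toward some f_μ, which is what singles out the Gaussian family.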
Tractable MCMC for Private Learning with Pure and Gaussian Differential Privacy
Posterior sampling, i.e., using the exponential mechanism to sample from the
posterior distribution, provides ε-pure differential privacy (DP) guarantees
and does not suffer from the potentially unbounded privacy breach introduced by
(ε, δ)-approximate DP. In practice, however, one needs to apply
approximate sampling methods such as Markov chain Monte Carlo (MCMC), thus
re-introducing the unappealing δ-approximation error into the privacy
guarantees. To bridge this gap, we propose the Approximate SAample Perturbation
(abbr. ASAP) algorithm which perturbs an MCMC sample with noise proportional to
its Wasserstein-infinity (W∞) distance from a reference distribution
that satisfies pure DP or pure Gaussian DP (i.e., δ = 0). We then leverage
a Metropolis-Hastings algorithm to generate the sample and prove that the
algorithm converges in W∞ distance. We show that by combining our new
techniques with a careful localization step, we obtain the first nearly
linear-time algorithm that achieves the optimal rates in the DP-ERM problem
with strongly convex and smooth losses.
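A stylized sketch of the sample-perturbation idea (illustrative only; the coupling argument, noise calibration, and constants in the ASAP paper differ, and all names below are ours): if the law of the MCMC sample is within W∞ distance w of a pure-DP reference distribution, a coupling moves the sample by at most w per coordinate, so that shift has bounded sensitivity and can be masked with Laplace noise of scale w/ε:

```python
import random

def asap_style_release(mcmc_sample, w_inf_bound, eps_gap, seed=None):
    """Perturb an approximate MCMC sample so the approximation gap costs only
    an extra eps_gap of pure DP. Adds Laplace(0, w_inf_bound / eps_gap) noise
    to each coordinate; Laplace draws are generated as the difference of two
    independent exponentials."""
    rng = random.Random(seed)
    scale = w_inf_bound / eps_gap
    def lap():
        # Exp(1) - Exp(1), rescaled, is Laplace(0, scale)
        return scale * (rng.expovariate(1.0) - rng.expovariate(1.0))
    return [x + lap() for x in mcmc_sample]
```

The point of the construction is that the released value inherits the reference distribution's pure DP (or pure Gaussian DP) guarantee plus a small, W∞-controlled surcharge, with no δ term.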
Online Local Differential Private Quantile Inference via Self-normalization
Based on binary inquiries, we develop an algorithm to estimate population
quantiles under Local Differential Privacy (LDP). By self-normalizing, our
algorithm provides asymptotically normal estimation with valid inference,
resulting in tight confidence intervals without the need for nuisance
parameters to be estimated. Our proposed method can be conducted fully online,
leading to high computational efficiency and minimal storage requirements with
O(1) space. We also prove an optimality result by an elegant
application of a central limit theorem of Gaussian Differential Privacy (GDP)
when targeting the frequently encountered median estimation problem. With
mathematical proof and extensive numerical testing, we demonstrate the validity
of our algorithm both theoretically and experimentally.
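A toy sketch of the ingredients the abstract combines, namely binary inquiries answered via ε-LDP randomized response and an O(1)-memory stochastic-approximation update (this is illustrative, not the paper's estimator, and it omits the self-normalized inference):

```python
import math
import random

def online_ldp_quantile(stream, tau, eps, seed=0):
    """Online tau-quantile estimation under eps-LDP. Each user answers the
    binary inquiry 1{x <= theta_t} through randomized response; a debiased
    Robbins-Monro step drives theta toward the tau-quantile. Only the current
    estimate and the step count are kept, i.e. O(1) storage."""
    rng = random.Random(seed)
    p = math.exp(eps) / (1 + math.exp(eps))  # probability of reporting truthfully
    theta = 0.0
    for t, x in enumerate(stream, start=1):
        truth = 1.0 if x <= theta else 0.0
        report = truth if rng.random() < p else 1.0 - truth
        # Unbiased estimate of 1{x <= theta} from the privatized bit
        debiased = (report - (1 - p)) / (2 * p - 1)
        theta -= (debiased - tau) / math.sqrt(t)  # Robbins-Monro step
    return theta
```

Because only one privatized bit leaves each user and only two scalars are stored, the procedure is fully online in the sense of the abstract.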