Renyi Differential Privacy of Propose-Test-Release and Applications to Private and Robust Machine Learning
Propose-Test-Release (PTR) is a differential privacy framework that works
with local sensitivity of functions, instead of their global sensitivity. This
framework is typically used for releasing robust statistics, such as the median or
trimmed mean, in a differentially private manner. Although PTR is a classical
framework introduced over a decade ago, applying it in settings such as robust
SGD, where many adaptive robust queries are required, is challenging. This is mainly
due to the lack of Renyi Differential Privacy (RDP) analysis, an essential
ingredient underlying the moments accountant approach for differentially
private deep learning. In this work, we generalize the standard PTR and derive
the first RDP bound for it when the target function has bounded global
sensitivity. We show that our RDP bound for PTR yields tighter DP guarantees
than those obtained from a direct (ε, δ)-DP analysis. We also derive the
algorithm-specific privacy amplification bound of PTR under subsampling. We
show that our bound is much tighter than the general upper bound and close to
the lower bound. Our RDP bounds enable tighter privacy loss calculation for the
composition of many adaptive runs of PTR. As an application of our analysis, we
show that PTR and our theoretical results can be used to design differentially
private variants of Byzantine-robust training algorithms that use robust
statistics for gradient aggregation. We conduct experiments under settings of
label, feature, and gradient corruption across different datasets and
architectures. We show that the PTR-based private and robust training algorithm
significantly improves utility compared with the baseline.

Comment: NeurIPS 202