Non-Euclidean Differentially Private Stochastic Convex Optimization
Differentially private (DP) stochastic convex optimization (SCO) is a
fundamental problem, where the goal is to approximately minimize the population
risk with respect to a convex loss function, given a dataset of i.i.d. samples
from a distribution, while satisfying differential privacy with respect to the
dataset. Most of the existing works in the literature of private convex
optimization focus on the Euclidean (i.e., $\ell_2$) setting, where the loss is
assumed to be Lipschitz (and possibly smooth) w.r.t. the $\ell_2$ norm over a
constraint set with bounded $\ell_2$ diameter. Algorithms based on noisy
stochastic gradient descent (SGD) are known to attain the optimal excess risk
in this setting.
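
The target quantity throughout is the excess population risk. In standard
notation (the symbols below are generic choices, not fixed by the abstract):
given a convex loss $f$, a constraint set $\mathcal{X}$, and a dataset
$S = (z_1, \ldots, z_n)$ of i.i.d. samples from a distribution $\mathcal{D}$,
an $(\varepsilon, \delta)$-DP algorithm $\mathcal{A}$ aims to minimize
$$ \mathrm{ExcessRisk}(\mathcal{A}) \;=\; \mathbb{E}\big[F(\mathcal{A}(S))\big] \;-\; \min_{x \in \mathcal{X}} F(x), \qquad F(x) \;=\; \mathbb{E}_{z \sim \mathcal{D}}\big[f(x, z)\big], $$
where the expectation is over the draw of $S$ and the algorithm's randomness.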
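The noisy SGD baseline admits a compact sketch. The version below is a generic
illustration under assumed conventions, not the paper's exact algorithm: the
callables grad_f and project, the clipping bound clip, and the noise
multiplier sigma are placeholders, and in practice sigma must be set by a
differential-privacy composition analysis over all iterations.

import numpy as np

def noisy_sgd(data, grad_f, project, x0, steps, lr, clip, sigma, rng=None):
    """Minimal sketch of noisy (DP-)SGD for the Euclidean setting.

    grad_f(x, z) -- gradient of the loss at point x on sample z
    project(x)   -- Euclidean projection onto the constraint set
    clip         -- l2 clipping bound (e.g. the Lipschitz constant)
    sigma        -- noise multiplier; assumed to come from a DP
                    composition analysis over all `steps` iterations
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        z = data[rng.integers(len(data))]                    # one sample
        g = np.asarray(grad_f(x, z), dtype=float)
        g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))    # l2 clipping
        g = g + rng.normal(0.0, sigma * clip, size=g.shape)  # Gaussian noise
        x = project(x - lr * g)                              # projected step
    return x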
In this work, we conduct a systematic study of DP-SCO for $\ell_p$-setups.
For $p=1$, under a standard smoothness assumption, we give a new algorithm with
nearly optimal excess risk. This result also extends to general polyhedral
norms and feasible sets. For $p \in (1, 2)$, we give two new algorithms, whose
central building block is a novel privacy mechanism, which generalizes the
Gaussian mechanism. Moreover, we establish a lower bound on the excess risk for
this range of $p$, showing a necessary dependence on $\sqrt{d}$, where $d$ is
the dimension of the space. Our lower bound implies a sudden transition of the
excess risk at $p=1$, where the dependence on $d$ changes from logarithmic to
polynomial, resolving an open question in prior work [TTZ15]. For
$2 < p \leq \infty$, noisy SGD attains optimal excess risk in the
low-dimensional regime; in particular, this proves the optimality of noisy SGD
for $p = \infty$. Our work
draws upon concepts from the geometry of normed spaces, such as the notions of
regularity, uniform convexity, and uniform smoothness.
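
The novel privacy mechanism above generalizes the standard Gaussian mechanism,
which releases a query with $\ell_2$-sensitivity $\Delta_2$ under
$(\varepsilon, \delta)$-DP by adding $\mathcal{N}(0, \sigma^2 I)$ noise with
$\sigma = \Delta_2 \sqrt{2 \ln(1.25/\delta)} / \varepsilon$ (a calibration
valid for $\varepsilon \in (0, 1)$). A minimal sketch of this standard
mechanism, with placeholder names of our choosing, follows; the paper's
generalized mechanism for $\ell_p$ geometries is not specified in the abstract.

import numpy as np

def gaussian_mechanism(value, l2_sensitivity, eps, delta, rng=None):
    """Standard (eps, delta)-DP Gaussian mechanism: adds N(0, sigma^2 I)
    noise calibrated to the query's l2-sensitivity; valid for eps < 1."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = l2_sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return np.asarray(value, dtype=float) + rng.normal(0.0, sigma, np.shape(value))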