Empirical Risk Minimization in the Non-interactive Local Model of Differential Privacy
In this paper, we study the Empirical Risk Minimization (ERM) problem in the
non-interactive Local Differential Privacy (LDP) model. Previous research on
this problem \citep{smith2017interaction} indicates that the sample complexity
needed to achieve error $\alpha$ must depend exponentially on the
dimensionality $p$ for general loss functions. In this paper, we make two
attempts to resolve this issue by investigating conditions on the loss
functions that allow us to remove such a limit. In our first attempt, we show
that if the loss function is $(\infty, T)$-smooth, then by using Bernstein
polynomial approximation we can avoid the exponential dependency in the term of
$\alpha$. We then propose player-efficient algorithms with $1$-bit
communication complexity and $O(1)$ computation cost for each player. The error
bound of these algorithms is asymptotically the same as the original one. With
some additional assumptions, we also give an algorithm which is more efficient
for the server. In our second attempt, we show that for any $1$-Lipschitz
generalized linear convex loss function, there is an $(\epsilon, \delta)$-LDP
algorithm whose sample complexity for achieving error $\alpha$ is only linear
in the dimensionality $p$. Our results use a polynomial-of-inner-product
approximation technique. Finally, motivated by the idea of using polynomial
approximation and based on different types of polynomial approximations, we
propose (efficient) non-interactive locally differentially private algorithms
for learning the set of k-way marginal queries and the set of smooth queries.Comment: Appeared at Journal of Machine Learning Research. The journal version
of arXiv:1802.04085, fixed a bug in arXiv:1812.0682
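The central idea of the first attempt, estimating a smooth objective from locally privatized reports and then smoothing via Bernstein polynomial approximation, can be illustrated with a minimal one-dimensional sketch. Everything here is an illustrative assumption, not the paper's actual construction: the quadratic per-player loss, the grid size `k`, and a per-report Laplace randomizer; for simplicity each player spends its full budget on every grid point instead of splitting it by composition.

```python
import math
import random

def bernstein(grid_values, k):
    # grid_values[i] estimates f(i/k); returns the degree-k Bernstein
    # approximant B_k(f; x) = sum_i f(i/k) * C(k, i) * x^i * (1 - x)^(k - i)
    def B(x):
        return sum(grid_values[i] * math.comb(k, i) * x**i * (1 - x)**(k - i)
                   for i in range(k + 1))
    return B

def ldp_report(value, eps, sensitivity=1.0):
    # eps-LDP Laplace randomizer via inverse-CDF sampling
    # (scale = sensitivity / eps)
    u = random.random() - 0.5
    noise = -(sensitivity / eps) * math.copysign(math.log(1 - 2 * abs(u)), u)
    return value + noise

def loss(theta, x):
    # hypothetical smooth per-player loss on [0, 1]
    return (theta - x) ** 2

random.seed(0)
data = [random.random() for _ in range(20000)]  # each player's private x_i
k, eps = 8, 1.0

# Each player sends one noisy evaluation of its loss at every grid point i/k;
# the server only ever sees the randomized reports.
grid_avgs = []
for i in range(k + 1):
    t = i / k
    reports = [ldp_report(loss(t, x), eps) for x in data]
    grid_avgs.append(sum(reports) / len(reports))

# The server reconstructs a smooth proxy for the empirical risk.
B = bernstein(grid_avgs, k)
true_avg = sum(loss(0.5, x) for x in data) / len(data)
print(abs(B(0.5) - true_avg))
```

The Laplace noise averages out at rate $O(1/\sqrt{n})$, while the Bernstein approximant converges to the empirical risk as the grid is refined; this separation of statistical and approximation error is the shape of the trade-off the paper analyzes in higher dimensions.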