A Knowledge Transfer Framework for Differentially Private Sparse Learning
We study the problem of estimating high-dimensional models with underlying
sparse structures while preserving the privacy of each training example. We
develop a differentially private high-dimensional sparse learning framework
using the idea of knowledge transfer. More specifically, we propose to distill
the knowledge from a "teacher" estimator trained on a private dataset, by
creating a new dataset from auxiliary features, and then train a differentially
private "student" estimator using this new dataset. In addition, we establish
the linear convergence rate as well as the utility guarantee for our proposed
method. For sparse linear regression and sparse logistic regression, our method
achieves improved utility guarantees compared with the best known results
(Kifer et al., 2012; Wang and Gu, 2019). We further demonstrate the superiority
of our framework through both synthetic and real-world data experiments.

Comment: 24 pages, 2 figures, 3 tables
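The teacher–student pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it assumes sparse linear regression, uses a plain ISTA Lasso solver for both estimators, and stands in for the differentially private mechanism with uncalibrated Gaussian noise on the pseudo-labels (a real DP guarantee would require noise calibrated to the sensitivity and privacy budget). All names (`lasso_ista`, `X_aux`, `sigma`) are hypothetical.

```python
import numpy as np

def lasso_ista(X, y, lam, n_iters=500):
    """Sparse linear regression via ISTA (proximal gradient on the Lasso)."""
    n, d = X.shape
    w = np.zeros(d)
    step = n / (np.linalg.norm(X, 2) ** 2)  # 1/L for the loss (1/2n)||Xw - y||^2
    for _ in range(n_iters):
        grad = X.T @ (X @ w - y) / n
        w = w - step * grad
        # Soft-thresholding: the proximal step for the l1 penalty.
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)
    return w

rng = np.random.default_rng(0)
n, d, s = 200, 50, 5
w_true = np.zeros(d)
w_true[:s] = 1.0                      # s-sparse ground truth
X_priv = rng.standard_normal((n, d))
y_priv = X_priv @ w_true + 0.1 * rng.standard_normal(n)

# 1. Teacher: a non-private sparse estimator fit on the private data.
w_teacher = lasso_ista(X_priv, y_priv, lam=0.05)

# 2. New dataset: label auxiliary (public) features with the teacher,
#    adding Gaussian noise as a placeholder for the DP mechanism.
sigma = 0.5                           # illustrative noise scale, NOT DP-calibrated
X_aux = rng.standard_normal((n, d))
y_aux = X_aux @ w_teacher + sigma * rng.standard_normal(n)

# 3. Student: a sparse estimator trained only on the released dataset,
#    so it never touches the private examples directly.
w_student = lasso_ista(X_aux, y_aux, lam=0.05)
```

The point of the construction is that the student sees only teacher-generated, noise-perturbed labels on auxiliary features, which is what makes a privacy analysis of the released model possible.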