Large Margin Multiclass Gaussian Classification with Differential Privacy
As increasing amounts of sensitive personal information are aggregated into
data repositories, it has become important to develop mechanisms for processing
the data without revealing information about individual data instances. The
differential privacy model provides a framework for the development and
theoretical analysis of such mechanisms. In this paper, we propose an algorithm
for learning a discriminatively trained multi-class Gaussian classifier that
satisfies differential privacy using a large margin loss function with a
perturbed regularization term. We present a theoretical upper bound on the
excess risk of the classifier introduced by the perturbation.
Comment: 14 pages
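A minimal sketch may help make the mechanism concrete. The Python fragment below is illustrative, not the paper's algorithm: it trains a multiclass large-margin classifier by minimizing a hinge loss plus an L2 regularizer perturbed by a random linear term b·w, with the noise norm drawn from a Gamma distribution as is standard for this family of mechanisms. The function names and the exact noise calibration are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import minimize

def sample_noise(dim, epsilon, rng):
    """Noise vector with uniform direction and Gamma(dim, 2/epsilon) norm.
    (Hypothetical calibration; the paper's exact scaling differs.)"""
    direction = rng.standard_normal(dim)
    direction /= np.linalg.norm(direction)
    return rng.gamma(shape=dim, scale=2.0 / epsilon) * direction

def private_multiclass_train(X, y, n_classes, lam, epsilon, rng=None):
    """Minimize multiclass hinge loss + lam*||W||^2 + (b . w)/n, where the
    random linear term b perturbs the regularized objective."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    b = sample_noise(d * n_classes, epsilon, rng)

    def objective(w_flat):
        W = w_flat.reshape(n_classes, d)
        scores = X @ W.T                          # (n, n_classes)
        correct = scores[np.arange(n), y]         # score of the true class
        margins = np.maximum(0.0, 1.0 + scores - correct[:, None])
        margins[np.arange(n), y] = 0.0            # no loss on the true class
        hinge = margins.sum(axis=1).mean()
        return hinge + lam * w_flat @ w_flat + (b @ w_flat) / n

    res = minimize(objective, np.zeros(d * n_classes), method="L-BFGS-B")
    return res.x.reshape(n_classes, d)
```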
Differentially Private Empirical Risk Minimization
Privacy-preserving machine learning algorithms are crucial for the
increasingly common setting in which personal data, such as medical or
financial records, are analyzed. We provide general techniques to produce
privacy-preserving approximations of classifiers learned via (regularized)
empirical risk minimization (ERM). These algorithms are private under the
ε-differential privacy definition due to Dwork et al. (2006). First, we
apply the output perturbation ideas of Dwork et al. (2006) to ERM
classification. Then we propose a new method, objective perturbation, for
privacy-preserving machine learning algorithm design. This method entails
perturbing the objective function before optimizing over classifiers. If the
loss and regularizer satisfy certain convexity and differentiability criteria,
we prove theoretical results showing that our algorithms preserve privacy, and
provide generalization bounds for linear and nonlinear kernels. We further
present a privacy-preserving technique for tuning the parameters in general
machine learning algorithms, thereby providing end-to-end privacy guarantees
for the training process. We apply these results to produce privacy-preserving
analogues of regularized logistic regression and support vector machines. We
obtain encouraging results from evaluating their performance on real
demographic and benchmark data sets. Our results show that both theoretically
and empirically, objective perturbation is superior to the previous
state-of-the-art, output perturbation, in managing the inherent tradeoff
between privacy and learning performance.
Comment: 40 pages, 7 figures, accepted to the Journal of Machine Learning Research
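The two mechanisms compared in the paper can be sketched side by side. In the Python fragment below, output perturbation trains a non-private regularized logistic regression and then adds noise scaled to the solution's L2 sensitivity, while objective perturbation draws the noise first and folds it into the objective as a linear term. The Gamma-norm noise scales follow the standard calibration for this setting but omit the paper's adjustment of ε for the loss's smoothness constant; the function names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def logistic_erm(X, y, lam, b=None):
    """Regularized logistic regression ERM, with an optional linear
    perturbation (b . w)/n added to the objective. Labels y in {-1, +1}."""
    n, d = X.shape
    b = np.zeros(d) if b is None else b

    def obj(w):
        z = y * (X @ w)
        return np.logaddexp(0.0, -z).mean() + 0.5 * lam * w @ w + (b @ w) / n

    return minimize(obj, np.zeros(d), method="L-BFGS-B").x

def noise_vector(d, scale, rng):
    """Vector with uniform direction and Gamma(d, scale) norm."""
    v = rng.standard_normal(d)
    v /= np.linalg.norm(v)
    return v * rng.gamma(shape=d, scale=scale)

def output_perturbation(X, y, lam, eps, rng):
    """Train non-privately, then add noise scaled to the L2 sensitivity
    of the regularized ERM solution (roughly 2/(n*lam*eps))."""
    n, d = X.shape
    w = logistic_erm(X, y, lam)
    return w + noise_vector(d, 2.0 / (n * lam * eps), rng)

def objective_perturbation(X, y, lam, eps, rng):
    """Draw the noise first and optimize the perturbed objective.
    Simplified: the paper also adjusts eps for the loss's smoothness."""
    n, d = X.shape
    return logistic_erm(X, y, lam, b=noise_vector(d, 2.0 / eps, rng))
```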
Distributed Kernel Regression: An Algorithm for Training Collaboratively
This paper addresses the problem of distributed learning under communication
constraints, motivated by distributed signal processing in wireless sensor
networks and data mining with distributed databases. After formalizing a
general model for distributed learning, an algorithm for collaboratively
training regularized kernel least-squares regression estimators is derived.
Because the algorithm can be viewed as an application of successive
orthogonal projection algorithms, its convergence properties are investigated
and the statistical behavior of the estimator is discussed in a simplified
theoretical setting.
Comment: To be presented at the 2006 IEEE Information Theory Workshop, Punta
del Este, Uruguay, March 13-17, 2006
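To make the successive-projection view concrete, here is a hypothetical and heavily simplified Python sketch: each node in turn solves a local kernel ridge problem that fits its own samples while staying close to the current shared estimate on a common set of input points, and the updated estimate is passed to the next node. The coupling scheme, the RBF kernel, and all parameters are assumptions of this sketch, not the paper's message-passing protocol.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between row-sample matrices A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def collaborative_krr(local_data, X_shared, lam=0.1, gamma=1.0, sweeps=10):
    """Sketch of collaborative training by successive projections.
    local_data: list of (X_i, y_i) pairs, one per node.
    X_shared:   common input points on which nodes exchange estimates."""
    f = np.zeros(len(X_shared))              # shared estimate at X_shared
    for _ in range(sweeps):
        for X_i, y_i in local_data:
            # Fit local samples jointly with the shared points, using the
            # current shared estimate as soft targets (the coupling).
            X_aug = np.vstack([X_i, X_shared])
            y_aug = np.concatenate([y_i, f])
            K = rbf_kernel(X_aug, X_aug, gamma)
            alpha = np.linalg.solve(K + lam * np.eye(len(X_aug)), y_aug)
            # Node's update: re-evaluate at the shared points and pass on.
            f = rbf_kernel(X_shared, X_aug, gamma) @ alpha
    return f
```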