Auditing: Active Learning with Outcome-Dependent Query Costs
We propose a learning setting in which unlabeled data is free, and the cost
of a label depends on its value, which is not known in advance. We study binary
classification in an extreme case, where the algorithm only pays for negative
labels. We are motivated by applications such as fraud detection, in which
investigating an honest transaction should be avoided if possible. We term the
setting auditing, and consider the auditing complexity of an algorithm: the
number of negative labels the algorithm requires in order to learn a hypothesis
with low relative error. We design auditing algorithms for simple hypothesis
classes (thresholds and rectangles), and show that with these algorithms, the
auditing complexity can be significantly lower than the active label
complexity. We also discuss a general competitive approach for auditing and
possible modifications to the framework.
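To make the auditing idea concrete, the following is a minimal sketch (not the paper's algorithm) for a one-dimensional threshold class, assuming positives lie above an unknown threshold: querying points from the largest value downward means the first negative label already brackets the threshold, so only a single negative label is ever paid for. The function name and the +1/-1 oracle interface are illustrative.

import random

def audit_threshold(points, query_label):
    """points: unlabeled 1-D feature values (free); query_label: labeling
    oracle returning +1 or -1; only -1 answers count toward the audit cost."""
    negatives_paid = 0
    last_positive = None
    for x in sorted(points, reverse=True):        # scan from the positive side
        if query_label(x) == +1:
            last_positive = x                      # positive labels are free
        else:
            negatives_paid += 1                    # the single label we pay for
            # true threshold lies between this negative and the last positive
            return (last_positive if last_positive is not None else x,
                    negatives_paid)
    return min(points), negatives_paid             # every point was positive

# Toy usage with a ground-truth threshold of 0.5 (hypothetical data)
data = [random.random() for _ in range(100)]
theta_hat, cost = audit_threshold(data, lambda x: +1 if x >= 0.5 else -1)
print(f"estimated threshold ~ {theta_hat:.3f}, negative labels paid: {cost}")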
Learning from Data with Heterogeneous Noise using SGD
We consider learning from data of variable quality that may be obtained from
different heterogeneous sources. Addressing learning from heterogeneous data in
its full generality is a challenging problem. In this paper, we adopt instead a
model in which data is observed through heterogeneous noise, where the noise
level reflects the quality of the data source. We study how to use stochastic
gradient algorithms to learn in this model. Our study is motivated by two
concrete examples where this problem arises naturally: learning with local
differential privacy based on data from multiple sources with different privacy
requirements, and learning from data with labels of variable quality.
The main contribution of this paper is to identify how heterogeneous noise
impacts performance. We show that given two datasets with heterogeneous noise,
the order in which to use them in standard SGD depends on the learning rate. We
propose a method for changing the learning rate as a function of the
heterogeneity, and prove new regret bounds for our method in two cases of
interest. Experiments on real data show that our method performs better than
using a single learning rate and using only the less noisy of the two datasets
when the noise level is low to moderate.
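As a rough illustration of the learning-rate idea, the sketch below runs SGD for least squares over two data sources and damps the step size on the noisier one. The 1/(1 + sigma^2) scaling and the processing order are illustrative assumptions, not the schedule or regret bounds from the paper.

import numpy as np

def sgd_heterogeneous(datasets, dim, base_lr=0.1):
    """datasets: list of (X, y, sigma) triples, sigma = assumed noise level."""
    w = np.zeros(dim)
    t = 0
    for X, y, sigma in datasets:                  # e.g. cleaner source first
        lr_scale = 1.0 / (1.0 + sigma ** 2)       # smaller steps on noisy data
        for x_i, y_i in zip(X, y):
            t += 1
            grad = (w @ x_i - y_i) * x_i          # squared-loss gradient
            w -= (base_lr * lr_scale / np.sqrt(t)) * grad
    return w

# Synthetic check: one clean and one noisy view of the same linear model
rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
X1, X2 = rng.normal(size=(200, 5)), rng.normal(size=(200, 5))
y1 = X1 @ w_true + 0.1 * rng.normal(size=200)     # low-noise source
y2 = X2 @ w_true + 2.0 * rng.normal(size=200)     # high-noise source
w_hat = sgd_heterogeneous([(X1, y1, 0.1), (X2, y2, 2.0)], dim=5)
print("parameter error:", np.linalg.norm(w_hat - w_true))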
Differentially Private Empirical Risk Minimization
Privacy-preserving machine learning algorithms are crucial for the
increasingly common setting in which personal data, such as medical or
financial records, are analyzed. We provide general techniques to produce
privacy-preserving approximations of classifiers learned via (regularized)
empirical risk minimization (ERM). These algorithms are private under the
ε-differential privacy definition due to Dwork et al. (2006). First we
apply the output perturbation ideas of Dwork et al. (2006) to ERM
classification. Then we propose a new method, objective perturbation, for
privacy-preserving machine learning algorithm design. This method entails
perturbing the objective function before optimizing over classifiers. If the
loss and regularizer satisfy certain convexity and differentiability criteria,
we prove theoretical results showing that our algorithms preserve privacy, and
provide generalization bounds for linear and nonlinear kernels. We further
present a privacy-preserving technique for tuning the parameters in general
machine learning algorithms, thereby providing end-to-end privacy guarantees
for the training process. We apply these results to produce privacy-preserving
analogues of regularized logistic regression and support vector machines. We
obtain encouraging results from evaluating their performance on real
demographic and benchmark data sets. Our results show that both theoretically
and empirically, objective perturbation is superior to the previous
state-of-the-art, output perturbation, in managing the inherent tradeoff
between privacy and learning performance.
Comment: 40 pages, 7 figures, accepted to the Journal of Machine Learning Research
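For a sense of what objective perturbation looks like in code, here is a minimal sketch for L2-regularized logistic regression: a random linear term b·w is added to the regularized objective before optimization. The noise calibration below is a simplified stand-in, not the paper's exact choice of noise norm or its extra regularization correction.

import numpy as np
from scipy.optimize import minimize

def objective_perturbation_logreg(X, y, lam=0.1, epsilon=1.0, rng=None):
    """X: (n, d) features, rows scaled to norm <= 1; y in {-1, +1}."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    # Random direction with Gamma-distributed norm; the 2/(n*epsilon) scale is
    # an illustrative stand-in for the paper's calibration.
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    b = direction * rng.gamma(shape=d, scale=2.0 / (n * epsilon))

    def perturbed_objective(w):
        margins = y * (X @ w)
        loss = np.mean(np.logaddexp(0.0, -margins))   # logistic loss
        return loss + 0.5 * lam * (w @ w) + b @ w     # + random linear term

    return minimize(perturbed_objective, np.zeros(d), method="L-BFGS-B").x

# Toy usage (hypothetical data); larger epsilon means less noise, so the
# output approaches the non-private regularized logistic regression fit.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))
y = np.sign(X @ np.array([1.0, -1.0, 0.5, 0.0]) + 0.1 * rng.normal(size=500))
w_priv = objective_perturbation_logreg(X, y, lam=0.1, epsilon=1.0, rng=rng)
print("private weights:", np.round(w_priv, 3))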
