Dispersion for Data-Driven Algorithm Design, Online Learning, and Private Optimization
Data-driven algorithm design, that is, choosing the best algorithm for a
specific application, is a crucial problem in modern data science.
Practitioners often optimize over a parameterized algorithm family, tuning
parameters based on problems from their domain. These procedures have
historically come with no guarantees, though a recent line of work studies
algorithm selection from a theoretical perspective. We advance the foundations
of this field in several directions: we analyze online algorithm selection,
where problems arrive one-by-one and the goal is to minimize regret, and
private algorithm selection, where the goal is to find good parameters over a
set of problems without revealing sensitive information contained therein. We
study important algorithm families, including SDP-rounding schemes for problems
formulated as integer quadratic programs, and greedy techniques for canonical
subset selection problems. In these cases, the algorithm's performance is a
volatile and piecewise Lipschitz function of its parameters, since tweaking the
parameters can completely change the algorithm's behavior. We give a general
sufficient condition, dispersion, defining a family of piecewise Lipschitz
functions that can be optimized online and privately, which includes the
functions measuring the performance of the algorithms we study. Intuitively, a
set of piecewise Lipschitz functions is dispersed if no small region contains
many of the functions' discontinuities. We present general techniques for
online and private optimization of the sum of dispersed piecewise Lipschitz
functions. We improve over the best-known regret bounds for a variety of
problems, prove regret bounds for problems not previously studied, and give
matching lower bounds. We also give matching upper and lower bounds on the
utility loss due to privacy. Moreover, we uncover dispersion in auction design
and pricing problems.
Exploiting Metric Structure for Efficient Private Query Release
We consider the problem of privately answering queries defined on databases
which are collections of points belonging to some metric space. We give simple,
computationally efficient algorithms for answering distance queries defined
over an arbitrary metric. Distance queries are specified by points in the
metric space, and ask for the average distance from the query point to the
points contained in the database, according to the specified metric. Our
algorithms run efficiently in the database size and the dimension of the space,
and operate both in the online query release setting and in the offline setting,
in which they must generate, in polynomial time, a fixed data structure that can
answer all queries of interest. This represents one of the first subclasses of
linear queries for which efficient algorithms are known for the private query
release problem, circumventing known hardness results for generic linear
queries.
Differentially Private Empirical Risk Minimization
Privacy-preserving machine learning algorithms are crucial for the
increasingly common setting in which personal data, such as medical or
financial records, are analyzed. We provide general techniques to produce
privacy-preserving approximations of classifiers learned via (regularized)
empirical risk minimization (ERM). These algorithms are private under the
ε-differential privacy definition due to Dwork et al. (2006). First, we
apply the output perturbation ideas of Dwork et al. (2006) to ERM
classification. Then we propose a new method, objective perturbation, for
privacy-preserving machine learning algorithm design. This method entails
perturbing the objective function before optimizing over classifiers. If the
loss and regularizer satisfy certain convexity and differentiability criteria,
we prove theoretical results showing that our algorithms preserve privacy, and
provide generalization bounds for linear and nonlinear kernels. We further
present a privacy-preserving technique for tuning the parameters in general
machine learning algorithms, thereby providing end-to-end privacy guarantees
for the training process. We apply these results to produce privacy-preserving
analogues of regularized logistic regression and support vector machines. We
obtain encouraging results from evaluating their performance on real
demographic and benchmark data sets. Our results show that both theoretically
and empirically, objective perturbation is superior to the previous
state-of-the-art, output perturbation, in managing the inherent tradeoff
between privacy and learning performance.
Adaptive Contract Design for Crowdsourcing Markets: Bandit Algorithms for Repeated Principal-Agent Problems
Crowdsourcing markets have emerged as a popular platform for matching
available workers with tasks to complete. The payment for a particular task is
typically set by the task's requester, and may be adjusted based on the quality
of the completed work, for example, through the use of "bonus" payments. In
this paper, we study the requester's problem of dynamically adjusting
quality-contingent payments for tasks. We consider a multi-round version of the
well-known principal-agent model, whereby in each round a worker makes a
strategic choice of effort level that is not directly observable by the
requester. In particular, our formulation significantly generalizes the
budget-free online task pricing problems studied in prior work.
We treat this problem as a multi-armed bandit problem, with each "arm"
representing a potential contract. To cope with the large (and in fact,
infinite) number of arms, we propose a new algorithm, AgnosticZooming, which
discretizes the contract space into a finite number of regions, effectively
treating each region as a single arm. This discretization is adaptively
refined, so that more promising regions of the contract space are eventually
discretized more finely. We analyze this algorithm, showing that it achieves
regret sublinear in the time horizon and substantially improves over
non-adaptive discretization (which is the only competing approach in the
literature).
Our results advance the state of the art on several topics: the theory
of crowdsourcing markets, principal-agent problems, multi-armed bandits, and
dynamic pricing.
Learning to Optimize Computational Resources: Frugal Training with Generalization Guarantees
Algorithms typically come with tunable parameters that have a considerable
impact on the computational resources they consume. Too often, practitioners
must hand-tune the parameters, a tedious and error-prone task. A recent line of
research provides algorithms that return nearly-optimal parameters from within
a finite set. These algorithms can be used when the parameter space is infinite
by providing as input a random sample of parameters. This data-independent
discretization, however, might miss pockets of nearly-optimal parameters: prior
research has presented scenarios where the only viable parameters lie within an
arbitrarily small region. We provide an algorithm that learns a finite set of
promising parameters from within an infinite set. Our algorithm can help
compile a configuration portfolio, or it can be used to select the input to a
configuration algorithm for finite parameter spaces. Our approach applies to
any configuration problem that satisfies a simple yet ubiquitous structure: the
algorithm's performance is a piecewise constant function of its parameters.
Prior research has exhibited this structure in domains from integer programming
to clustering.