Optimal PAC Bounds Without Uniform Convergence
In statistical learning theory, determining the sample complexity of
realizable binary classification for VC classes was a long-standing open
problem. The results of Simon and Hanneke established sharp upper bounds in
this setting. However, the reliance of their argument on the uniform
convergence principle limits its applicability to more general learning
settings such as multiclass classification. In this paper, we address this
issue by providing optimal high probability risk bounds through a framework
that surpasses the limitations of uniform convergence arguments.
Our framework converts the leave-one-out error of permutation invariant
predictors into high probability risk bounds. As an application, by adapting
the one-inclusion graph algorithm of Haussler, Littlestone, and Warmuth, we
propose an algorithm that achieves an optimal PAC bound for binary
classification. Specifically, our result shows that certain aggregations of
one-inclusion graph algorithms are optimal, addressing a variant of a classic
question posed by Warmuth.
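To make the one-inclusion idea concrete, here is a minimal Python sketch of a Haussler-Littlestone-Warmuth-style predictor on n+1 points, assuming the realizable 0/1 labelings ("patterns") of those points are given explicitly. The greedy orientation is a stand-in for the minimax out-degree orientation used in the actual analysis, and the aggregation step introduced in the paper is omitted.

    from itertools import combinations

    def one_inclusion_edges(patterns):
        # Vertices are the realizable labelings of the n+1 points; two
        # labelings are adjacent iff they differ in exactly one coordinate.
        edges = []
        for u, v in combinations(patterns, 2):
            diff = [i for i in range(len(u)) if u[i] != v[i]]
            if len(diff) == 1:
                edges.append((u, v, diff[0]))  # edge "in direction" diff[0]
        return edges

    def orient(patterns, edges):
        # Greedy out-degree balancing: a heuristic stand-in for the
        # minimax orientation whose maximum out-degree HLW bound in
        # terms of the VC dimension.
        out = {p: set() for p in patterns}
        for u, v, i in edges:
            src, dst = (u, v) if len(out[u]) <= len(out[v]) else (v, u)
            out[src].add((dst, i))
        return out

    def predict(patterns, out, train, test_i):
        # `train` maps every coordinate except test_i to its observed
        # label; at most two labelings remain consistent, and they can
        # differ only at test_i. Predict by following the orientation of
        # that edge: the tail's out-degree counts its potential mistakes.
        consistent = [p for p in patterns
                      if all(p[j] == y for j, y in train.items())]
        if len(consistent) == 1:        # the class forces the label
            return consistent[0][test_i]
        u, v = consistent
        head = v if (v, test_i) in out[u] else u
        return head[test_i]

    # Thresholds on three ordered points: the four realizable labelings.
    patterns = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]
    out = orient(patterns, one_inclusion_edges(patterns))
    print(predict(patterns, out, train={0: 1, 1: 0}, test_i=2))  # forced: 0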
We further instantiate our framework in three settings where uniform
convergence is provably suboptimal. For multiclass classification, we prove an
optimal risk bound that scales with the one-inclusion hypergraph density of the
class, addressing the suboptimality of the analysis of Daniely and
Shalev-Shwartz. For partial hypothesis classification, we determine the optimal
sample complexity bound, resolving a question posed by Alon, Hanneke, Holzman,
and Moran. For realizable bounded regression with absolute loss, we derive an
optimal risk bound that relies on a modified version of the scale-sensitive
dimension, refining the results of Bartlett and Long. Our rates surpass
standard uniform convergence-based results due to the smaller complexity
measure in our risk bound.
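For intuition about the complexity measure in the multiclass bound, the brute-force sketch below computes the density of a finite graph: the maximum over vertex subsets U of the number of edges inside U divided by |U|. This is an illustrative, exponential-time computation on small examples, not the paper's algorithm; the same code accepts hyperedges of any arity.

    from itertools import combinations

    def max_subgraph_density(vertices, edges):
        # max over nonempty U of |edges contained in U| / |U|;
        # exponential in |V|, for small illustrative graphs only.
        best = 0.0
        for r in range(1, len(vertices) + 1):
            for U in combinations(vertices, r):
                s = set(U)
                inside = sum(1 for e in edges if set(e) <= s)
                best = max(best, inside / len(s))
        return best

    # A 4-cycle has density 1: all four vertices contain all four edges.
    print(max_subgraph_density([0, 1, 2, 3],
                               [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 1.0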
Regularization and Optimal Multiclass Learning
The quintessential learning algorithm of empirical risk minimization (ERM) is
known to fail in various settings for which uniform convergence does not
characterize learning. It is therefore unsurprising that the practice of
machine learning is rife with considerably richer algorithmic techniques for
successfully controlling model capacity. Nevertheless, no such technique or
principle has broken away from the pack to characterize optimal learning in
these more general settings.
The purpose of this work is to characterize the role of regularization in
perhaps the simplest setting for which ERM fails: multiclass learning with
arbitrary label sets. Using one-inclusion graphs (OIGs), we exhibit optimal
learning algorithms that dovetail with tried-and-true algorithmic principles:
Occam's Razor as embodied by structural risk minimization (SRM), the principle
of maximum entropy, and Bayesian reasoning. Most notably, we introduce an
optimal learner which relaxes structural risk minimization on two dimensions:
it allows the regularization function to be "local" to datapoints, and uses an
unsupervised learning stage to learn this regularizer at the outset. We justify
these relaxations by showing that they are necessary: removing either dimension
fails to yield a near-optimal learner. We also extract from OIGs a
combinatorial sequence we term the Hall complexity, which is the first to
characterize a problem's transductive error rate exactly.
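For contrast with the learner just described, here is a generic structural risk minimization sketch: hypotheses from a nested hierarchy are scored by empirical risk plus a class-level penalty. The paper's optimal learner departs from this template exactly as stated above, by making the regularizer local to datapoints and learning it in an unsupervised stage; the names and signatures below are illustrative only.

    def srm(hierarchy, penalties, sample):
        # Occam's-razor selection over nested classes H_1 <= H_2 <= ...:
        # minimize empirical 0-1 risk plus a per-class penalty.
        best_h, best_score = None, float("inf")
        for hypotheses, penalty in zip(hierarchy, penalties):
            for h in hypotheses:
                emp_risk = sum(h(x) != y for x, y in sample) / len(sample)
                if emp_risk + penalty < best_score:
                    best_h, best_score = h, emp_risk + penalty
        return best_h

    # Example: two "classes" of constant classifiers with growing penalty.
    H1 = [lambda x: 0]
    H2 = [lambda x: 0, lambda x: 1]
    data = [(None, 1), (None, 1), (None, 0)]
    best = srm([H1, H2], penalties=[0.0, 0.1], sample=data)
    print(best(None))  # 1: lower empirical risk outweighs the extra penalty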
Lastly, we introduce a generalization of OIGs and the transductive learning
setting to the agnostic case, where we show that optimal orientations of
Hamming graphs -- judged using nodes' outdegrees minus a system of
node-dependent credits -- characterize optimal learners exactly. We demonstrate
that an agnostic version of the Hall complexity again characterizes error rates
exactly, and exhibit an optimal learner using maximum entropy programs.
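As a toy rendering of the agnostic objective, the brute-force sketch below searches all orientations of a small graph for one minimizing the maximum of out-degree minus a node-dependent credit. The credit function here is a placeholder for the paper's system of credits, and the enumeration is exponential in the number of edges; it illustrates the objective, not the paper's method for optimizing it.

    from itertools import product

    def best_orientation(nodes, edges, credit):
        # Try every orientation (one bit per edge) and keep the one
        # minimizing max over nodes of out-degree(v) - credit(v).
        best_dirs, best_val = None, float("inf")
        for dirs in product((0, 1), repeat=len(edges)):
            outdeg = {v: 0 for v in nodes}
            for (u, v), d in zip(edges, dirs):
                outdeg[u if d == 0 else v] += 1
            val = max(outdeg[v] - credit(v) for v in nodes)
            if val < best_val:
                best_dirs, best_val = dirs, val
        return best_dirs, best_val

    # A triangle with zero credits: some vertex must have out-degree >= 1,
    # and a cyclic orientation achieves exactly 1.
    print(best_orientation([0, 1, 2], [(0, 1), (1, 2), (2, 0)],
                           credit=lambda v: 0)[1])  # 1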
Sample complexity of robust learning against evasion attacks
It is becoming increasingly important to understand the vulnerability of machine learning models to adversarial attacks. One of the fundamental problems in adversarial machine learning is to quantify how much training data is needed in the presence of so-called evasion attacks, where data is corrupted at test time. In this thesis, we work with the exact-in-the-ball notion of robustness and study the feasibility of adversarially robust learning from the perspective of learning theory, with a focus on sample complexity.
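As a concrete rendering of the exact-in-the-ball notion, the sketch below charges a hypothesis at a point x whenever it disagrees with the target concept anywhere in the Hamming ball of the adversary's budget around x; the hypothesis h and target c are illustrative stand-ins for arbitrary 0/1-valued functions on the Boolean hypercube.

    from itertools import combinations

    def hamming_ball(x, budget):
        # All bit-vectors within Hamming distance `budget` of x.
        for r in range(budget + 1):
            for idx in combinations(range(len(x)), r):
                z = list(x)
                for i in idx:
                    z[i] ^= 1
                yield tuple(z)

    def exact_in_ball_loss(h, c, x, budget):
        # Loss 1 iff h disagrees with the target c somewhere in the ball.
        return int(any(h(z) != c(z) for z in hamming_ball(x, budget)))

    # Example: a conjunction target vs. a constant-one hypothesis.
    c = lambda z: int(z[0] == 1 and z[1] == 1)   # target: x1 AND x2
    h = lambda z: 1                              # constant hypothesis
    print(exact_in_ball_loss(h, c, (1, 1, 0), budget=1))  # 1: one flip breaks agreement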
We start with two negative results. We show that no non-trivial concept class can be robustly learned in the distribution-free setting against an adversary who can perturb just a single input bit. We then exhibit a sample-complexity lower bound: the class of monotone conjunctions, and any superclass of it on the Boolean hypercube, has sample complexity at least exponential in the adversary's budget (that is, the maximum number of bits it can perturb on each input). This implies, in particular, that these classes cannot be robustly learned under the uniform distribution against an adversary who can perturb ω(log n) bits of the input.
As a first route to obtaining robust learning guarantees, we consider restricting the class of distributions over which training and testing data are drawn. We focus on learning problems with probability distributions on the input data that satisfy a Lipschitz condition: nearby points have similar probability. We show that, if the adversary is restricted to perturbing O(log n) bits, then one can robustly learn the class of monotone conjunctions with respect to the class of log-Lipschitz distributions. We then extend this result to show the learnability of 1-decision lists, 2-decision lists and monotone k-decision lists in the same distributional and adversarial setting. We finish by showing that for every fixed k the class of k-decision lists has polynomial sample complexity against a log(n)-bounded adversary. The advantage of considering intermediate subclasses of k-decision lists is that we are able to obtain improved sample complexity bounds for these cases.
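For concreteness, the sketch below checks the log-Lipschitz condition by brute force on {0,1}^n: a distribution is α-log-Lipschitz if the probabilities of any two Hamming neighbours differ by a factor of at most e^α. This is an illustrative check, exponential in n, not part of the thesis's algorithms; the uniform distribution passes for any α ≥ 0.

    from itertools import product
    from math import log

    def is_log_lipschitz(pmf, alpha, n):
        # Compare every point of the hypercube with its n neighbours.
        for x in product((0, 1), repeat=n):
            for i in range(n):
                y = list(x)
                y[i] ^= 1
                if abs(log(pmf(x)) - log(pmf(tuple(y)))) > alpha:
                    return False
        return True

    print(is_log_lipschitz(lambda x: 1 / 2 ** 3, alpha=0.1, n=3))  # True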
As a second route, we study learning models where the learner is given more power through the use of local queries. The first learning model we consider uses local membership queries (LMQs), where the learner can query the label of points near the training sample. We show that, under the uniform distribution, the exponential dependence on the adversary's budget to robustly learn conjunctions and any superclass remains inevitable even when the learner is given access to LMQs in addition to random examples. Faced with this negative result, we introduce a local equivalence query oracle, which returns whether the hypothesis and target concept agree in a given region around a point in the training sample, as well as a counterexample if one exists. We show a separation result: on the one hand, if the query radius λ is strictly smaller than the adversary's perturbation budget ρ, then distribution-free robust learning is impossible for a wide variety of concept classes; on the other hand, the setting λ = ρ allows us to develop robust empirical risk minimization algorithms in the distribution-free setting. We then bound the query complexity of these algorithms using online learning guarantees and further improve these bounds for the special case of conjunctions. We follow by giving a robust learning algorithm for halfspaces on {0,1}^n. Finally, since the query complexity for halfspaces on R^n is unbounded, we instead consider adversaries with bounded precision and give query complexity upper bounds in this setting as well.
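A counterexample-driven reading of the λ = ρ regime can be sketched as follows, assuming a local equivalence oracle leq(h, x) that returns None when the hypothesis matches the target on the radius-ρ ball around x and a labelled counterexample otherwise, together with any learner that returns a hypothesis consistent with a set of labelled examples. All names are illustrative rather than the thesis's notation, and termination within max_rounds is not claimed in general.

    def robust_erm_with_leq(sample, leq, learner, max_rounds=1000):
        # Start from a hypothesis consistent with the labelled sample,
        # then repeatedly probe the ball around each training point and
        # fold any returned counterexample back into the constraint set.
        constraints = list(sample)
        h = learner(constraints)
        for _ in range(max_rounds):
            counterexample = None
            for x, _ in sample:
                counterexample = leq(h, x)   # None, or a pair (z, c(z))
                if counterexample is not None:
                    break
            if counterexample is None:
                return h                     # robustly consistent on the sample
            constraints.append(counterexample)
            h = learner(constraints)         # re-fit with the new constraint
        return h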