8 research outputs found

    PAC Classification based on PAC Estimates of Label Class Distributions

    A standard approach in pattern classification is to estimate the distributions of the label classes, and then to apply the Bayes classifier to the estimates of the distributions in order to classify unlabeled examples. As one might expect, the better our estimates of the label class distributions, the better the resulting classifier will be. In this paper we make this observation precise by identifying risk bounds of a classifier in terms of the quality of the estimates of the label class distributions. We show how PAC learnability relates to estimates of the distributions that have a PAC guarantee on their L1 distance from the true distribution, and we bound the increase in negative log likelihood risk in terms of PAC bounds on the KL-divergence. We give an inefficient but general-purpose smoothing method for converting an estimated distribution that is good under the L1 metric into a distribution that is good under the KL-divergence.
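
A minimal sketch of the plug-in idea described in this abstract, under illustrative assumptions (a small finite domain, two classes, and uniform-mixing smoothing with a hypothetical weight `alpha`); the paper's own smoothing construction and its quantitative bounds are not reproduced here.

```python
import numpy as np

# Illustrative finite-domain sketch (not the paper's construction):
# 1) estimate each label class distribution,
# 2) smooth the estimate so that a distribution close in L1 also behaves
#    well under KL-divergence (no zero-probability points),
# 3) classify with the Bayes rule applied to the estimates.

def smooth(est_probs, alpha=0.01):
    """Mix an estimated distribution with the uniform distribution.

    Removes zero-probability atoms, so log-likelihoods (and hence the
    KL-divergence to the true distribution) stay finite.
    """
    k = len(est_probs)
    return (1.0 - alpha) * est_probs + alpha / k

def plug_in_bayes(x, class_priors, class_dists):
    """Label x with the class maximizing prior * estimated class-conditional mass."""
    scores = class_priors * class_dists[:, x]
    return int(np.argmax(scores))

# Toy usage: a domain of 5 points and 2 label classes.
est = np.array([[0.5, 0.5, 0.0, 0.0, 0.0],   # estimate for class 0 (contains zeros)
                [0.0, 0.1, 0.3, 0.3, 0.3]])  # estimate for class 1
smoothed = np.vstack([smooth(row) for row in est])
priors = np.array([0.5, 0.5])
print([plug_in_bayes(x, priors, smoothed) for x in range(5)])
```

Mixing with the uniform distribution keeps every probability bounded away from zero, which keeps the negative log likelihood finite; how tightly the resulting KL-divergence tracks the original L1 error depends on the choice of alpha.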

    Pattern classification via unsupervised learners

    We consider classification problems in a variant of the Probably Approximately Correct (PAC) learning framework, in which an unsupervised learner creates a discriminant function over each class and observations are labeled with the class whose discriminant returns the highest value. Consideration is given to whether this approach gains significant advantage over traditional discriminant techniques. It is shown that PAC-learning distributions over class labels under L1 distance or KL-divergence implies PAC classification in this framework. We give bounds on the regret associated with the resulting classifier, taking into account the possibility of variable misclassification penalties. We demonstrate the advantage of estimating the a posteriori probability distributions over class labels in the setting of Optical Character Recognition. We show that unsupervised learners can be used to learn a class of probabilistic concepts (stochastic rules denoting the probability that an observation has a positive label in a 2-class setting). This demonstrates a situation where unsupervised learners can be used even when it is hard to learn distributions over class labels; in this case the discriminant functions do not estimate the class probability densities. We use a standard state-merging technique to PAC-learn a class of probabilistic automata and show that, by learning the distribution over outputs under the weaker L1 distance rather than KL-divergence, we are able to learn without knowledge of the expected length of an output. It is also shown that for a restricted class of these automata learning under L1 distance is equivalent to learning under KL-divergence.
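
As a rough illustration of the framework in this abstract (not the paper's algorithms), the sketch below uses per-class histogram estimators as the unsupervised learners and labels a point either by the highest discriminant value or, when misclassification penalties vary, by minimizing expected cost under the estimated a posteriori class probabilities. The histogram learners, the cost matrix, and the toy data are all hypothetical.

```python
import numpy as np

# Each class y gets a discriminant f_y fit from that class's data alone
# (here: an empirical histogram over a finite domain). An observation is
# labeled by the highest-scoring class, or by minimum expected cost when
# misclassification penalties differ between classes.

def fit_histogram(samples, domain_size):
    """Unsupervised per-class learner: empirical distribution over a finite domain."""
    counts = np.bincount(samples, minlength=domain_size).astype(float)
    return counts / counts.sum()

def classify(x, discriminants, priors, cost=None):
    scores = priors * discriminants[:, x]      # proportional to P(y | x)
    if cost is None:
        return int(np.argmax(scores))          # highest-discriminant rule
    posterior = scores / scores.sum()
    expected_cost = cost.T @ posterior         # expected penalty of predicting each label
    return int(np.argmin(expected_cost))

# Toy usage: two classes over a domain of 4 symbols, with asymmetric penalties.
class0 = np.array([0, 0, 1, 1, 1])
class1 = np.array([2, 2, 3, 3, 1])
disc = np.vstack([fit_histogram(class0, 4), fit_histogram(class1, 4)])
priors = np.array([0.5, 0.5])
cost = np.array([[0.0, 1.0],    # cost[true][predicted]
                 [5.0, 0.0]])   # mislabeling a class-1 example is five times worse
print(classify(1, disc, priors), classify(1, disc, priors, cost))
```

In the toy run, the same observation is assigned to class 0 under the plain highest-discriminant rule but to class 1 once the asymmetric penalty is taken into account, which is the kind of regret trade-off the bounds in the paper address.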

    Sample complexity of robust learning against evasion attacks

    It is becoming increasingly important to understand the vulnerability of machine learning models to adversarial attacks. One of the fundamental problems in adversarial machine learning is to quantify how much training data is needed in the presence of so-called evasion attacks, where data is corrupted at test time. In this thesis, we work with the exact-in-the-ball notion of robustness and study the feasibility of adversarially robust learning from the perspective of learning theory, focusing on sample complexity. We start with two negative results. We show that no non-trivial concept class can be robustly learned in the distribution-free setting against an adversary who can perturb just a single input bit. We then exhibit a sample-complexity lower bound: the class of monotone conjunctions, and any superclass of it on the boolean hypercube, has sample complexity at least exponential in the adversary's budget (that is, the maximum number of bits it can perturb on each input). This implies, in particular, that these classes cannot be robustly learned under the uniform distribution against an adversary who can perturb ω(log n) bits of the input.

    As a first route to obtaining robust learning guarantees, we consider restricting the class of distributions over which training and testing data are drawn. We focus on learning problems whose input distributions satisfy a Lipschitz condition: nearby points have similar probability. We show that, if the adversary is restricted to perturbing O(log n) bits, then one can robustly learn the class of monotone conjunctions with respect to the class of log-Lipschitz distributions. We then extend this result to show the learnability of 1-decision lists, 2-decision lists and monotone k-decision lists in the same distributional and adversarial setting, and finish by showing that for every fixed k the class of k-decision lists has polynomial sample complexity against a log(n)-bounded adversary. The advantage of considering intermediate subclasses of k-decision lists is that we obtain improved sample complexity bounds for these cases.

    As a second route, we study learning models where the learner is given more power through the use of local queries. The first model uses local membership queries (LMQ), where the learner can query the label of points near the training sample. We show that, under the uniform distribution, the exponential dependence on the adversary's budget needed to robustly learn conjunctions and any superclass remains inevitable even when the learner is given access to LMQs in addition to random examples. Faced with this negative result, we introduce a local equivalence query oracle, which returns whether the hypothesis and target concept agree in a given region around a point in the training sample, together with a counterexample if one exists. We show a separation result: on the one hand, if the query radius λ is strictly smaller than the adversary's perturbation budget ρ, then distribution-free robust learning is impossible for a wide variety of concept classes; on the other hand, the setting λ = ρ allows us to develop robust empirical risk minimization algorithms in the distribution-free setting. We then bound the query complexity of these algorithms using online learning guarantees and further improve these bounds for the special case of conjunctions. We also give a robust learning algorithm for halfspaces on {0,1}^n. Finally, since the query complexity for halfspaces on R^n is unbounded, we instead consider adversaries with bounded precision and give query complexity upper bounds in this setting as well.
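
The exact-in-the-ball robustness notion used throughout this abstract can be made concrete with a small brute-force check: a hypothesis is robustly correct at x against a budget-ρ adversary only if it agrees with the target concept on every point within Hamming distance ρ of x. The sketch below is purely illustrative (exponential-time enumeration, toy monotone conjunctions); it is not one of the thesis's algorithms.

```python
from itertools import combinations

# Exact-in-the-ball robust risk on the boolean hypercube {0,1}^n:
# a hypothesis h is robustly correct on x if h agrees with the target
# concept c on every point within Hamming distance rho of x.

def monotone_conjunction(relevant_bits):
    """Concept that is 1 iff all bits in `relevant_bits` are set."""
    return lambda x: all(x[i] for i in relevant_bits)

def robustly_correct(h, c, x, rho):
    n = len(x)
    for k in range(rho + 1):
        for flips in combinations(range(n), k):   # all perturbations of up to rho bits
            z = list(x)
            for i in flips:
                z[i] = 1 - z[i]
            if h(z) != c(z):
                return False
    return True

def empirical_robust_risk(h, c, sample, rho):
    return sum(not robustly_correct(h, c, x, rho) for x in sample) / len(sample)

# Toy usage on {0,1}^5: the hypothesis ignores one relevant bit of the target.
target = monotone_conjunction({0, 1})
hyp = monotone_conjunction({0})
sample = [[1, 1, 0, 0, 0], [0, 0, 0, 0, 0], [1, 0, 1, 1, 1]]
print(empirical_robust_risk(hyp, target, sample, rho=1))
```

The enumeration over all points in the Hamming ball is exponential in ρ, which is exactly why the sample-complexity and query-complexity questions studied in the thesis are non-trivial.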

    Some discriminant-based PAC algorithms

    A classical approach in multi-class pattern classification is the following. Estimate the probability distributions that generated the observations for each label class, and then label new instances by applying the Bayes classifier to the estimated distributions. That approach provides more useful information than just a class label; it also provides estimates of the conditional distribution of class labels in situations where there is class overlap. We would like to know whether it is harder to build accurate classifiers via this approach than by techniques that may process all data with distinct labels together. In this paper we make that question precise by considering it in the context of PAC learnability. We propose two restrictions on the PAC learning framework that are intended to correspond with the above approach, and consider their relationship with standard PAC learning. Our main restriction of interest leads to some interesting algorithms showing that it is not stronger (more restrictive) than various other well-known restrictions on PAC learning. An alternative, slightly milder restriction turns out to be almost equivalent to unrestricted PAC learning.
