3 research outputs found

    Adversarial Risk and Robustness: General Definitions and Implications for the Uniform Distribution

    We study adversarial perturbations when the instances are uniformly distributed over $\{0,1\}^n$. We study both "inherent" bounds that apply to any problem and any classifier for such a problem, as well as bounds that apply to specific problems and specific hypothesis classes. As the current literature contains multiple definitions of adversarial risk and robustness, we start by giving a taxonomy of these definitions based on their goals, and we identify one of them as the definition that guarantees misclassification by pushing the instances into the error region. We then study some classic algorithms for learning monotone conjunctions and compare their adversarial risk and robustness under the different definitions by attacking the hypotheses using instances drawn from the uniform distribution. We observe that these definitions sometimes lead to significantly different bounds. Thus, this study advocates the use of the error-region definition, even though other definitions may coincide with it in other contexts. Using the error-region definition of adversarial perturbations, we then study inherent bounds on the risk and robustness of any classifier for any classification problem whose instances are uniformly distributed over $\{0,1\}^n$. Using the isoperimetric inequality for the Boolean hypercube, we show that, for initial error $0.01$, there always exists an adversarial perturbation that changes $O(\sqrt{n})$ bits of the instances to increase the risk to $0.5$, making the classifier's decisions meaningless. Furthermore, by also using the central limit theorem, we show that as $n \to \infty$, at most $c \cdot \sqrt{n}$ bits of perturbation, for a universal constant $c < 1.17$, suffice to increase the risk to $0.5$, and the same $c \cdot \sqrt{n}$ bits of perturbation on average suffice to increase the risk to $1$, hence bounding the robustness by $c \cdot \sqrt{n}$.
    Comment: Full version of a work with the same title that will appear in NIPS 2018; 31 pages containing 5 figures, 1 table, and 2 algorithms.
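    To make the isoperimetric bound concrete, here is a minimal sketch (an illustration, not the paper's code) that checks the $c \cdot \sqrt{n}$ budget empirically when the error region is the weight-threshold set $E = \{x : \sum_i x_i \ge \theta\}$ of measure about $0.01$; such sets are the extremal case for the hypercube isoperimetric inequality. The threshold, sample size, and names below are assumptions for the demo.

```python
# Minimal sketch (illustrative, not the paper's code): for uniform x over
# {0,1}^n, take the error region E = {x : sum(x) >= theta} with measure ~0.01.
# Only the Hamming weight of x matters for this E, so sample weights directly.
import numpy as np

rng = np.random.default_rng(0)
n, samples = 10_000, 1_000_000

weights = rng.binomial(n, 0.5, size=samples)   # weight of a uniform x ~ Bin(n, 1/2)
theta = int(np.quantile(weights, 0.99))        # chosen so Pr[x in E] ~ 0.01

# The adversary flips 0-bits to 1, so the distance from x to E is the
# weight deficit max(0, theta - sum(x)).
dist_to_E = np.maximum(0, theta - weights)

budget = int(1.17 * np.sqrt(n))                # the constant c < 1.17 from the abstract
risk_after_attack = np.mean(dist_to_E <= budget)
print(f"initial error ~0.01, budget={budget} flips -> risk ~ {risk_after_attack:.3f}")
# Prints a risk near 0.5, matching the c*sqrt(n) bound.
```

    Relatedly, `np.mean(dist_to_E)` also comes out near $1.17 \cdot \sqrt{n}$ in this setup, matching the on-average budget claimed for pushing the risk to $1$.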

    Exact Learning from an Honest Teacher That Answers Membership Queries

    Consider a teacher that holds a function $f: X \to R$ from some class of functions $C$. The teacher receives from the learner an element $d$ in the domain $X$ (a query) and returns the value of the function at $d$, $f(d) \in R$. The learner's goal is to find $f$ with a minimum number of queries, optimal time complexity, and optimal resources. In this survey, we present some of the results known from the literature, the different techniques used, some new problems, and open problems.
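    To make the query model concrete, here is a minimal sketch (an illustration, not from the survey) of exact learning with membership queries for one particularly simple class, monotone conjunctions over $\{0,1\}^n$; the function names and the $(n+1)$-query strategy are assumptions for the example.

```python
# Minimal sketch of the membership-query model for one easy class:
# monotone conjunctions over {0,1}^n. All names here are illustrative.
from typing import Callable, List

def learn_monotone_conjunction(n: int, mq: Callable[[List[int]], int]) -> List[int]:
    """Exactly identifies a monotone conjunction with n + 1 membership queries.

    mq(x) plays the honest teacher: it returns f(x) for the hidden
    conjunction f. Variable i is in the conjunction iff flipping bit i
    of the all-ones assignment to 0 makes f output 0.
    """
    ones = [1] * n
    assert mq(ones) == 1, "a monotone conjunction accepts the all-ones assignment"
    relevant = []
    for i in range(n):
        probe = ones.copy()
        probe[i] = 0
        if mq(probe) == 0:       # f depends on variable i
            relevant.append(i)
    return relevant

# Usage: hidden target f(x) = x0 AND x2 over n = 4 variables.
def teacher(x):
    return int(x[0] == 1 and x[2] == 1)

print(learn_monotone_conjunction(4, teacher))   # -> [0, 2]
```

    Passing the teacher as a callable keeps the learner honest: it sees $f$ only through query answers, exactly as in the model.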

    On learning random DNF formulas under the uniform distribution

    Abstract: We study the average-case learnability of DNF formulas in the model of learning from uniformly distributed random examples. We define a natural model of random monotone DNF formulas and give an efficient algorithm which, with high probability, can learn, for any fixed constant $\gamma > 0$, a random $t$-term monotone DNF for any $t = O(n^{2-\gamma})$. We also define a model of random non-monotone DNF and give an efficient algorithm which, with high probability, can learn a random $t$-term DNF for any $t = O(n^{3/2-\gamma})$. These are the first known algorithms that can learn a broad class of polynomial-size DNF in a reasonable average-case model of learning from random examples.
    ACM Classification: I.2.6, F.2.2, G.1.2, G.
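    The learning model itself is easy to simulate. The sketch below (a toy illustration, not the paper's algorithm) labels uniform random examples with a random monotone DNF and flags relevant variables by their correlation with the label; all parameters, the threshold, and the names are assumptions for the demo.

```python
# Toy sketch of the model only (not the paper's algorithm): draw uniform
# random examples labeled by a random t-term monotone DNF and recover the
# relevant variables from their correlation with the label.
import numpy as np

rng = np.random.default_rng(1)
n, t, k, m = 20, 4, 3, 100_000      # n vars, t terms of k variables, m examples

terms = [rng.choice(n, size=k, replace=False) for _ in range(t)]

def dnf(X):
    """Evaluate the monotone DNF: OR over terms, AND within a term."""
    return np.any([X[:, term].all(axis=1) for term in terms], axis=0)

X = rng.integers(0, 2, size=(m, n))
y = dnf(X).astype(float)

# Under the uniform distribution, variables appearing in some term correlate
# positively with the label; irrelevant variables have correlation ~ 0.
corr = np.array([np.corrcoef(X[:, i], y)[0, 1] for i in range(n)])
guessed = sorted(int(i) for i in np.where(corr > 0.05)[0])
truth = sorted({int(v) for term in terms for v in term})
print("relevant variables:", truth)
print("recovered by correlation:", guessed)
```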