3 research outputs found
Adversarial Risk and Robustness: General Definitions and Implications for the Uniform Distribution
We study adversarial perturbations when the instances are uniformly
distributed over {0,1}^n. We study both "inherent" bounds that apply to any
problem and any classifier for such a problem as well as bounds that apply to
specific problems and specific hypothesis classes.
As the current literature contains multiple definitions of adversarial risk
and robustness, we start by giving a taxonomy for these definitions based on
their goals, and we identify one of them as the one guaranteeing misclassification
by pushing the instances to the error region. We then study some classic
algorithms for learning monotone conjunctions and compare their adversarial
risk and robustness under different definitions by attacking the hypotheses
using instances drawn from the uniform distribution. We observe that sometimes
these definitions lead to significantly different bounds. Thus, this study
advocates for the use of the error-region definition, even though other
definitions, in other contexts, may coincide with the error-region definition.
Using the error-region definition of adversarial perturbations, we then study
inherent bounds on risk and robustness of any classifier for any classification
problem whose instances are uniformly distributed over {0,1}^n. Using the
isoperimetric inequality for the Boolean hypercube, we show that for initial
error 0.01, there always exists an adversarial perturbation that changes
O(√n) bits of the instances to increase the risk to 0.99, making the
classifier's decisions meaningless. Furthermore, by also using the central
limit theorem we show that when n → ∞, at most c · √n bits
of perturbations, for a universal constant c < 1.17, suffice for increasing
the risk to 0.99, and the same c · √n bits of perturbations on
average suffice to increase the risk to 1, hence bounding the robustness by
c · √n.
Comment: Full version of a work with the same title that will appear in NIPS
2018; 31 pages, 5 figures, 1 table, 2 algorithms.
Exact Learning from an Honest Teacher That Answers Membership Queries
Given a teacher that holds a function f from some class of functions C, the
teacher receives from the learner an element x of the domain (a query) and
returns the value of the function at x, f(x). The learner's goal is to find
f with a minimum number of queries, optimal time complexity, and optimal
resources.
In this survey, we present some of the results known from the literature, the
different techniques used, some new problems, and open problems.
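The membership-query model can be made concrete on the classic example of monotone conjunctions, where n queries suffice for exact learning: zero out one bit of the all-ones input at a time and see whether the teacher's answer drops from 1 to 0. A minimal sketch (the function names and toy teacher are illustrative):

```python
def learn_monotone_conjunction(n, membership_query):
    """Exactly learn a monotone conjunction over x_0..x_{n-1} using n
    membership queries: bit i is in the target iff zeroing it makes
    the all-ones input negative."""
    relevant = []
    for i in range(n):
        probe = [1] * n
        probe[i] = 0
        if membership_query(probe) == 0:  # bit i is required by the target
            relevant.append(i)
    return relevant

# toy teacher holding the conjunction x_0 AND x_3
target = {0, 3}
teacher = lambda x: int(all(x[i] == 1 for i in target))
print(learn_monotone_conjunction(5, teacher))  # -> [0, 3]
```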
On learning random DNF formulas under the uniform distribution
Abstract: We study the average-case learnability of DNF formulas in the model of learning from uniformly distributed random examples. We define a natural model of random monotone DNF formulas and give an efficient algorithm which with high probability can learn, for any fixed constant γ > 0, a random t-term monotone DNF for any t = O(n^(2−γ)). We also define a model of random non-monotone DNF and give an efficient algorithm which with high probability can learn a random t-term DNF for any t = O(n^(3/2−γ)). These are the first known algorithms that can learn a broad class of polynomial-size DNF in a reasonable average-case model of learning from random examples. ACM Classification: I.2.6, F.2.2, G.1.2, G.
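As a rough illustration of the setting (not the paper's model or algorithm), one can sample a monotone DNF from a simple random model and check how it labels uniform random examples. The term length k and all other parameters below are arbitrary illustrative choices:

```python
import random

random.seed(1)

def random_monotone_dnf(n, t, k):
    # illustrative random model (an assumption, not the paper's exact
    # distribution): t terms, each a conjunction of k distinct variables
    # drawn uniformly from x_0..x_{n-1}
    return [random.sample(range(n), k) for _ in range(t)]

def evaluate(dnf, x):
    # a monotone DNF is satisfied iff some term has all its variables set to 1
    return int(any(all(x[i] for i in term) for term in dnf))

n, t, k = 50, 8, 6
dnf = random_monotone_dnf(n, t, k)
sample = [[random.randint(0, 1) for _ in range(n)] for _ in range(5000)]
frac_positive = sum(evaluate(dnf, x) for x in sample) / len(sample)
print(f"fraction of uniform examples labeled 1: {frac_positive:.3f}")
```

Each k-variable term is satisfied by a uniform example with probability 2^(−k), so with t terms the positive rate stays bounded away from 0 and 1 for suitable k, which is what makes learning from uniform examples informative.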