Property Testing of Boolean Functions
The field of property testing has been studied for decades, and Boolean functions are among the most classical objects of study in this area.
In this thesis we consider the property testing of Boolean functions: distinguishing whether an unknown Boolean function has a certain property (or equivalently, belongs to a certain class of functions), or is far from having this property. We study this problem under both the standard setting, where the distance between functions is measured with respect to the uniform distribution, and the distribution-free setting, where the distance is measured with respect to a fixed but unknown distribution.
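Both notions of distance can be made concrete with a small sketch. The following Python snippet (names and parameters are illustrative assumptions, not from the thesis) estimates the distance between two Boolean functions with respect to a sampler for the underlying distribution:

```python
import random

def estimate_distance(f, g, sample, trials=10000):
    """Estimate dist_D(f, g) = Pr_{x ~ D}[f(x) != g(x)] by sampling.

    sample() draws a point from the distribution D: uniform over {0,1}^n
    in the standard setting, fixed but unknown in the distribution-free
    setting (where the tester gets samples from D plus query access to f).
    """
    disagreements = 0
    for _ in range(trials):
        x = sample()
        disagreements += f(x) != g(x)
    return disagreements / trials

# Uniform sampler for the standard setting (n = 20 chosen arbitrarily).
n = 20
uniform = lambda: tuple(random.randint(0, 1) for _ in range(n))

# A function is epsilon-far from a property if its distance to every
# function with the property exceeds epsilon; a tester must distinguish
# this case from the function having the property, using few queries.
```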
We obtain new upper and lower bounds on the query complexity of testing various properties of Boolean functions:
- Under the standard model of property testing, we prove a lower bound of \Omega(n^{1/3}) on the query complexity of any adaptive algorithm that tests whether an n-variable Boolean function is monotone, improving the previous best lower bound of \Omega(n^{1/4}) by Belovs and Blais in 2015. We also study unateness, a natural generalization of monotonicity, and prove a lower bound of \Omega(n^{2/3}) for adaptive algorithms and a lower bound of \Omega(n) for non-adaptive algorithms with one-sided error that test this property. The latter lower bound matches the previous upper bound proved by Chakrabarty and Seshadhri in 2016, up to poly-logarithmic factors of n.
- We also study the distribution-free testing of k-juntas, where a function is a k-junta if it depends on at most k of its n input variables (see the sketch after this list). Standard property testing of k-juntas under the uniform distribution is well understood: the optimal query complexity is \Theta(k) for adaptive testers and \Theta(k^{3/2}) for non-adaptive testers, both tight up to poly-logarithmic factors of k. The problem is far less well understood in the more general distribution-free setting. Previous results only imply an O(2^k)-query algorithm for distribution-free testing of k-juntas, and apart from lower bounds under the uniform distribution, which extend naturally to this more general setting, no other lower bounds were known. We significantly improve these results with an O(k^2)-query adaptive distribution-free tester for k-juntas, as well as an exponential lower bound of \Omega(2^{k/3}) on the query complexity of non-adaptive distribution-free testers for this problem. These results illustrate the hardness of distribution-free testing and the significant role adaptivity plays in this setting.
- Finally, we study distribution-free testing of other basic Boolean functions. In the distribution-free setting, a lower bound of \Omega(n^{1/5}) was proved for testing conjunctions, decision lists, and linear threshold functions by Glasner and Servedio in 2009, and an O(n^{1/3})-query algorithm for testing monotone conjunctions was given by Dolev and Ron in 2011. Building on techniques developed in these two papers, we improve these lower bounds to \Omega(n^{1/3}), and specifically for the class of conjunctions we present an adaptive algorithm with query complexity O(n^{1/3}). Our lower and upper bounds for testing conjunctions are thus tight, up to poly-logarithmic factors of n.
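As a minimal illustration of the junta property itself (not of the testers above; all names here are hypothetical), the following Python sketch checks by brute force whether a function on {0,1}^n depends on a given coordinate, and hence whether it is a k-junta, for small n:

```python
from itertools import product

def depends_on(f, n, i):
    """Return True if f: {0,1}^n -> {0,1} depends on coordinate i,
    i.e. flipping bit i changes the value on some input."""
    for x in product((0, 1), repeat=n):
        y = list(x)
        y[i] ^= 1
        if f(x) != f(tuple(y)):
            return True
    return False

def is_k_junta(f, n, k):
    """Brute-force check that f depends on at most k coordinates.
    Exponential in n; a tester must instead decide, with few queries,
    whether f is a k-junta or far from every k-junta."""
    relevant = [i for i in range(n) if depends_on(f, n, i)]
    return len(relevant) <= k

# Example: f depends only on coordinates 0 and 2, so it is a 2-junta.
f = lambda x: x[0] & x[2]
assert is_k_junta(f, n=5, k=2)
```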
Sample complexity of robust learning against evasion attacks
It is becoming increasingly important to understand the vulnerability of machine learning models to adversarial attacks. One of the fundamental problems in adversarial machine learning is to quantify how much training data is needed in the presence of so-called evasion attacks, where data is corrupted at test time. In this thesis, we work with the exact-in-the-ball notion of robustness and study the feasibility of adversarially robust learning from a learning-theoretic perspective, with a focus on sample complexity.
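Under the exact-in-the-ball notion, a hypothesis errs on a point if it disagrees with the target concept anywhere in the perturbation ball around that point. A minimal brute-force Python sketch of this robust loss on the Boolean hypercube (all names are illustrative assumptions, and the enumeration is only feasible for small budgets):

```python
from itertools import combinations

def hamming_ball(x, rho):
    """Yield all points within Hamming distance rho of x in {0,1}^n."""
    n = len(x)
    for r in range(rho + 1):
        for idxs in combinations(range(n), r):
            z = list(x)
            for i in idxs:
                z[i] ^= 1
            yield tuple(z)

def robust_loss(h, c, x, rho):
    """Exact-in-the-ball loss: h errs on x if it disagrees with the
    target concept c somewhere in the rho-ball around x."""
    return any(h(z) != c(z) for z in hamming_ball(x, rho))

# The robust risk is then the probability of robust_loss(h, c, x, rho)
# over x drawn from the (possibly unknown) input distribution.
```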
We start with two negative results. We show that no non-trivial concept class can be robustly learned in the distribution-free setting against an adversary who can perturb just a single input bit. We then exhibit a sample-complexity lower bound: the class of monotone conjunctions, and any superclass of it on the boolean hypercube, has sample complexity at least exponential in the adversary's budget (that is, the maximum number of bits it can perturb on each input). This implies, in particular, that these classes cannot be robustly learned under the uniform distribution against an adversary who can perturb ω(log n) bits of the input.
As a first route to obtaining robust learning guarantees, we consider restricting the class of distributions from which training and testing data are drawn. We focus on learning problems where the probability distribution on the input data satisfies a Lipschitz condition: nearby points have similar probability (see the sketch below). We show that, if the adversary is restricted to perturbing O(log n) bits, then one can robustly learn the class of monotone conjunctions with respect to the class of log-Lipschitz distributions. We then extend this result to show the learnability of 1-decision lists, 2-decision lists and monotone k-decision lists in the same distributional and adversarial setting. We finish by showing that for every fixed k the class of k-decision lists has polynomial sample complexity against a log(n)-bounded adversary. The advantage of considering intermediate subclasses of k-decision lists is that we are able to obtain improved sample complexity bounds for these cases.
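To make the Lipschitz condition concrete: one common formulation says a distribution on {0,1}^n is α-log-Lipschitz when the probabilities of any two points at Hamming distance one differ by at most a factor of α (the exact parameterization here is an assumption of this sketch, not the thesis's definition). A small Python check for product distributions:

```python
def product_prob(x, p):
    """Probability of x under a product of Bernoulli(p) bits."""
    prob = 1.0
    for bit in x:
        prob *= p if bit == 1 else (1 - p)
    return prob

def is_log_lipschitz(p, alpha):
    """A Bernoulli(p)^n product distribution satisfies the condition with
    factor alpha iff flipping one bit changes the probability by at most
    a factor alpha, i.e. max(p/(1-p), (1-p)/p) <= alpha."""
    ratio = max(p / (1 - p), (1 - p) / p)
    return ratio <= alpha

# The uniform distribution (p = 1/2) has neighbor ratio 1; a mildly
# biased product distribution with p = 0.6 has neighbor ratio 1.5.
assert is_log_lipschitz(0.5, 1.0)
assert is_log_lipschitz(0.6, 1.5)
```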
As a second route, we study learning models where the learner is given more power through the use of local queries. The first learning model we consider uses local membership queries (LMQ), where the learner can query the label of points near the training sample. We show that, under the uniform distribution, the exponential dependence on the adversary's budget needed to robustly learn conjunctions, and any superclass of them, remains inevitable even when the learner is given access to LMQs in addition to random examples. Faced with this negative result, we introduce a local equivalence query oracle, which returns whether the hypothesis and target concept agree in a given region around a point in the training sample, as well as a counterexample if one exists (a sketch follows below). We show a separation result: on the one hand, if the query radius λ is strictly smaller than the adversary's perturbation budget ρ, then distribution-free robust learning is impossible for a wide variety of concept classes; on the other hand, the setting λ = ρ allows us to develop robust empirical risk minimization algorithms in the distribution-free setting. We then bound the query complexity of these algorithms based on online learning guarantees and further improve these bounds for the special case of conjunctions. We then give a robust learning algorithm for halfspaces on {0,1}^n. Finally, since the query complexity for halfspaces on R^n is unbounded, we instead consider adversaries with bounded precision and give query complexity upper bounds in this setting as well.
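A minimal Python sketch of such a local equivalence query oracle (all details are illustrative assumptions, not the thesis's implementation):

```python
from itertools import combinations

def local_equivalence_query(h, c, x, lam):
    """Report whether hypothesis h and target concept c agree on every
    point within Hamming distance lam of x; if not, return a
    counterexample from the ball."""
    n = len(x)
    for r in range(lam + 1):
        for idxs in combinations(range(n), r):
            z = list(x)
            for i in idxs:
                z[i] ^= 1
            z = tuple(z)
            if h(z) != c(z):
                return False, z      # disagreement witnessed at z
    return True, None                # h and c agree on the whole ball

# With query radius lam equal to the adversary's budget rho, a robust
# ERM algorithm can certify robust correctness on each training point,
# or obtain a counterexample to learn from.
```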
Specification and Simulation of Statistical Query Algorithms for Efficiency and Noise Tolerance
A recent innovation in computational learning theory is the statistical query (SQ) model. The advantage of specifying learning algorithms in this model is that SQ algorithms can be simulated in the probably approximately correct (PAC) model, both in the absence and in the presence of noise. However, simulations of SQ algorithms in the PAC model have non-optimal time and sample complexities. In this paper, we introduce a new method for specifying statistical query algorithms based on a type of relative error, and provide simulations in the noise-free and noise-tolerant PAC models which yield more efficient algorithms. Requests for estimates of statistics in this new model take the following form: "Return an estimate of the statistic within a 1 ± μ factor, or return ⊥, promising that the statistic is less than θ." In addition to showing that this is a very natural language for specifying learning algorithms, we also show that this new specification is polynomially equivalent to standard SQ, and thus, known learnability and hardness results for statistical query learning are preserved. We then give highly efficient PAC simulations of relative error SQ algorithms. We show that the learning algorithms obtained by simulating efficient relative error SQ algorithms, both in the absence of noise and in the presence of malicious noise, have roughly optimal sample complexity. We also show that the simulation of efficient relative error SQ algorithms in the presence of classification noise yields learning algorithms at least as efficient as those obtained through standard methods, and in some cases improved, roughly optimal results are achieved. The sample complexities for all of these simulations are based on the d_ν metric, which is a type of relative error metric useful for quantities which are small or even zero. We show that uniform convergence with respect to the d_ν metric yields "uniform convergence" with respect to (μ, θ) accuracy. Finally, while we show that many specific learning algorithms can be written as highly efficient relative error SQ algorithms, we also show, in fact, that all SQ algorithms can be written efficiently by proving general upper bounds on the complexity of (μ, θ) queries as a function of the accuracy parameter ε. As a consequence of this result, we give general upper bounds on the complexity of learning algorithms achieved through the use of relative error SQ algorithms and the simulations described above.
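A hedged Python sketch of how such a (μ, θ) request might be answered from random examples (the names and sample-size choice are assumptions; the paper's simulations are more careful):

```python
def relative_error_sq(chi, sample, mu, theta):
    """Simulate a (mu, theta) relative-error statistical query for the
    statistic E[chi(x, y)] from random labeled examples.

    Intended contract: return an estimate within a (1 +/- mu) factor of
    the true statistic, or None (the 'bottom' answer), promising that
    the statistic is below theta. The sample size below is a rough
    heuristic; meeting the contract with high probability requires a
    careful choice, as in the paper.
    """
    m = int(10 / (mu * mu * theta)) + 1
    estimate = sum(chi(*sample()) for _ in range(m)) / m
    # Answer bottom only when the estimate is safely below theta, so the
    # promise can hold despite estimation error.
    if estimate < theta * (1 - mu):
        return None
    return estimate

# Example query: the error of a fixed hypothesis h on labeled examples,
# chi = lambda x, y: int(h(x) != y).
```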
Optimal Bounds on Approximation of Submodular and XOS Functions by Juntas
We investigate the approximability of several classes of real-valued functions by functions of a small number of variables (juntas). Our main results are tight bounds on the number of variables required to approximate a function f: {0,1}^n -> [0,1] within \ell_2-error \epsilon over the uniform distribution:
1. If f is submodular, then it is \epsilon-close to a function of O((1/\epsilon^2) log(1/\epsilon)) variables. This is an exponential improvement over previously known results. We note that \Omega(1/\epsilon^2) variables are necessary even for linear functions.
2. If f is fractionally subadditive (XOS) it is \epsilon-close to a function of 2^{O(1/\epsilon^2)} variables. This result holds for all functions with low total \ell_1-influence and is a real-valued analogue of Friedgut's theorem for boolean functions. We show that 2^{\Omega(1/\epsilon)} variables are necessary even for XOS functions.
As applications of these results, we provide learning algorithms over the uniform distribution. For XOS functions, we give a PAC learning algorithm that runs in time 2^{poly(1/\epsilon)} poly(n). For submodular functions we give an algorithm in the more demanding PMAC learning model (Balcan and Harvey, 2011), which requires a multiplicative (1 + \gamma) factor approximation with probability at least 1 - \epsilon over the target distribution. Our uniform distribution algorithm runs in time 2^{\tilde{O}(1/(\gamma\epsilon)^2)} poly(n). This is the first algorithm in the PMAC model that, over the uniform distribution, can achieve a constant approximation factor arbitrarily close to 1 for all submodular functions. As follows from the lower bounds in (Feldman et al., 2013), both of these algorithms are close to optimal. We also give applications for proper learning, testing and agnostic learning with value queries of these classes.
Comment: Extended abstract appears in proceedings of FOCS 2013.
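To illustrate what \epsilon-closeness to a junta means here: in \ell_2 over the uniform distribution, the best approximation of f among functions depending only on a fixed set J of variables is the conditional expectation E[f | x_J], a standard fact. The brute-force Python sketch below (purely illustrative, not an algorithm from the paper) computes the resulting squared error for small n:

```python
from itertools import product

def best_junta_error(f, n, J):
    """Squared l2-error (under uniform {0,1}^n) of the best approximation
    of f by a function depending only on the coordinates in J, namely
    the conditional expectation g(x) = E[f | x_J]."""
    buckets = {}
    for x in product((0, 1), repeat=n):
        key = tuple(x[i] for i in J)
        buckets.setdefault(key, []).append(f(x))
    # Variance within each bucket measures what restricting to J loses.
    total, count = 0.0, 0
    for values in buckets.values():
        mean = sum(values) / len(values)
        total += sum((v - mean) ** 2 for v in values)
        count += len(values)
    return total / count

# Example: f depends only on coordinates 0 and 1, so the error is zero.
f = lambda x: (x[0] + x[1]) / 2
assert best_junta_error(f, n=4, J=(0, 1)) == 0.0
```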