On the sum-of-squares degree of symmetric quadratic functions
We study how well functions over the boolean hypercube of the form
$f_k(x) = (|x|-k)(|x|-k-1)$, where $|x|$ denotes the Hamming weight of $x$,
can be approximated by sums of squares of low-degree
polynomials, obtaining good bounds for the case of approximation in
$\ell_{\infty}$-norm as well as in $\ell_1$-norm. We describe three
complexity-theoretic applications: (1) a proof that the recent breakthrough
lower bound of Lee, Raghavendra, and Steurer on the positive semidefinite
extension complexity of the correlation and TSP polytopes cannot be improved
further by showing better sum-of-squares degree lower bounds on
$\ell_1$-approximation of $f_k$; (2) a proof that Grigoriev's lower bound on
the degree of Positivstellensatz refutations for the knapsack problem is
optimal, answering an open question from his work; (3) bounds on the query
complexity of quantum algorithms whose expected output approximates such
functions.
Comment: 33 pages. Second version fixes some typos and adds reference
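A quick sanity check on the object being approximated (a minimal sketch, assuming the form of $f_k$ as reconstructed above): each such symmetric quadratic depends on $x$ only through its Hamming weight and is nonnegative on the whole hypercube, which is what makes sum-of-squares approximation the natural question.

```python
from itertools import product

def f_k(x, k):
    # Symmetric quadratic from the abstract: depends on x only through
    # its Hamming weight |x|, and vanishes when |x| is k or k+1.
    w = sum(x)
    return (w - k) * (w - k - 1)

n, k = 5, 2
for x in product([0, 1], repeat=n):
    # A product of two consecutive integers is never negative, so
    # f_k >= 0 everywhere on {0,1}^n -- hence one can ask how well it
    # is approximated by sums of squares of low-degree polynomials.
    assert f_k(x, k) >= 0
```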
Optimal Bounds on Approximation of Submodular and XOS Functions by Juntas
We investigate the approximability of several classes of real-valued
functions by functions of a small number of variables ({\em juntas}). Our main
results are tight bounds on the number of variables required to approximate a
function $f:\{0,1\}^n \to [0,1]$ within $\ell_2$-error $\epsilon$ over
the uniform distribution: 1. If $f$ is submodular, then it is $\epsilon$-close
to a function of $O(\frac{1}{\epsilon^2}\log\frac{1}{\epsilon})$ variables.
This is an exponential improvement over previously known results. We note that
$\Omega(\frac{1}{\epsilon^2})$ variables are necessary even for linear
functions. 2. If $f$ is fractionally subadditive (XOS) it is $\epsilon$-close
to a function of $2^{O(1/\epsilon^2)}$ variables. This result holds for all
functions with low total $\ell_1$-influence and is a real-valued analogue of
Friedgut's theorem for boolean functions. We show that $2^{\Omega(1/\epsilon)}$
variables are necessary even for XOS functions.
As applications of these results, we provide learning algorithms over the
uniform distribution. For XOS functions, we give a PAC learning algorithm that
runs in time $2^{poly(1/\epsilon)} \cdot poly(n)$. For submodular functions we give
an algorithm in the more demanding PMAC learning model (Balcan and Harvey,
2011) which requires a multiplicative $(1+\gamma)$-factor approximation with
probability at least $1-\epsilon$ over the target distribution. Our uniform
distribution algorithm runs in time $2^{poly(1/(\gamma\epsilon))} \cdot poly(n)$.
This is the first algorithm in the PMAC model that, over the uniform
distribution, can achieve a constant approximation factor arbitrarily close to 1
for all submodular functions. As follows from the lower bounds in (Feldman et
al., 2013), both of these algorithms are close to optimal. We also give
applications for proper learning, testing and agnostic learning with value
queries of these classes.
Comment: Extended abstract appears in proceedings of FOCS 2013
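For intuition about the approximation notion in play (an illustration, not the paper's algorithm): under the uniform distribution, the best $\ell_2$ approximation of $f$ by a function of a fixed coordinate set $S$ is the conditional expectation of $f$ given those coordinates. A brute-force sketch for small $n$:

```python
from itertools import product
from statistics import mean

def best_junta_on(f, n, S):
    # Best L2 approximation of f by a function of the coordinates in S
    # (uniform distribution): average f over all settings of the
    # remaining coordinates.
    S = sorted(S)
    rest = [i for i in range(n) if i not in S]
    table = {}
    for ys in product([0, 1], repeat=len(S)):
        vals = []
        for yr in product([0, 1], repeat=len(rest)):
            x = [0] * n
            for i, b in zip(S, ys):
                x[i] = b
            for i, b in zip(rest, yr):
                x[i] = b
            vals.append(f(tuple(x)))
        table[ys] = mean(vals)
    return lambda x: table[tuple(x[i] for i in S)]

def l2_error(f, g, n):
    # sqrt of E[(f - g)^2] over the uniform distribution on {0,1}^n.
    return mean((f(x) - g(x)) ** 2
                for x in product([0, 1], repeat=n)) ** 0.5

# Example: a budget-additive (hence submodular) function with values in [0,1].
f = lambda x: min(1.0, 0.5 * x[0] + 0.3 * x[1] + 0.1 * x[2] + 0.1 * x[3])
g = best_junta_on(f, 4, {0, 1})
print(l2_error(f, g, 4))
```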
Approximate resilience, monotonicity, and the complexity of agnostic learning
A function $f$ is $d$-resilient if all its Fourier coefficients of degree at
most $d$ are zero, i.e., $f$ is uncorrelated with all low-degree parities. We
study the notion of approximate resilience of Boolean
functions, where we say that $f$ is $\alpha$-approximately $d$-resilient if
$f$ is $\alpha$-close to a $[-1,1]$-valued $d$-resilient function in $\ell_1$
distance. We show that approximate resilience essentially characterizes the
complexity of agnostic learning of a concept class $C$ over the uniform
distribution. Roughly speaking, if all functions in a class $C$ are far from
being $d$-resilient then $C$ can be learned agnostically in time $n^{O(d)}$ and,
conversely, if $C$ contains a function close to being $d$-resilient then
agnostic learning of $C$ in the statistical query (SQ) framework of Kearns has
complexity of at least $n^{\Omega(d)}$. This characterization is based on the
duality between $\ell_1$ approximation by degree-$d$ polynomials and
approximate $d$-resilience that we establish. In particular, it implies that
$\ell_1$ approximation by low-degree polynomials, known to be sufficient for
agnostic learning over product distributions, is in fact necessary.
Focusing on monotone Boolean functions, we exhibit the existence of
near-optimal $\alpha$-approximately
$\widetilde{\Omega}(\alpha\sqrt{n})$-resilient monotone functions for all
$\alpha > 0$. Prior to our work, it was conceivable even that every monotone
function is $\Omega(1)$-far from any $\Omega(\sqrt{n})$-resilient function. Furthermore, we
construct simple, explicit monotone functions based on ${\sf Tribes}$ and ${\sf CycleRun}$ that are close to highly resilient functions. Our constructions are
based on a fairly general resilience analysis and amplification. These
structural results, together with the characterization, imply nearly optimal
lower bounds for agnostic learning of monotone juntas.
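To unpack the definition of resilience (a small brute-force check, independent of the paper's constructions): $f$ is $d$-resilient exactly when it is orthogonal to every parity on at most $d$ variables, which can be verified directly for small $n$:

```python
from itertools import product, combinations

def fourier_coefficient(f, n, S):
    # \hat{f}(S) = E_x[f(x) * chi_S(x)] under the uniform distribution,
    # where chi_S(x) = (-1)^{sum_{i in S} x_i}.
    total = 0.0
    for x in product([0, 1], repeat=n):
        total += f(x) * (-1) ** sum(x[i] for i in S)
    return total / 2 ** n

def is_d_resilient(f, n, d):
    # d-resilient: all Fourier coefficients of degree at most d vanish,
    # i.e. f is uncorrelated with every parity on <= d variables.
    return all(abs(fourier_coefficient(f, n, S)) < 1e-12
               for k in range(d + 1)
               for S in combinations(range(n), k))

parity = lambda x: (-1) ** sum(x)      # +/-1-valued parity on all n bits
print(is_d_resilient(parity, 4, 3))    # True: parity is (n-1)-resilient
```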
The intersection of two halfspaces has high threshold degree
The threshold degree of a Boolean function f:{0,1}^n->{-1,+1} is the least
degree of a real polynomial p such that f(x)=sgn p(x). We construct two
halfspaces on {0,1}^n whose intersection has threshold degree Theta(sqrt n), an
exponential improvement on previous lower bounds. This solves an open problem
due to Klivans (2002) and rules out the use of perceptron-based techniques for
PAC learning the intersection of two halfspaces, a central unresolved challenge
in computational learning. We also prove that the intersection of two majority
functions has threshold degree Omega(log n), which is tight and settles a
conjecture of O'Donnell and Servedio (2003).
Our proof consists of two parts. First, we show that for any nonconstant
Boolean functions f and g, the intersection f(x) ∧ g(y) has threshold degree O(d)
if and only if ||f-F||_infty + ||g-G||_infty < 1 for some rational functions F,
G of degree O(d). Second, we settle the least degree required for approximating
a halfspace and a majority function to any given accuracy by rational
functions.
Our technique further allows us to make progress on Aaronson's challenge
(2008) and contribute strong direct product theorems for polynomial
representations of composed Boolean functions of the form F(f_1,...,f_n). In
particular, we give an improved lower bound on the approximate degree of the
AND-OR tree.
Comment: Full version of the FOCS'09 paper
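Threshold degree can be computed exhaustively for tiny functions, since sign-representation by a degree-d polynomial is a linear feasibility problem in the coefficients; a sketch of that reduction (illustrative only, using scipy):

```python
from itertools import product, combinations

import numpy as np
from scipy.optimize import linprog

def threshold_degree(f, n):
    # Least d such that some real polynomial p of degree <= d has
    # f(x) = sgn p(x) on {0,1}^n. For each d this is a linear
    # feasibility problem in p's coefficients: f(x) * p(x) >= 1 for
    # every x (the constant 1 just normalizes the strict sign condition).
    points = list(product([0, 1], repeat=n))
    for d in range(n + 1):
        monos = [S for k in range(d + 1) for S in combinations(range(n), k)]
        A = np.array([[-f(x) * np.prod([x[i] for i in S])
                       for S in monos] for x in points], dtype=float)
        res = linprog(np.zeros(len(monos)), A_ub=A,
                      b_ub=-np.ones(len(points)),
                      bounds=[(None, None)] * len(monos))
        if res.success:
            return d

# OR of 2 bits is sign-represented at degree 1; XOR needs degree 2.
OR = lambda x: 1 if (x[0] or x[1]) else -1
XOR = lambda x: 1 if (x[0] ^ x[1]) else -1
print(threshold_degree(OR, 2), threshold_degree(XOR, 2))  # -> 1 2
```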
Weighted Polynomial Approximations: Limits for Learning and Pseudorandomness
Polynomial approximations to boolean functions have led to many positive
results in computer science. In particular, polynomial approximations to the
sign function underlie algorithms for agnostically learning halfspaces, as well
as pseudorandom generators for halfspaces. In this work, we investigate the
limits of these techniques by proving inapproximability results for the sign
function.
Firstly, the polynomial regression algorithm of Kalai et al. (SIAM J. Comput.
2008) shows that halfspaces can be learned with respect to log-concave
distributions on $\mathbb{R}^n$ in the challenging agnostic learning model. The
power of this algorithm relies on the fact that under log-concave
distributions, halfspaces can be approximated arbitrarily well by low-degree
polynomials. We ask whether this technique can be extended beyond log-concave
distributions, and establish a negative result. We show that polynomials of any
degree cannot approximate the sign function to within arbitrarily low error for
a large class of non-log-concave distributions on the real line, including
those with densities proportional to $e^{-|x|^{0.99}}$.
Secondly, we investigate the derandomization of Chernoff-type concentration
inequalities. Chernoff-type tail bounds on sums of independent random variables
have pervasive applications in theoretical computer science. Schmidt et al.
(SIAM J. Discrete Math. 1995) showed that these inequalities can be established
for sums of random variables with only $O(\log(1/\delta))$-wise independence,
for a tail probability of $\delta$. We show that their results are tight up to
constant factors.
These results rely on techniques from weighted approximation theory, which
studies how well functions on the real line can be approximated by polynomials
under various distributions. We believe that these techniques will have further
applications in other areas of computer science.
Comment: 22 pages
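A quick numerical illustration of the contrast the first result draws (a sketch with assumed parameters, not the paper's construction): under a Gaussian, i.e. log-concave, weight, least-squares fits of the sign function improve steadily with the degree; the paper shows this behavior breaks down for suitable heavier-tailed weights.

```python
import numpy as np

# Fit sgn(x) by degree-d polynomials in least squares against samples
# from a standard Gaussian (a log-concave weight) and report the mean
# absolute error. It shrinks as d grows, the behavior that powers the
# polynomial-regression approach; for suitable non-log-concave weights
# no degree drives the error arbitrarily low.
rng = np.random.default_rng(0)
xs = rng.standard_normal(100_000)
ys = np.sign(xs)
for d in (1, 3, 7, 11):
    coeffs = np.polynomial.polynomial.polyfit(xs, ys, d)
    fit = np.polynomial.polynomial.polyval(xs, coeffs)
    print(d, float(np.mean(np.abs(fit - ys))))
```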
Oblivious Bounds on the Probability of Boolean Functions
This paper develops upper and lower bounds for the probability of Boolean
functions by treating multiple occurrences of variables as independent and
assigning them new individual probabilities. We call this approach dissociation
and give an exact characterization of optimal oblivious bounds, i.e., when the
new probabilities are chosen independent of the probabilities of all other
variables. Our motivation comes from the weighted model counting problem (or,
equivalently, the problem of computing the probability of a Boolean function),
which is #P-hard in general. By performing several dissociations, one can
transform a Boolean formula whose probability is difficult to compute into one
whose probability is easy to compute and that is guaranteed to provide an
upper or lower bound on the probability of the original formula, by choosing
appropriate probabilities for the dissociated variables. Our new bounds shed
light on the connection between previous relaxation-based and model-based
approximations and unify them as concrete choices in a larger design space. We
also show how our theory allows a standard relational database management
system (DBMS) to both upper and lower bound hard probabilistic queries in
guaranteed polynomial time.
Comment: 34 pages, 14 figures, supersedes: http://arxiv.org/abs/1105.281
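A tiny end-to-end example of dissociation (the lower-bound choice below is an assumed instance of the oblivious bounds, for a variable shared by two disjuncts): split the shared x in (x ∧ y) ∨ (x ∧ z) into independent copies and pick the copies' probabilities to force an upper or a lower bound.

```python
from itertools import product

def prob(formula, probs):
    # Exact probability of a Boolean formula over independent variables,
    # by enumerating all assignments (fine for tiny examples).
    names = sorted(probs)
    total = 0.0
    for bits in product([0, 1], repeat=len(names)):
        a = dict(zip(names, bits))
        w = 1.0
        for v in names:
            w *= probs[v] if a[v] else 1 - probs[v]
        if formula(a):
            total += w
    return total

p, py, pz = 0.6, 0.5, 0.7
phi = lambda a: (a['x'] and a['y']) or (a['x'] and a['z'])
# Dissociation: the two occurrences of x become independent copies.
phi_d = lambda a: (a['x1'] and a['y']) or (a['x2'] and a['z'])

exact = prob(phi, {'x': p, 'y': py, 'z': pz})
# Keeping each copy at the original probability p over-counts the
# positive correlation the shared x induced: an upper bound.
upper = prob(phi_d, {'x1': p, 'x2': p, 'y': py, 'z': pz})
# Assumed lower-bound choice for two copies across disjuncts:
# q = 1 - (1 - p)^(1/2), so both copies jointly fail as often as x did.
q = 1 - (1 - p) ** 0.5
lower = prob(phi_d, {'x1': q, 'x2': q, 'y': py, 'z': pz})
print(lower <= exact <= upper, (round(lower, 4), exact, round(upper, 4)))
```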