
    Weighted Polynomial Approximations: Limits for Learning and Pseudorandomness

    Polynomial approximations to Boolean functions have led to many positive results in computer science. In particular, polynomial approximations to the sign function underlie algorithms for agnostically learning halfspaces, as well as pseudorandom generators for halfspaces. In this work, we investigate the limits of these techniques by proving inapproximability results for the sign function.

    Firstly, the polynomial regression algorithm of Kalai et al. (SIAM J. Comput. 2008) shows that halfspaces can be learned with respect to log-concave distributions on $\mathbb{R}^n$ in the challenging agnostic learning model. The power of this algorithm relies on the fact that under log-concave distributions, halfspaces can be approximated arbitrarily well by low-degree polynomials. We ask whether this technique can be extended beyond log-concave distributions, and establish a negative result. We show that polynomials of any degree cannot approximate the sign function to within arbitrarily low error for a large class of non-log-concave distributions on the real line, including those with densities proportional to $\exp(-|x|^{0.99})$.

    Secondly, we investigate the derandomization of Chernoff-type concentration inequalities. Chernoff-type tail bounds on sums of independent random variables have pervasive applications in theoretical computer science. Schmidt et al. (SIAM J. Discrete Math. 1995) showed that these inequalities can be established for sums of random variables with only $O(\log(1/\delta))$-wise independence, for a tail probability of $\delta$. We show that their results are tight up to constant factors.

    These results rely on techniques from weighted approximation theory, which studies how well functions on the real line can be approximated by polynomials under various distributions. We believe that these techniques will have further applications in other areas of computer science.

    Comment: 22 pages
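    To make the first result concrete, it helps to name the quantity being bounded. The formulation below is schematic (the paper's exact norm and constants may differ), and the notation $e_d(D)$ is introduced here for illustration only: it measures the best error achievable by degree-$d$ polynomials against the sign function under a distribution $D$ on the real line.

        \[
          e_d(D) \;=\; \inf_{\deg p \,\le\, d} \;\mathbb{E}_{x \sim D}\bigl[\,\lvert \operatorname{sign}(x) - p(x) \rvert\,\bigr]
        \]

    For log-concave $D$ (e.g., the standard Gaussian), $e_d(D) \to 0$ as $d \to \infty$, which is what the $L_1$ polynomial regression approach of Kalai et al. exploits; the negative result above says that for a distribution $D_{0.99}$ with density proportional to $\exp(-|x|^{0.99})$, $\inf_d e_d(D_{0.99})$ stays bounded away from zero.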

    A Nearly Optimal Lower Bound on the Approximate Degree of AC$^0$

    The approximate degree of a Boolean function $f \colon \{-1,1\}^n \rightarrow \{-1,1\}$ is the least degree of a real polynomial that approximates $f$ pointwise to error at most $1/3$. We introduce a generic method for increasing the approximate degree of a given function, while preserving its computability by constant-depth circuits. Specifically, we show how to transform any Boolean function $f$ with approximate degree $d$ into a function $F$ on $O(n \cdot \operatorname{polylog}(n))$ variables with approximate degree at least $D = \Omega(n^{1/3} \cdot d^{2/3})$. In particular, if $d = n^{1-\Omega(1)}$, then $D$ is polynomially larger than $d$. Moreover, if $f$ is computed by a polynomial-size Boolean circuit of constant depth, then so is $F$.

    By recursively applying our transformation, for any constant $\delta > 0$ we exhibit an AC$^0$ function of approximate degree $\Omega(n^{1-\delta})$. This improves over the best previous lower bound of $\Omega(n^{2/3})$ due to Aaronson and Shi (J. ACM 2004), and nearly matches the trivial upper bound of $n$ that holds for any function. Our lower bounds also apply to (quasipolynomial-size) DNFs of polylogarithmic width.

    We describe several applications of these results. We give:

    * For any constant $\delta > 0$, an $\Omega(n^{1-\delta})$ lower bound on the quantum communication complexity of a function in AC$^0$.
    * A Boolean function $f$ with approximate degree at least $C(f)^{2-o(1)}$, where $C(f)$ is the certificate complexity of $f$. This separation is optimal up to the $o(1)$ term in the exponent.
    * Improved secret sharing schemes with reconstruction procedures in AC$^0$.

    Comment: 40 pages, 1 figure
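    As a small, runnable illustration of the notion of approximate degree defined above (not of the paper's construction), the Python sketch below computes it exactly for functions on a handful of variables by linear programming: for each candidate degree $d$, an LP finds the degree-$d$ multilinear polynomial minimizing the worst-case pointwise error. The names `approx_degree`, `or4`, and `parity4` are ad hoc, and NumPy/SciPy are assumed.

        # Illustrative sketch: exact approximate degree of a Boolean function on few
        # variables via linear programming (feasible only for tiny n).
        from itertools import combinations, product

        import numpy as np
        from scipy.optimize import linprog


        def approx_degree(f, n, error=1 / 3):
            """Least d such that some degree-d polynomial is within `error` of f pointwise."""
            points = list(product([-1, 1], repeat=n))
            fvals = np.array([f(x) for x in points], dtype=float)
            for d in range(n + 1):
                # Monomials chi_S(x) = prod_{i in S} x_i over all subsets S with |S| <= d.
                monomials = [S for k in range(d + 1) for S in combinations(range(n), k)]
                A = np.array([[np.prod([x[i] for i in S]) for S in monomials] for x in points])
                m = len(monomials)
                # Variables: the m monomial coefficients, then the error eps.  Minimize eps
                # subject to |A @ coeffs - fvals| <= eps at every point of the cube.
                c = np.zeros(m + 1)
                c[-1] = 1.0
                ones = np.ones((len(points), 1))
                A_ub = np.block([[A, -ones], [-A, -ones]])
                b_ub = np.concatenate([fvals, -fvals])
                res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                              bounds=[(None, None)] * m + [(0, None)])
                if res.status == 0 and res.fun <= error + 1e-9:
                    return d
            return n


        # Illustrative examples in the {-1,1} convention (-1 encodes "true").
        or4 = lambda x: -1.0 if -1 in x else 1.0
        parity4 = lambda x: float(np.prod(x))
        print("approximate degree of OR_4:    ", approx_degree(or4, 4))
        print("approximate degree of PARITY_4:", approx_degree(parity4, 4))  # parity needs full degree n

    Such brute-force checks only scale to tiny $n$; the point of the paper is an asymptotic lower bound, where no exhaustive computation of this kind is possible.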