32 research outputs found

    The Correct Exponent for the Gotsman-Linial Conjecture

    We prove a new bound on the average sensitivity of polynomial threshold functions. In particular, we show that a polynomial threshold function of degree $d$ in at most $n$ variables has average sensitivity at most $\sqrt{n}(\log(n))^{O(d\log(d))}2^{O(d^2\log(d))}$. For fixed $d$ the exponent in terms of $n$ in this bound is known to be optimal. This bound makes significant progress towards the Gotsman-Linial Conjecture, which would put the correct bound at $\Theta(d\sqrt{n})$.
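    Average sensitivity here is the expected number of coordinates whose flip changes the function's value, over a uniform random input. A minimal brute-force sketch of that definition (function names are mine; majority is the simplest degree-1 polynomial threshold function):

    ```python
    from itertools import product

    def average_sensitivity(f, n):
        """E_x[#{i : f(x) != f(x with coordinate i flipped)}] over uniform x in {-1,1}^n."""
        total = 0
        for x in product([-1, 1], repeat=n):
            fx = f(x)
            for i in range(n):
                y = list(x)
                y[i] = -y[i]  # flip coordinate i
                if f(tuple(y)) != fx:
                    total += 1
        return total / 2 ** n

    # Majority is a degree-1 PTF: the sign of a linear polynomial.
    def majority(x):
        return 1 if sum(x) > 0 else -1

    print(average_sensitivity(majority, 5))  # 1.875 = 5 * C(4,2) / 2^4
    ```

    For majority on $n$ bits this grows like $\sqrt{n}$, matching the $d=1$ case of the conjectured $\Theta(d\sqrt{n})$ bound.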

    The Average Sensitivity of an Intersection of Half Spaces

    We prove new bounds on the average sensitivity of the indicator function of an intersection of $k$ halfspaces. In particular, we prove the optimal bound of $O(\sqrt{n\log(k)})$. This generalizes a result of Nazarov, who proved the analogous result in the Gaussian case, and improves upon a result of Harsha, Klivans and Meka. Furthermore, our result has implications for the runtime required to learn intersections of halfspaces.
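    As a small sanity check on scale, one can compute the average sensitivity of an intersection of $k=2$ halfspaces exactly by brute force; the weight vectors below are arbitrary illustrative choices, not taken from the paper:

    ```python
    from itertools import product

    def average_sensitivity(f, n):
        """Expected number of sensitive coordinates over uniform x in {-1,1}^n."""
        total = 0
        for x in product([-1, 1], repeat=n):
            fx = f(x)
            for i in range(n):
                y = list(x)
                y[i] = -y[i]
                if f(tuple(y)) != fx:
                    total += 1
        return total / 2 ** n

    # Indicator of an intersection of k = 2 halfspaces {x : <w, x> > 0};
    # the weights are hypothetical, chosen only for illustration.
    halfspaces = [(3, 1, 1, 1, 1, 1), (1, 1, 1, 1, 1, 3)]

    def intersection(x):
        return 1 if all(sum(w * xi for w, xi in zip(ws, x)) > 0 for ws in halfspaces) else -1

    print(average_sensitivity(intersection, 6))
    ```

    The exact value for any such instance stays well below $n$, consistent with the $O(\sqrt{n\log(k)})$ regime the abstract describes.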

    Special Issue “Conference on Computational Complexity 2013” Guest editor’s foreword


    Geometric and o-minimal Littlewood-Offord problems

    The classical Erdős-Littlewood-Offord theorem says that for nonzero vectors $a_1,\dots,a_n\in \mathbb{R}^d$, any $x\in \mathbb{R}^d$, and uniformly random $(\xi_1,\dots,\xi_n)\in\{-1,1\}^n$, we have $\Pr(a_1\xi_1+\dots+a_n\xi_n=x)=O(n^{-1/2})$. In this paper we show that $\Pr(a_1\xi_1+\dots+a_n\xi_n\in S)\le n^{-1/2+o(1)}$ whenever $S$ is definable with respect to an o-minimal structure (for example, this holds when $S$ is any algebraic hypersurface), under the necessary condition that $S$ does not contain a line segment. We also obtain an inverse theorem in this setting.
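    The $O(n^{-1/2})$ rate in the classical theorem is attained when all the $a_i$ are equal: the point probability is then a central binomial term, which this sketch computes exactly and compares to its Stirling approximation:

    ```python
    from math import comb, pi, sqrt

    # Extremal case of the Erdős-Littlewood-Offord bound: a_1 = ... = a_n = 1.
    # For even n, Pr(xi_1 + ... + xi_n = 0) = C(n, n/2) / 2^n,
    # which is Theta(n^{-1/2}); Stirling gives ~ sqrt(2 / (pi * n)).
    for n in (10, 100, 1000):
        point_prob = comb(n, n // 2) / 2 ** n
        print(n, point_prob, sqrt(2 / (pi * n)))
    ```

    The paper's result replaces the single point $x$ with any line-segment-free o-minimal set $S$, at the cost of an $n^{o(1)}$ factor.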

    What Circuit Classes Can Be Learned with Non-Trivial Savings?

    Despite decades of intensive research, efficient - or even sub-exponential time - distribution-free PAC learning algorithms are not known for many important Boolean function classes. In this work we suggest a new perspective on these learning problems, inspired by a surge of recent research in complexity theory, in which the goal is to determine whether and how much of a savings over a naive $2^n$ runtime can be achieved. We establish a range of exploratory results towards this end. In more detail: (1) We first observe that a simple approach building on known uniform-distribution learning results gives non-trivial distribution-free learning algorithms for several well-studied classes, including AC0, arbitrary functions of a few linear threshold functions (LTFs), and AC0 augmented with mod_p gates. (2) Next we present an approach, based on the method of random restrictions from circuit complexity, which can be used to obtain several distribution-free learning algorithms that do not appear to be achievable by approach (1) above. The results achieved in this way include learning algorithms with non-trivial savings for LTF-of-AC0 circuits and improved savings for learning parity-of-AC0 circuits. (3) Finally, our third contribution is a generic technique for converting lower bounds proved using Nečiporuk's method into learning algorithms with non-trivial savings. This technique, which is the most involved of our three approaches, yields distribution-free learning algorithms for a range of classes where previously even non-trivial uniform-distribution learning algorithms were not known; these classes include full-basis formulas, branching programs, and span programs, up to some fixed polynomial size.