
    Moment-Matching Polynomials

    We give a new framework for proving the existence of low-degree polynomial approximators for Boolean functions with respect to broad classes of non-product distributions. Our proofs use techniques related to the classical moment problem and deviate significantly from known Fourier-based methods, which require the underlying distribution to have some product structure. Our main application is the first polynomial-time algorithm for agnostically learning any function of a constant number of halfspaces with respect to any log-concave distribution (for any constant accuracy parameter). This result was not known even for the case of learning the intersection of two halfspaces without noise. Additionally, we show that in the "smoothed-analysis" setting, the above results hold with respect to distributions that have sub-exponential tails, a property satisfied by many natural and well-studied distributions in machine learning. Given that our algorithms can be implemented using Support Vector Machines (SVMs) with a polynomial kernel, these results give a rigorous theoretical explanation as to why many kernel methods work so well in practice.
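    The abstract's closing remark, that the polynomial approximator can be realized by an SVM with a polynomial kernel, is concrete enough to sketch. Below is a minimal, hypothetical illustration using scikit-learn: the data model (standard Gaussian marginals, a target that is the intersection of two random halfspaces), the kernel degree, and the sample sizes are all illustrative assumptions, not parameters taken from the paper.

```python
# Hypothetical sketch: a polynomial-kernel SVM learning a function of
# halfspaces under a log-concave distribution. The data model and all
# hyperparameters below are illustrative assumptions, not the paper's.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d = 5000, 20

# Log-concave marginal distribution: standard Gaussian.
X = rng.standard_normal((n, d))

# Target concept: intersection of two halfspaces, f(x) = 1 iff both hold.
w1, w2 = rng.standard_normal(d), rng.standard_normal(d)
y = ((X @ w1 >= 0) & (X @ w2 >= 0)).astype(int)

# Polynomial-kernel SVM; the kernel degree plays the role of the
# approximating polynomial's degree (chosen here ad hoc).
clf = SVC(kernel="poly", degree=4, coef0=1.0).fit(X[:4000], y[:4000])
print("held-out accuracy:", clf.score(X[4000:], y[4000:]))
```

    The point of the sketch is the correspondence, not the numbers: if a degree-k polynomial approximates the target well under the distribution, the degree-k kernel SVM has a low-error hypothesis in its search space.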

    More data speeds up training time in learning halfspaces over sparse vectors

    The increased availability of data in recent years has led several authors to ask whether it is possible to use data as a {\em computational} resource. That is, if more data is available, beyond the sample complexity limit, is it possible to use the extra examples to speed up the computation time required to perform the learning task? We give the first positive answer to this question for a {\em natural supervised learning problem}: we consider agnostic PAC learning of halfspaces over 3-sparse vectors in $\{-1,1,0\}^n$. This class is inefficiently learnable using $O(n/\epsilon^2)$ examples. Our main contribution is a novel, non-cryptographic methodology for establishing computational-statistical gaps, which allows us to show that, under the widely believed assumption that refuting random $\mathrm{3CNF}$ formulas is hard, it is impossible to efficiently learn this class using only $O(n/\epsilon^2)$ examples. We further show that under stronger hardness assumptions, even $O(n^{1.499}/\epsilon^2)$ examples do not suffice. On the other hand, we show a new algorithm that learns this class efficiently using $\tilde{\Omega}(n^2/\epsilon^2)$ examples. This formally establishes the tradeoff between sample and computational complexity for a natural supervised learning problem.
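    One standard way to see how extra examples can buy computation time is improper learning over a larger, efficiently optimizable class. The sketch below lifts each 3-sparse example to its degree-2 monomial features, so a convex linear learner runs in polynomial time over roughly $n^2$ features, at the price of needing on the order of $n^2/\epsilon^2$ examples. This is a plausible reading of the sample-versus-computation tradeoff, not necessarily the paper's exact algorithm; the sparse data generator and the choice of SGDClassifier are illustrative assumptions.

```python
# Hypothetical illustration of trading samples for computation: lift each
# 3-sparse example to degree-2 monomials, then run an efficient linear
# learner over the ~n^2-dimensional lifted space. Not the paper's algorithm.
import numpy as np
from itertools import combinations
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
n_dim, n_samples = 30, 20000

# 3-sparse vectors in {-1, 1, 0}^n: three random coordinates set to +/-1.
X = np.zeros((n_samples, n_dim))
for row in X:
    idx = rng.choice(n_dim, size=3, replace=False)
    row[idx] = rng.choice([-1.0, 1.0], size=3)

# A target halfspace over the sparse vectors (the agnostic setting would
# additionally corrupt some labels; omitted here for brevity).
w = rng.standard_normal(n_dim)
y = np.sign(X @ w + 1e-9)

def lift(X):
    """Degree-2 monomial features: all x_i and all x_i * x_j with i < j."""
    pairs = list(combinations(range(X.shape[1]), 2))
    quad = np.stack([X[:, i] * X[:, j] for i, j in pairs], axis=1)
    return np.hstack([X, quad])  # dimension n + n(n-1)/2 = O(n^2)

Z = lift(X)
clf = SGDClassifier(loss="hinge").fit(Z[:16000], y[:16000])
print("held-out accuracy:", clf.score(Z[16000:], y[16000:]))
```

    The lifted dimension is what drives the sample requirement up to roughly $n^2$, which is exactly the regime where the abstract's efficient algorithm operates.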