
    More data speeds up training time in learning halfspaces over sparse vectors

    The increased availability of data in recent years has led several authors to ask whether it is possible to use data as a {\em computational} resource. That is, if more data is available, beyond the sample complexity limit, is it possible to use the extra examples to speed up the computation time required to perform the learning task? We give the first positive answer to this question for a {\em natural supervised learning problem}: we consider agnostic PAC learning of halfspaces over $3$-sparse vectors in $\{-1,1,0\}^n$. This class is inefficiently learnable using $O\left(n/\epsilon^2\right)$ examples. Our main contribution is a novel, non-cryptographic methodology for establishing computational-statistical gaps, which allows us to show that, under the widely believed assumption that refuting random $\mathrm{3CNF}$ formulas is hard, it is impossible to efficiently learn this class using only $O\left(n/\epsilon^2\right)$ examples. We further show that under stronger hardness assumptions, even $O\left(n^{1.499}/\epsilon^2\right)$ examples do not suffice. On the other hand, we show a new algorithm that learns this class efficiently using $\tilde{\Omega}\left(n^2/\epsilon^2\right)$ examples. This formally establishes the tradeoff between sample and computational complexity for a natural supervised learning problem. Comment: 13 pages
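    To make the sample-for-computation tradeoff concrete, here is a minimal, hypothetical sketch of the generic idea, not the authors' algorithm: when roughly $n^2/\epsilon^2$ examples are available, one can lift each 3-sparse example into an $O(n^2)$-dimensional monomial feature space and run a polynomial-time convex surrogate for ERM, whereas with only $O(n/\epsilon^2)$ examples no efficient learner is believed to exist. The data generator, the degree-2 lifting, and the least-squares surrogate below are all illustrative choices.

```python
# Illustrative sketch (not the paper's algorithm): trade extra samples for
# computation by learning a linear rule over a lifted O(n^2)-dimensional
# feature map of 3-sparse inputs in {-1, 0, 1}^n.
import numpy as np
from itertools import combinations

def lift(X):
    """Map each row x to (x, all pairwise products x_i * x_j)."""
    n = X.shape[1]
    pairs = np.array([X[:, i] * X[:, j]
                      for i, j in combinations(range(n), 2)]).T
    return np.hstack([X, pairs])

rng = np.random.default_rng(0)
n, m = 20, 4000  # sample budget on the order of n^2 (illustrative scale only)

# 3-sparse examples in {-1, 0, 1}^n labeled by a hidden halfspace.
X = np.zeros((m, n))
for row in X:
    row[rng.choice(n, size=3, replace=False)] = rng.choice([-1.0, 1.0], size=3)
w_star = rng.standard_normal(n)
y = np.sign(X @ w_star + 1e-9)  # tiny offset avoids sign(0)

# Polynomial-time ERM surrogate: least squares over the lifted features.
Z = lift(X)
w, *_ = np.linalg.lstsq(Z, y, rcond=None)
print("training error:", np.mean(np.sign(Z @ w) != y))
```

    The lifted feature space has dimension $\Theta(n^2)$, which is why this simple convex approach only becomes statistically viable once the sample budget grows to roughly $n^2/\epsilon^2$.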

    Attribute-Efficient PAC Learning of Low-Degree Polynomial Threshold Functions with Nasty Noise

    The concept class of low-degree polynomial threshold functions (PTFs) plays a fundamental role in machine learning. In this paper, we study PAC learning of $K$-sparse degree-$d$ PTFs on $\mathbb{R}^n$, where any such concept depends only on $K$ out of the $n$ attributes of the input. Our main contribution is a new algorithm that runs in time $\left(nd/\epsilon\right)^{O(d)}$ and, under the Gaussian marginal distribution, PAC learns the class up to error rate $\epsilon$ with $O\left(\frac{K^{4d}}{\epsilon^{2d}} \cdot \log^{5d} n\right)$ samples, even when an $\eta \leq O(\epsilon^d)$ fraction of them are corrupted by the nasty noise of Bshouty et al. (2002), possibly the strongest corruption model. Prior to this work, attribute-efficient robust algorithms had been established only for the special case of sparse homogeneous halfspaces. Our key ingredients are: 1) a structural result that translates the attribute sparsity to a sparsity pattern of the Chow vector under the basis of Hermite polynomials, and 2) a novel attribute-efficient robust Chow vector estimation algorithm which uses exclusively a restricted Frobenius norm to either certify a good approximation or to validate a sparsity-induced degree-$2d$ polynomial as a filter to detect corrupted samples. Comment: ICML 202
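    As a point of reference for ingredient 2), the Chow vector here collects the correlations $\mathbf{E}[f(x) \cdot He_\alpha(x)]$ of the labeled concept with the orthonormal Hermite basis. Below is a minimal sketch of the plain, non-robust empirical estimate of the degree-$\leq 2$ coefficients under a Gaussian marginal; the paper's actual contribution, the robust attribute-efficient estimator with the degree-$2d$ filtering step, is omitted, and the sample generator and example concept are illustrative choices.

```python
# Illustrative sketch: plain empirical estimation of low-degree Hermite
# "Chow" coefficients E[y * He_alpha(x)] under a Gaussian marginal.
# The paper's robust, attribute-efficient estimator is NOT implemented here.
import numpy as np
from itertools import combinations

def chow_degree_le2(X, y):
    """Empirical Chow coefficients for |alpha| <= 2, using orthonormal
    probabilists' Hermite polynomials he_1(t) = t, he_2(t) = (t^2-1)/sqrt(2)."""
    n = X.shape[1]
    c1 = (y[:, None] * X).mean(axis=0)                             # E[y * x_i]
    c2_diag = (y[:, None] * (X**2 - 1) / np.sqrt(2)).mean(axis=0)  # E[y * he_2(x_i)]
    c2_cross = np.array([np.mean(y * X[:, i] * X[:, j])            # E[y * x_i * x_j]
                         for i, j in combinations(range(n), 2)])
    return c1, c2_diag, c2_cross

rng = np.random.default_rng(1)
m, n = 20000, 10
X = rng.standard_normal((m, n))
# Example concept: a 2-sparse degree-2 PTF depending on attributes 0 and 1 only.
y = np.sign(X[:, 0] * X[:, 1] + 0.5 * X[:, 0] - 0.1)

c1, c2_diag, c2_cross = chow_degree_le2(X, y)
print(np.round(c1, 2))  # mass concentrates on the two relevant attributes
```

    Under the structural result in ingredient 1), attribute sparsity of the target PTF induces sparsity in exactly this coefficient vector, which is what the restricted Frobenius-norm certification is designed to exploit.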