270 research outputs found
More data speeds up training time in learning halfspaces over sparse vectors
The increased availability of data in recent years has led several authors to
ask whether it is possible to use data as a {\em computational} resource. That
is, if more data is available, beyond the sample complexity limit, is it
possible to use the extra examples to speed up the computation time required to
perform the learning task?
We give the first positive answer to this question for a {\em natural
supervised learning problem} --- we consider agnostic PAC learning of
halfspaces over $3$-sparse vectors in $\{-1,1,0\}^n$. This class is
inefficiently learnable using $O(n/\epsilon^2)$ examples. Our main
contribution is a novel, non-cryptographic, methodology for establishing
computational-statistical gaps, which allows us to show that, under a widely
believed assumption that refuting random $\mathrm{3CNF}$ formulas is hard, it
is impossible to efficiently learn this class using only $O(n/\epsilon^2)$
examples. We further show that under stronger
hardness assumptions, even $O(n^{1.499}/\epsilon^2)$ examples do not
suffice. On the other hand, we show a new algorithm that learns this class
efficiently using $\tilde{\Omega}(n^2/\epsilon^2)$ examples. This
formally establishes the tradeoff between sample and computational complexity
for a natural supervised learning problem.
Comment: 13 pages
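The abstract states the sample/computation tradeoff but omits the algorithm itself. Purely as an illustration of the learning setup, not the paper's method, here is a minimal NumPy sketch that draws labeled 3-sparse examples in $\{-1,1,0\}^n$ and fits a halfspace with a least-squares surrogate; the dimensions, noise rate, and choice of learner are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 5000  # illustrative dimension and sample count

# Each example is 3-sparse in {-1, 0, 1}^n: three random coordinates set to +/-1.
X = np.zeros((m, n))
for i in range(m):
    idx = rng.choice(n, size=3, replace=False)
    X[i, idx] = rng.choice([-1.0, 1.0], size=3)

# Hypothetical target halfspace; 5% of labels flipped to mimic the agnostic setting.
w_star = rng.standard_normal(n)
y = np.sign(X @ w_star)
y[rng.random(m) < 0.05] *= -1

# Baseline learner (NOT the paper's algorithm): least-squares surrogate, then threshold.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print("empirical 0-1 error:", np.mean(np.sign(X @ w_hat) != y))
```

Given more examples than the information-theoretic minimum, surrogate-based learners of this kind can trade extra samples for reduced computation, which is the phenomenon the paper formalizes.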
Attribute-Efficient PAC Learning of Low-Degree Polynomial Threshold Functions with Nasty Noise
The concept class of low-degree polynomial threshold functions (PTFs) plays a
fundamental role in machine learning. In this paper, we study PAC learning of
$K$-sparse degree-$d$ PTFs on $\mathbb{R}^n$, where any such concept depends
only on $K$ out of $n$ attributes of the input. Our main contribution is a new
algorithm that runs in time $(nd/\epsilon)^{O(d)}$ and, under the Gaussian
marginal distribution, PAC learns the class up to error rate $\epsilon$ with
$O(\frac{K^{4d}}{\epsilon^{2d}} \cdot \log^{5d} n)$ samples, even when an
$\eta \leq O(\epsilon^d)$ fraction of them are corrupted by the nasty noise of
Bshouty et al. (2002), arguably the strongest corruption model. Prior to this
work, attribute-efficient robust algorithms had been established only for the
special case of sparse homogeneous halfspaces. Our key ingredients are: 1) a
structural result that translates the attribute sparsity to a sparsity pattern
of the Chow vector under the basis of Hermite polynomials, and 2) a novel
attribute-efficient robust Chow vector estimation algorithm which uses
exclusively a restricted Frobenius norm either to certify a good approximation
or to validate a sparsity-induced degree-$2d$ polynomial as a filter to detect
corrupted samples.
Comment: ICML 2023
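As background for ingredient 1), the Chow vector of a concept $f$ under the Gaussian marginal collects its correlations with the orthonormal Hermite basis, $\mathbb{E}[f(x)\,He_\alpha(x)]$. The sketch below is a plain Monte-Carlo estimator of these degree-$\le d$ coefficients; the example function, sample size, and all names are illustrative assumptions, and the paper's estimator is a robust procedure rather than naive averaging.

```python
import math
from itertools import combinations_with_replacement

import numpy as np

def hermite(k, t):
    """Orthonormal probabilists' Hermite polynomial he_k(t) / sqrt(k!)."""
    if k == 0:
        return np.ones_like(t)
    h_prev, h = np.ones_like(t), t.copy()
    for j in range(1, k):
        h_prev, h = h, t * h - j * h_prev  # he_{j+1} = t*he_j - j*he_{j-1}
    return h / math.sqrt(math.factorial(k))

def chow_vector(f, n, d, m=100_000, seed=0):
    """Monte-Carlo estimate of E[f(x) * He_alpha(x)], x ~ N(0, I_n), for all
    multisets alpha of at most d coordinates, where He_alpha is the product of
    univariate Hermite polynomials with the corresponding multiplicities."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((m, n))
    fx = f(X)
    chow = {}
    for deg in range(1, d + 1):
        for alpha in combinations_with_replacement(range(n), deg):
            basis = np.ones(m)
            for i in set(alpha):
                basis *= hermite(alpha.count(i), X[:, i])
            chow[alpha] = float(np.mean(fx * basis))
    return chow

# Hypothetical 2-sparse degree-2 PTF on R^5: only coordinates 0 and 1 matter,
# so only Chow entries touching those coordinates should be non-negligible.
f = lambda X: np.sign(X[:, 0] * X[:, 1] - 0.5)
top = sorted(chow_vector(f, n=5, d=2).items(), key=lambda kv: -abs(kv[1]))
print(top[:3])
```

If $f$ depends on only $K$ attributes, every Chow coefficient whose multiset touches an unused coordinate vanishes (the Hermite factor for that coordinate has mean zero and is independent of $f$), which is exactly the sparsity pattern the paper's structural result exploits.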
- …