Moment-Matching Polynomials
We give a new framework for proving the existence of low-degree, polynomial
approximators for Boolean functions with respect to broad classes of
non-product distributions. Our proofs use techniques related to the classical
moment problem and deviate significantly from known Fourier-based methods,
which require the underlying distribution to have some product structure.
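To make the object of study concrete, here is a toy illustration (not the paper's framework) of what a low-degree polynomial representation of a Boolean function looks like: over {-1,1}^2, the AND function (with ±1 outputs) is exactly a degree-2 polynomial, namely its Fourier expansion. Fourier arguments of this kind are exactly what requires product structure; the paper's moment-matching approach avoids them.

```python
def and_pm(x, y):
    """AND on {-1,+1} inputs, with output in {-1,+1}."""
    return 1 if (x == 1 and y == 1) else -1

def poly_rep(x, y):
    """Degree-2 polynomial: the exact Fourier expansion of AND over {-1,1}^2."""
    return (-1 + x + y + x * y) / 2

# The polynomial agrees with AND on every point of the cube.
for x in (-1, 1):
    for y in (-1, 1):
        assert poly_rep(x, y) == and_pm(x, y)
print("degree-2 polynomial represents AND exactly on {-1,1}^2")
```

For AND the representation is exact; the paper's results concern *approximate* low-degree representations, and under non-product distributions where this Fourier-style calculation no longer applies.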
Our main application is the first polynomial-time algorithm for agnostically
learning any function of a constant number of halfspaces with respect to any
log-concave distribution (for any constant accuracy parameter). This result was
not known even for the case of learning the intersection of two halfspaces
without noise. Additionally, we show that in the "smoothed-analysis" setting,
the above results hold with respect to distributions that have sub-exponential
tails, a property satisfied by many natural and well-studied distributions in
machine learning.
Given that our algorithms can be implemented using Support Vector Machines
(SVMs) with a polynomial kernel, these results give a rigorous theoretical
explanation as to why many kernel methods work so well in practice.
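As a hedged sketch of the connection to polynomial kernels (illustrative only, not the paper's algorithm): below, a kernel perceptron, a simple stand-in for a kernelized SVM, with a degree-2 polynomial kernel learns a function of two halfspaces. The target here is the XNOR of the halfspaces x0 > 0 and x1 > 0, which equals sign(x0 * x1) and is therefore realizable by a degree-2 polynomial threshold function, so the data is separable in the kernel's feature space. All names and parameters are made up for the illustration.

```python
import random

def poly_kernel(u, v, degree=2):
    """Inhomogeneous polynomial kernel (1 + <u, v>)^degree."""
    return (1 + sum(a * b for a, b in zip(u, v))) ** degree

def target(x):
    """A function of two halfspaces (their XNOR): +1 iff the signs of
    x0 and x1 agree. Equivalently, sign(x0 * x1)."""
    return 1 if (x[0] > 0) == (x[1] > 0) else -1

random.seed(0)
points = []
while len(points) < 100:
    x = (random.uniform(-1, 1), random.uniform(-1, 1))
    if abs(x[0] * x[1]) >= 0.1:  # keep a margin so the perceptron converges
        points.append(x)
labels = [target(x) for x in points]

# Kernel perceptron in dual form: alpha[j] counts mistakes on example j.
# With margin 0.1 the classical mistake bound (R/gamma)^2 is under 500,
# so 500 epochs guarantee a mistake-free pass.
alpha = [0] * len(points)
for _ in range(500):
    mistakes = 0
    for i, x in enumerate(points):
        score = sum(alpha[j] * labels[j] * poly_kernel(points[j], x)
                    for j in range(len(points)) if alpha[j])
        if labels[i] * score <= 0:
            alpha[i] += 1
            mistakes += 1
    if mistakes == 0:
        break

train_errors = sum(
    1 for i, x in enumerate(points)
    if labels[i] * sum(alpha[j] * labels[j] * poly_kernel(points[j], x)
                       for j in range(len(points)) if alpha[j]) <= 0)
print("training errors:", train_errors)
```

The point of the sketch is only that a degree-2 kernel suffices once the target is expressible as a low-degree polynomial threshold function; the paper's contribution is showing that such low-degree approximators exist for functions of constantly many halfspaces under broad distribution classes.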
More data speeds up training time in learning halfspaces over sparse vectors
The increased availability of data in recent years has led several authors to
ask whether it is possible to use data as a {\em computational} resource. That
is, if more data is available, beyond the sample complexity limit, is it
possible to use the extra examples to speed up the computation time required to
perform the learning task?
We give the first positive answer to this question for a {\em natural
supervised learning problem} --- we consider agnostic PAC learning of
halfspaces over -sparse vectors in . This class is
inefficiently learnable using examples. Our main
contribution is a novel, non-cryptographic, methodology for establishing
computational-statistical gaps, which allows us to show that, under a widely
believed assumption that refuting random formulas is hard, it
is impossible to efficiently learn this class using only
examples. We further show that under stronger
hardness assumptions, even examples do not
suffice. On the other hand, we show a new algorithm that learns this class
efficiently using examples. This
formally establishes the tradeoff between sample and computational complexity
for a natural supervised learning problem.Comment: 13 page