Tight Bounds on Approximation and Learning of Self-Bounding Functions
We study the complexity of learning and approximation of self-bounding functions over the uniform distribution on the Boolean hypercube $\{0,1\}^n$. Informally, a function $f : \{0,1\}^n \to \mathbb{R}$ is self-bounding if for every $x \in \{0,1\}^n$, $f(x)$ upper bounds the sum of all the marginal decreases in the value of the function at $x$.
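One standard way to make this precise (in notation introduced here for illustration, since the condition is stated only informally): writing $x^{i \to b}$ for $x$ with its $i$-th coordinate set to $b$, the condition reads
$$\sum_{i=1}^{n} \Bigl( f(x) - \min_{b \in \{0,1\}} f\bigl(x^{i \to b}\bigr) \Bigr) \;\le\; f(x) \qquad \text{for every } x \in \{0,1\}^n,$$
i.e., the marginal decreases $f(x) - \min_{b} f(x^{i \to b})$ sum to at most $f(x)$ itself.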
Self-bounding functions include such well-known classes of functions as submodular and fractionally-subadditive (XOS) functions. They were introduced by Boucheron et al. (2000) in the context of concentration of measure inequalities. Our main result is a nearly tight $\ell_1$-approximation of self-bounding functions by low-degree juntas. Specifically, all self-bounding functions can be $\epsilon$-approximated in $\ell_1$ by a polynomial of degree $\tilde{O}(1/\epsilon)$ over $2^{\tilde{O}(1/\epsilon)}$ variables. We show that both the degree and junta size are optimal up to logarithmic terms. Previous techniques considered the stronger $\ell_2$-approximation and proved nearly tight bounds of $\Theta(1/\epsilon^2)$ on the degree and $2^{\Theta(1/\epsilon^2)}$ on the number of variables.
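For concreteness, with $x$ uniform over $\{0,1\}^n$, these two notions of approximation of $f$ by a polynomial $p$ are the standard norms (background definitions, not specific to this paper):
$$\|f - p\|_1 = \mathbb{E}_x\bigl[|f(x) - p(x)|\bigr] \le \epsilon, \qquad \|f - p\|_2 = \Bigl(\mathbb{E}_x\bigl[(f(x) - p(x))^2\bigr]\Bigr)^{1/2} \le \epsilon.$$
By Jensen's inequality $\|g\|_1 \le \|g\|_2$, so $\ell_2$-approximation is the stronger requirement; relaxing to $\ell_1$ is what makes the quadratically better dependence on $1/\epsilon$ possible.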
Our bounds rely on the analysis of the noise stability of self-bounding functions together with a stronger connection between noise stability and $\ell_1$-approximation by low-degree polynomials. This technique can also be used to obtain tighter bounds on $\ell_1$-approximation by low-degree polynomials and a faster learning algorithm for halfspaces.
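Here noise stability is the standard notion over the Boolean cube (a background definition, not taken from the abstract): for a noise parameter $\rho \in [0,1]$,
$$\mathrm{Stab}_\rho[f] \;=\; \mathbb{E}_{x,\, y \sim_\rho x}\bigl[f(x)\, f(y)\bigr] \;=\; \sum_{S \subseteq [n]} \rho^{|S|}\, \hat{f}(S)^2,$$
where $y \sim_\rho x$ means each coordinate $y_i$ independently equals $x_i$ with probability $\frac{1+\rho}{2}$, and $\hat{f}(S)$ are the Fourier coefficients of $f$. High noise stability forces the Fourier mass of $f$ onto low-degree terms, which is what enables approximation by low-degree polynomials.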
These results lead to improved, and in several cases almost tight, bounds for
PAC and agnostic learning of self-bounding functions relative to the uniform
distribution. In particular, assuming hardness of learning juntas, we show that
PAC and agnostic learning of self-bounding functions have complexity of
$n^{\tilde{\Theta}(1/\epsilon)}$.