Approximate resilience, monotonicity, and the complexity of agnostic learning
A function f is d-resilient if all its Fourier coefficients of degree at
most d are zero, i.e., f is uncorrelated with all low-degree parities. We
study the notion of approximate resilience of Boolean functions, where we
say that f is alpha-approximately d-resilient if f is alpha-close to a
[-1,1]-valued d-resilient function in l_1 distance. We show that
approximate resilience essentially characterizes the complexity of
agnostic learning of a concept class C over the uniform distribution.
Roughly speaking, if all functions in a class C are far from being
d-resilient then C can be learned agnostically in time n^{O(d)} and,
conversely, if C contains a function close to being d-resilient then
agnostic learning of C in the statistical query (SQ) framework of Kearns
has complexity of at least n^{Omega(d)}. This characterization is based on
the duality between l_1 approximation by degree-d polynomials and
approximate d-resilience that we establish. In particular, it implies that
l_1 approximation by low-degree polynomials, known to be sufficient for
agnostic learning over product distributions, is in fact necessary.
Focusing on monotone Boolean functions, we exhibit the existence of
near-optimal alpha-approximately Omega(alpha sqrt{n})-resilient monotone
functions for all alpha > 0. Prior to our work, it was conceivable even
that every monotone function is Omega(1)-far from any omega(1)-resilient
function. Furthermore, we construct simple, explicit monotone functions
based on Tribes and CycleRun that are close to highly resilient functions.
Our constructions are based on a fairly general resilience analysis and
amplification. These structural results, together with the
characterization, imply nearly optimal lower bounds for agnostic learning
of monotone juntas.
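The resilience condition defined in this abstract can be checked directly, if inefficiently, from the definition. A brute-force Python sketch (illustrative, not from the paper) that computes Fourier coefficients of a Boolean function f: {0,1}^n -> {-1,+1} and tests d-resilience:

```python
# Brute-force check of d-resilience via Fourier coefficients.
# hat{f}(S) = E_x[f(x) * chi_S(x)], where chi_S(x) = (-1)^{sum_{i in S} x_i}.
from itertools import product, combinations

def fourier_coefficient(f, n, S):
    """Fourier coefficient of f on the subset S of [n]."""
    total = 0
    for x in product([0, 1], repeat=n):
        chi = (-1) ** sum(x[i] for i in S)
        total += f(x) * chi
    return total / 2 ** n

def is_d_resilient(f, n, d):
    """True iff every Fourier coefficient of degree <= d vanishes."""
    for k in range(d + 1):
        for S in combinations(range(n), k):
            if abs(fourier_coefficient(f, n, S)) > 1e-12:
                return False
    return True

# Parity on n bits equals chi_{[n]}, so it is (n-1)-resilient but not n-resilient.
parity = lambda x: (-1) ** sum(x)
print(is_d_resilient(parity, 4, 3))   # True
print(is_d_resilient(parity, 4, 4))   # False
```

The exponential enumeration here is only for illustration; the point of the abstract is precisely that the *approximate* version of this property governs how fast agnostic learning can be.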
Learning with Errors is easy with quantum samples
Learning with Errors is one of the fundamental problems in computational
learning theory and has in recent years become a cornerstone of
post-quantum cryptography. In this work, we study the quantum sample complexity
of Learning with Errors and show that there exists an efficient quantum
learning algorithm (with polynomial sample and time complexity) for the
Learning with Errors problem where the error distribution is the one used in
cryptography. While our quantum learning algorithm does not break the LWE-based
encryption schemes proposed in the cryptography literature, it does have some
interesting implications for cryptography: first, when building an LWE-based
scheme, one needs to be careful about the access to the public-key generation
algorithm that is given to the adversary; second, our algorithm shows a
possible way of attacking LWE-based encryption by using classical samples
to approximate the quantum sample state, since our quantum learning
algorithm could then solve LWE.
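For concreteness, a classical LWE sample is a pair (a, <a,s> + e mod q) with a uniform and e drawn from a narrow error distribution. A minimal Python sketch (illustrative, not the paper's quantum algorithm; parameter values are arbitrary) using a rounded-Gaussian error:

```python
# Generating classical LWE samples (a, <a,s> + e mod q).
import random

def lwe_sample(s, q, sigma):
    """One LWE sample for secret s in Z_q^n with rounded-Gaussian error."""
    n = len(s)
    a = [random.randrange(q) for _ in range(n)]   # uniform a in Z_q^n
    e = round(random.gauss(0, sigma))             # small rounded-Gaussian error
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

random.seed(0)
q, n, sigma = 97, 8, 1.0
s = [random.randrange(q) for _ in range(n)]   # the secret a learner must recover
a, b = lwe_sample(s, q, sigma)
```

A quantum example oracle instead returns a superposition over samples; the abstract's point is that access to such states makes recovering s efficient, whereas recovering s from classical samples like the above is believed hard and underlies post-quantum cryptography.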
Learning DNFs under product distributions via {\mu}-biased quantum Fourier sampling
We show that DNF formulae can be quantum PAC-learned in polynomial time under
product distributions using a quantum example oracle. The best classical
algorithm (without access to membership queries) runs in superpolynomial time.
Our result extends the work by Bshouty and Jackson (1998) that proved that DNF
formulae are efficiently learnable under the uniform distribution using a
quantum example oracle. Our proof is based on a new quantum algorithm that
efficiently samples the coefficients of a {\mu}-biased Fourier transform.
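The {\mu}-biased Fourier coefficients that the quantum algorithm samples can be computed classically (in exponential time) straight from the definition. A sketch, assuming the standard {\mu}-biased basis phi_S(x) = prod_{i in S} (x_i - mu)/sqrt(mu(1-mu)) over the product distribution where each bit is 1 with probability mu:

```python
# Classical (exponential-time) computation of mu-biased Fourier coefficients.
# hat{f}_mu(S) = E_{x ~ mu^n}[f(x) * phi_S(x)].
from itertools import product
import math

def mu_biased_coefficient(f, n, mu, S):
    sd = math.sqrt(mu * (1 - mu))
    total = 0.0
    for x in product([0, 1], repeat=n):
        p = 1.0
        for xi in x:                      # Pr[x] under the mu-biased product measure
            p *= mu if xi == 1 else 1 - mu
        phi = 1.0
        for i in S:                       # phi_S(x) = prod (x_i - mu)/sd
            phi *= (x[i] - mu) / sd
        total += p * f(x) * phi
    return total

# AND of two bits with +-1 outputs; at mu = 1/2 the empty-set coefficient
# is E[f] = (1 - 3)/4 = -0.5.
f_and = lambda x: 1 if (x[0] == 1 and x[1] == 1) else -1
print(round(mu_biased_coefficient(f_and, 2, 0.5, ()), 6))   # -0.5
```

At mu = 1/2 this reduces to the ordinary Fourier transform used in Bshouty and Jackson's uniform-distribution result; the quantum contribution is sampling these coefficients efficiently for general mu.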
On the hardness of learning sparse parities
This work investigates the hardness of computing sparse solutions to systems
of linear equations over F_2. Consider the k-EvenSet problem: given a
homogeneous system of linear equations over F_2 on n variables, decide if there
exists a nonzero solution of Hamming weight at most k (i.e. a k-sparse
solution). While there is a simple O(n^{k/2})-time algorithm for it,
establishing fixed parameter intractability for k-EvenSet has been a notorious
open problem. Towards this goal, we show that unless k-Clique can be solved in
n^{o(k)} time, k-EvenSet has no poly(n)2^{o(sqrt{k})} time algorithm and no
polynomial time algorithm when k = (log n)^{2+eta} for any eta > 0.
Our work also shows that the non-homogeneous generalization of the problem --
which we call k-VectorSum -- is W[1]-hard on instances where the number of
equations is O(k log n), improving on previous reductions which produced
Omega(n) equations. We also show that for any constant eps > 0, given a system
of O(exp(O(k))log n) linear equations, it is W[1]-hard to decide if there is a
k-sparse linear form satisfying all the equations or if every function on at
most k variables (a k-junta) satisfies at most a (1/2 + eps)-fraction of the
equations. In the setting of computational learning, this shows hardness of
approximate non-proper learning of k-parities. In a similar vein, we use the
hardness of k-EvenSet to show that for any constant d, unless k-Clique can
be solved in n^{o(k)} time there is no poly(m, n)2^{o(sqrt{k})} time algorithm
to decide whether a given set of m points in F_2^n satisfies: (i) there exists
a non-trivial k-sparse homogeneous linear form evaluating to 0 on all the
points, or (ii) any non-trivial degree d polynomial P supported on at most k
variables evaluates to zero on approximately a Pr_{z ~ F_2^n}[P(z) = 0]
fraction of the points, i.e., P is fooled by the set of points.
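The simple O(n^{k/2})-time algorithm mentioned at the start of this abstract is a meet-in-the-middle search. A minimal Python sketch (illustrative, not from the paper): columns of the system are packed into integers, and two distinct column subsets of size at most ceil(k/2) with equal XOR combine into a nonzero solution of weight at most k.

```python
# Meet-in-the-middle search for a k-sparse solution of Ax = 0 over F_2.
from itertools import combinations

def k_evenset(A, n, k):
    """A: m x n bit matrix (list of rows). Returns a set of <= k column
    indices whose columns XOR to zero, or None if no such set exists."""
    m = len(A)
    # Pack column j into an integer whose i-th bit is A[i][j].
    col = [sum(A[i][j] << i for i in range(m)) for j in range(n)]
    half = (k + 1) // 2
    seen = {}                                  # syndrome -> subsets achieving it
    for size in range(half + 1):
        for S in combinations(range(n), size):
            syn = 0
            for j in S:
                syn ^= col[j]
            for T in seen.get(syn, []):
                cand = set(S) ^ set(T)         # equal syndromes: columns in S^T XOR to 0
                if cand and len(cand) <= k:
                    return cand
            seen.setdefault(syn, []).append(S)
    return None

# x1 + x2 + x3 = 0 and x3 + x4 = 0 over F_2; x1 = x2 = 1 is a 2-sparse solution.
A = [[1, 1, 1, 0],
     [0, 0, 1, 1]]
print(k_evenset(A, 4, 2))                 # {0, 1}
print(k_evenset([[1, 0], [0, 1]], 2, 1))  # None
```

Any weight-w solution with w <= k splits into two halves of size at most ceil(k/2), which collide on their syndrome, so the enumeration above is exhaustive; the hardness results in the abstract say that this n^{k/2}-type behavior is essentially unavoidable under the stated assumptions.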
- …