60 research outputs found

    Noise-Tolerant Learning, the Parity Problem, and the Statistical Query Model

    Full text link
    We describe a slightly sub-exponential time algorithm for learning parity functions in the presence of random classification noise. This results in a polynomial-time algorithm for the case of parity functions that depend on only the first O(log n log log n) bits of input. This is the first known instance of an efficient noise-tolerant algorithm for a concept class that is provably not learnable in the Statistical Query model of Kearns. Thus, we demonstrate that the set of problems learnable in the statistical query model is a strict subset of those problems learnable in the presence of noise in the PAC model. In coding-theory terms, what we give is a poly(n)-time algorithm for decoding linear k-by-n codes in the presence of random noise for the case of k = c log n log log n for some c > 0. (The case of k = O(log n) is trivial since one can just individually check each of the 2^k possible messages and choose the one that yields the closest codeword.) A natural extension of the statistical query model is to allow queries about statistical properties that involve t-tuples of examples (as opposed to single examples). The second result of this paper is to show that any class of functions learnable (strongly or weakly) with t-wise queries for t = O(log n) is also weakly learnable with standard unary queries. Hence this natural extension to the statistical query model does not increase the set of weakly learnable functions.
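    The trivial regime mentioned in the abstract (k = O(log n)) is easy to make concrete: with only 2^k candidate messages, one can decode by exhaustive search over them. Below is a minimal Python sketch of that brute-force case; the generator matrix G, the received word y, and the function name are illustrative choices, not anything taken from the paper.

```python
import itertools
import numpy as np

def brute_force_decode(G, y):
    """Decode a noisy word y against the linear code generated by G (over GF(2))
    by checking all 2^k messages and returning the one whose codeword is closest
    to y in Hamming distance. Feasible only when k = O(log n)."""
    k, n = G.shape
    best_msg, best_dist = None, n + 1
    for bits in itertools.product([0, 1], repeat=k):
        m = np.array(bits, dtype=np.uint8)
        codeword = (m @ G) % 2               # encode the candidate message
        dist = int(np.sum(codeword != y))    # Hamming distance to the received word
        if dist < best_dist:
            best_msg, best_dist = m, dist
    return best_msg, best_dist

# Toy example: an illustrative 3x8 generator matrix and a received word with one flipped bit.
rng = np.random.default_rng(0)
G = rng.integers(0, 2, size=(3, 8), dtype=np.uint8)
true_msg = np.array([1, 0, 1], dtype=np.uint8)
y = (true_msg @ G) % 2
y[2] ^= 1                                    # simulate random noise on one coordinate
print(brute_force_decode(G, y))
```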

    Learning DNFs under product distributions via μ-biased quantum Fourier sampling

    Full text link
    We show that DNF formulae can be quantum PAC-learned in polynomial time under product distributions using a quantum example oracle. The best classical algorithm (without access to membership queries) runs in superpolynomial time. Our result extends the work by Bshouty and Jackson (1998) that proved that DNF formulae are efficiently learnable under the uniform distribution using a quantum example oracle. Our proof is based on a new quantum algorithm that efficiently samples the coefficients of a μ-biased Fourier transform.
    Comment: 17 pages; v3 based on journal version; minor corrections and clarifications.
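    As a rough classical illustration of the μ-biased Fourier machinery (not the quantum sampling algorithm itself), the sketch below estimates a single μ-biased Fourier coefficient of a Boolean function from examples drawn under a product distribution, using the standard μ-biased basis; the target function f, the biases mu, and the sample size are illustrative assumptions.

```python
import numpy as np

def mu_biased_coefficient(f, mu, S, num_samples=50_000, seed=0):
    """Estimate the mu-biased Fourier coefficient f_hat(S) = E_{x~D}[f(x) * phi_S(x)],
    where D is the product distribution with Pr[x_i = 1] = mu[i] and
    phi_S(x) = prod_{i in S} (mu[i] - x[i]) / sqrt(mu[i] * (1 - mu[i]))."""
    rng = np.random.default_rng(seed)
    n = len(mu)
    x = (rng.random((num_samples, n)) < mu).astype(float)   # samples from D
    sigma = np.sqrt(mu * (1.0 - mu))
    phi = np.prod((mu[S] - x[:, S]) / sigma[S], axis=1)     # basis function on each sample
    values = np.array([f(row) for row in x])
    return float(np.mean(values * phi))

# Illustrative target: a small DNF-like function on 4 bits, f(x) = 1 iff (x1 AND x2) OR x4.
def f(x):
    return 1.0 if (x[0] and x[1]) or x[3] else -1.0

mu = np.array([0.3, 0.5, 0.7, 0.2])
print(mu_biased_coefficient(f, mu, S=[0, 1]))
```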

    On Statistical Query Sampling and NMR Quantum Computing

    Full text link
    We introduce a "Statistical Query Sampling" model, in which the goal of an algorithm is to produce an element of a hidden set S ⊆ {0,1}^n with reasonable probability. The algorithm gains information about S through oracle calls (statistical queries): the algorithm submits a query function g(·) and receives an approximation to Pr_{x∈S}[g(x)=1]. We show how this model is related to NMR quantum computing, in which only statistical properties of an ensemble of quantum systems can be measured, and in particular to the question of whether one can translate standard quantum algorithms to the NMR setting without putting all of their classical post-processing into the quantum system. Using Fourier analysis techniques developed in the related context of statistical query learning, we prove a number of lower bounds (both information-theoretic and cryptographic) on the ability of algorithms to produce an x ∈ S, even when the set S is fairly simple. These lower bounds point out a difficulty in efficiently applying NMR quantum computing to algorithms such as Shor's and Simon's algorithms, which involve significant classical post-processing. We also explicitly relate the notion of statistical query sampling to that of statistical query learning. An extended abstract appeared in the 18th Annual IEEE Conference on Computational Complexity (CCC 2003).
    Keywords: statistical query, NMR quantum computing, lower bound
    Comment: 17 pages, no figures.
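    A toy simulation may help fix the model: the hidden set S is accessible only through statistical queries that return Pr_{x∈S}[g(x)=1] up to an additive tolerance, and the algorithm must eventually output some x ∈ S. The class name, the random (rather than adversarial) perturbation, and the particular hidden set below are illustrative assumptions, not the paper's constructions.

```python
import random

class SQSamplingOracle:
    """Hidden set S (a subset of {0,1}^n) that can only be probed through
    statistical queries: given a predicate g, the oracle returns
    Pr_{x in S}[g(x) = 1] perturbed by at most tau."""

    def __init__(self, S, tau=0.01, seed=0):
        self.S = list(S)
        self.tau = tau
        self.rng = random.Random(seed)

    def query(self, g):
        p = sum(1 for x in self.S if g(x)) / len(self.S)
        return p + self.rng.uniform(-self.tau, self.tau)

# Illustrative hidden set: all 6-bit strings whose first two bits are 1.
n = 6
S = [tuple((i >> j) & 1 for j in range(n)) for i in range(2 ** n)
     if (i & 1) and ((i >> 1) & 1)]
oracle = SQSamplingOracle(S)

# Reconstruct a plausible member bit by bit from per-coordinate statistics,
# then output it as our candidate element of S.
candidate = tuple(1 if oracle.query(lambda x, j=j: x[j] == 1) > 0.5 else 0
                  for j in range(n))
print(candidate, candidate in set(S))
```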

    Weighted Polynomial Approximations: Limits for Learning and Pseudorandomness

    Get PDF
    Polynomial approximations to boolean functions have led to many positive results in computer science. In particular, polynomial approximations to the sign function underlie algorithms for agnostically learning halfspaces, as well as pseudorandom generators for halfspaces. In this work, we investigate the limits of these techniques by proving inapproximability results for the sign function. Firstly, the polynomial regression algorithm of Kalai et al. (SIAM J. Comput. 2008) shows that halfspaces can be learned with respect to log-concave distributions on ℝ^n in the challenging agnostic learning model. The power of this algorithm relies on the fact that under log-concave distributions, halfspaces can be approximated arbitrarily well by low-degree polynomials. We ask whether this technique can be extended beyond log-concave distributions, and establish a negative result. We show that polynomials of any degree cannot approximate the sign function to within arbitrarily low error for a large class of non-log-concave distributions on the real line, including those with densities proportional to exp(-|x|^0.99). Secondly, we investigate the derandomization of Chernoff-type concentration inequalities. Chernoff-type tail bounds on sums of independent random variables have pervasive applications in theoretical computer science. Schmidt et al. (SIAM J. Discrete Math. 1995) showed that these inequalities can be established for sums of random variables with only O(log(1/δ))-wise independence, for a tail probability of δ. We show that their results are tight up to constant factors. These results rely on techniques from weighted approximation theory, which studies how well functions on the real line can be approximated by polynomials under various distributions. We believe that these techniques will have further applications in other areas of computer science.
    Comment: 22 pages.
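    As a rough numerical illustration of the quantity at stake, the sketch below fits low-degree polynomials to sign(x) by least squares over samples from a standard Gaussian (a log-concave distribution) and reports the empirical squared error, which shrinks as the degree grows; the abstract's negative result says this kind of convergence breaks down for the heavier-tailed densities it considers. The distribution, degrees, and sample size here are illustrative choices, not the paper's constructions.

```python
import numpy as np

def sign_approx_error(degree, samples):
    """Least-squares fit of a degree-d polynomial to sign(x) over the given samples;
    returns the mean squared error, an empirical proxy for E[(p(x) - sign(x))^2]."""
    targets = np.sign(samples)
    coeffs = np.polynomial.polynomial.polyfit(samples, targets, degree)
    preds = np.polynomial.polynomial.polyval(samples, coeffs)
    return float(np.mean((preds - targets) ** 2))

rng = np.random.default_rng(0)
x = rng.standard_normal(50_000)          # a log-concave distribution on the real line

for d in (1, 3, 7, 11):
    print(d, sign_approx_error(d, x))    # in-sample error is non-increasing in the degree
```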

    A Quantum Computational Learning Algorithm

    Full text link
    An interesting classical result due to Jackson allows polynomial-time learning of the function class DNF using membership queries. Since in most practical learning situations access to a membership oracle is unrealistic, this paper explores the possibility that quantum computation might allow a learning algorithm for DNF that relies only on example queries. A natural extension of Fourier-based learning into the quantum domain is presented. The algorithm requires only an example oracle, and it runs in O(sqrt(2^n)) time, a result that appears to be classically impossible. The algorithm is unique among quantum algorithms in that it does not assume a priori knowledge of a function and does not operate on a superposition that includes all possible states.
    Comment: This is a reworked and improved version of a paper originally entitled "Quantum Harmonic Sieve: Learning DNF Using a Classical Example Oracle".
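    For small n, the textbook quantum Fourier-sampling primitive that Fourier-based quantum DNF learning (e.g., Bshouty and Jackson) is usually described in terms of can be simulated classically: prepare the uniform superposition, apply a ±1 phase according to f, apply Hadamards again, and measure; outcome s appears with probability f_hat(s)^2. The numpy simulation below sketches that standard primitive only; it is not the paper's specific algorithm, which, as noted above, avoids a superposition over all states. The example parity function is an illustrative choice.

```python
import numpy as np

def fourier_sampling_distribution(f_values):
    """Given the truth table of f: {0,1}^n -> {-1,+1} (length 2^n, index x encodes the bits),
    return the distribution over measurement outcomes s produced by quantum Fourier
    sampling: prob[s] = f_hat(s)^2, where f_hat(s) = (1/2^n) * sum_x f(x) * (-1)^(x . s)."""
    N = len(f_values)
    amps = np.array(f_values, dtype=float) / np.sqrt(N)   # state after the phase oracle
    # Apply H on every qubit: an in-place fast Walsh-Hadamard transform of the amplitudes.
    h = 1
    while h < N:
        for i in range(0, N, 2 * h):
            for j in range(i, i + h):
                a, b = amps[j], amps[j + h]
                amps[j], amps[j + h] = (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)
        h *= 2
    return amps ** 2     # Born rule: outcome probabilities are squared amplitudes

# Illustrative function: parity of bits 0 and 2 on 3 bits, so all Fourier weight sits
# on the single index s = 0b101 and sampling returns it with probability 1.
n = 3
f_values = [(-1) ** (((x >> 0) & 1) ^ ((x >> 2) & 1)) for x in range(2 ** n)]
print(fourier_sampling_distribution(f_values))
```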