
    Optimal quantum algorithm for polynomial interpolation

    We consider the number of quantum queries required to determine the coefficients of a degree-d polynomial over GF(q). A lower bound shown independently by Kane and Kutin and by Meyer and Pommersheim establishes that d/2 + 1/2 quantum queries are needed to solve this problem with bounded error, whereas an algorithm of Boneh and Zhandry shows that d quantum queries are sufficient. We show that the lower bound is achievable: d/2 + 1/2 quantum queries suffice to determine the polynomial with bounded error. Furthermore, we show that d/2 + 1 queries suffice to achieve success probability approaching 1 for large q. These upper bounds improve results of Boneh and Zhandry on the insecurity of cryptographic protocols against quantum attacks. We also show that our algorithm's success probability as a function of the number of queries is precisely optimal. Furthermore, the algorithm can be implemented with gate complexity poly(log q), with only a negligible decrease in the success probability. We end with a conjecture about the quantum query complexity of multivariate polynomial interpolation. Comment: 17 pages, minor improvements, added conjecture about multivariate interpolation.
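    For contrast with the quantum query counts above, the classical baseline is standard Lagrange interpolation, which uses exactly d+1 evaluation queries. The sketch below is a textbook construction, not the paper's quantum algorithm, and it assumes q is prime so that GF(q) is plain modular arithmetic; it simply makes the d+1 count concrete.

    ```python
    # Classical baseline: recover a degree-d polynomial over GF(p), p prime,
    # from d+1 evaluation queries via Lagrange interpolation. Per the abstract,
    # the quantum algorithm needs only d/2 + 1/2 queries for the same task.

    def poly_mul_linear(poly, root, p):
        """Multiply coefficient list `poly` by (x - root), working mod p."""
        out = [0] * (len(poly) + 1)
        for k, c in enumerate(poly):
            out[k] = (out[k] - root * c) % p   # constant-term contribution
            out[k + 1] = (out[k + 1] + c) % p  # x * c contribution
        return out

    def lagrange_interpolate(points, p):
        """Recover [c_0, ..., c_d] of the unique polynomial of degree <= d
        passing through the d+1 given (x, y) pairs, working mod p."""
        d = len(points) - 1
        coeffs = [0] * (d + 1)
        for i, (xi, yi) in enumerate(points):
            basis = [1]   # running product of (x - xj) over all j != i
            denom = 1     # running product of (xi - xj) over all j != i
            for j, (xj, _) in enumerate(points):
                if j != i:
                    basis = poly_mul_linear(basis, xj, p)
                    denom = (denom * (xi - xj)) % p
            scale = (yi * pow(denom, -1, p)) % p  # yi / denom mod p
            for k in range(d + 1):
                coeffs[k] = (coeffs[k] + scale * basis[k]) % p
        return coeffs

    # Degree-3 example over GF(7): f(x) = 2 + 3x + x^3, queried at 4 points.
    f = lambda x: (2 + 3 * x + x ** 3) % 7
    print(lagrange_interpolate([(x, f(x)) for x in range(4)], 7))  # [2, 3, 0, 1]
    ```

    For this degree-3 example the classical method uses 4 queries, whereas the paper's quantum algorithm would need only 2.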

    Optimal Quantum Sample Complexity of Learning Algorithms

    In learning theory, the VC dimension of a concept class $C$ is the most common way to measure its "richness." In the PAC model, $\Theta\big(\frac{d}{\varepsilon} + \frac{\log(1/\delta)}{\varepsilon}\big)$ examples are necessary and sufficient for a learner to output, with probability $1-\delta$, a hypothesis $h$ that is $\varepsilon$-close to the target concept $c$. In the related agnostic model, where the samples need not come from a $c \in C$, we know that $\Theta\big(\frac{d}{\varepsilon^2} + \frac{\log(1/\delta)}{\varepsilon^2}\big)$ examples are necessary and sufficient to output a hypothesis $h \in C$ whose error is at most $\varepsilon$ worse than that of the best concept in $C$. Here we analyze quantum sample complexity, where each example is a coherent quantum state. This model was introduced by Bshouty and Jackson, who showed that quantum examples are more powerful than classical examples in some fixed-distribution settings. However, Atıcı and Servedio, improved by Zhang, showed that in the PAC setting, quantum examples cannot be much more powerful: the required number of quantum examples is $\Omega\big(\frac{d^{1-\eta}}{\varepsilon} + d + \frac{\log(1/\delta)}{\varepsilon}\big)$ for all $\eta > 0$. Our main result is that quantum and classical sample complexity are in fact equal up to constant factors in both the PAC and agnostic models. We give two approaches. The first is a fairly simple information-theoretic argument that yields the above two classical bounds and yields the same bounds for quantum sample complexity up to a $\log(d/\varepsilon)$ factor. We then give a second approach that avoids the log-factor loss, based on analyzing the behavior of the "Pretty Good Measurement" on the quantum state-identification problems that correspond to learning. This shows classical and quantum sample complexity are equal up to constant factors. Comment: 31 pages LaTeX. Arxiv abstract shortened to fit in their 1920-character limit. Version 3: many small changes, no change in results.

    Optimal quantum sample complexity of learning algorithms

    In learning theory, the VC dimension of a concept class C is the most common way to measure its “richness.” A fundamental result says that the number of examples needed to learn an unknown target concept c ∈ C under an unknown distribution D is tightly determined by the VC dimension d of the concept class C. Specifically, in the PAC model Θ(d/ϵ + log(1/δ)/ϵ) examples are necessary and sufficient for a learner to output, with probability 1−δ, a hypothesis h that is ϵ-close to the target concept c (measured under D). In the related agnostic model, where the samples need not come from a c ∈ C, we know that Θ(d/ϵ^2 + log(1/δ)/ϵ^2) examples are necessary and sufficient to output a hypothesis h ∈ C whose error is at most ϵ worse than the error of the best concept in C. Here we analyze quantum sample complexity, where each example is a coherent quantum state. This model was introduced by Bshouty and Jackson (1999), who showed that quantum examples are more powerful than classical examples in some fixed-distribution settings. However, Atıcı and Servedio (2005), improved by Zhang (2010), showed that in the PAC setting (where the learner has to succeed for every distribution), quantum examples cannot be much more powerful: the required number of quantum examples is Ω(d^(1−η)/ϵ + d + log(1/δ)/ϵ) for arbitrarily small constant η > 0. Our main result is that quantum and classical sample complexity are in fact equal up to constant factors in both the PAC and agnostic models. We give two proof approaches. The first is a fairly simple information-theoretic argument that yields the above two classical bounds and yields the same bounds for quantum sample complexity up to a log(d/ϵ) factor. We then give a second approach that avoids the log-factor loss, based on analyzing the behavior of the “Pretty Good Measurement” on the quantum state-identification problems that correspond to learning. This shows classical and quantum sample complexity are equal up to constant factors for every concept class C.
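    To make the two Θ(·) bounds above concrete, here is a minimal sketch that plugs illustrative numbers into the PAC and agnostic expressions. The Θ(·) notation hides constant factors, so the outputs are proportionality estimates only; per the paper, the same orders apply whether the examples are classical or quantum.

    ```python
    # Evaluate the PAC and agnostic sample-complexity expressions for
    # illustrative parameter values (constants hidden by Theta are dropped).
    from math import log

    def pac_samples(d, eps, delta):
        """PAC model: Theta(d/eps + log(1/delta)/eps), constants suppressed."""
        return d / eps + log(1 / delta) / eps

    def agnostic_samples(d, eps, delta):
        """Agnostic model: Theta(d/eps^2 + log(1/delta)/eps^2)."""
        return d / eps**2 + log(1 / delta) / eps**2

    # Illustrative values: VC dimension d = 100, accuracy eps = 0.05,
    # confidence delta = 0.01.
    print(pac_samples(100, 0.05, 0.01))       # ~2092 examples (up to constants)
    print(agnostic_samples(100, 0.05, 0.01))  # ~41842 examples (up to constants)
    ```

    Note the 1/ϵ versus 1/ϵ^2 gap between the two models: at ϵ = 0.05, the agnostic setting requires roughly 20 times more examples than the PAC setting.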

    The quantum query complexity of learning multilinear polynomials

    No full text