Learning DNFs under product distributions via $\mu$-biased quantum Fourier sampling
We show that DNF formulae can be quantum PAC-learned in polynomial time under
product distributions using a quantum example oracle. The best known classical
algorithm (without access to membership queries) runs in superpolynomial time.
Our result extends the work by Bshouty and Jackson (1998) that proved that DNF
formulae are efficiently learnable under the uniform distribution using a
quantum example oracle. Our proof is based on a new quantum algorithm that
efficiently samples the coefficients of a $\mu$-biased Fourier transform.
Comment: 17 pages; v3 based on journal version; minor corrections and clarifications
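For orientation, here is a brief sketch of the objects this abstract refers to, in one standard convention (the paper's normalization may differ). Under the product distribution $D_\mu$ on $\{0,1\}^n$ in which each bit independently equals $1$ with probability $\mu$, the $\mu$-biased Fourier basis functions and coefficients of a Boolean function $f$ are
\[
  \chi_S^{\mu}(x) \;=\; \prod_{i \in S} \frac{\mu - x_i}{\sqrt{\mu(1-\mu)}},
  \qquad
  \widehat{f}_{\mu}(S) \;=\; \mathbb{E}_{x \sim D_{\mu}}\big[f(x)\,\chi_S^{\mu}(x)\big],
  \qquad S \subseteq [n],
\]
and a quantum example oracle supplies copies of the coherent superposition
\[
  |\psi_f\rangle \;=\; \sum_{x \in \{0,1\}^n} \sqrt{D_{\mu}(x)}\;|x, f(x)\rangle .
\]
Roughly speaking, measuring a suitable transform of $|\psi_f\rangle$ returns a set $S$ with probability proportional to $\widehat{f}_{\mu}(S)^2$, the quantum analogue of classical Fourier sampling.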
Optimal Quantum Sample Complexity of Learning Algorithms
In learning theory, the VC dimension of a concept class $C$ is the most common way to measure its "richness." A fundamental result says that the number of examples needed to learn an unknown target concept $c \in C$ under an unknown distribution $D$ is tightly determined by the VC dimension $d$ of $C$. Specifically, in the PAC model,
\[
  \Theta\Big(\frac{d}{\epsilon} + \frac{\log(1/\delta)}{\epsilon}\Big)
\]
examples are necessary and sufficient for a learner to output, with probability $1-\delta$, a hypothesis $h$ that is $\epsilon$-close to the target concept $c$ (measured under $D$). In the related agnostic model, where the samples need not come from a $c \in C$, we know that
\[
  \Theta\Big(\frac{d}{\epsilon^2} + \frac{\log(1/\delta)}{\epsilon^2}\Big)
\]
examples are necessary and sufficient to output a hypothesis $h \in C$ whose error is at most $\epsilon$ worse than the error of the best concept in $C$.
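To make the scaling concrete, here is a quick worked instance with illustrative numbers (not taken from the paper): let $d = 100$, $\epsilon = 0.1$, $\delta = 0.05$, and read $\log$ as the natural logarithm. Ignoring the hidden constants,
\[
  \frac{d}{\epsilon} + \frac{\log(1/\delta)}{\epsilon}
  \;=\; \frac{100}{0.1} + \frac{\ln 20}{0.1}
  \;\approx\; 1000 + 30,
  \qquad
  \frac{d}{\epsilon^2} + \frac{\log(1/\delta)}{\epsilon^2}
  \;\approx\; 10{,}000 + 300,
\]
so moving from the PAC model to the agnostic model costs roughly an extra factor of $1/\epsilon$ in the number of examples.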
Here we analyze quantum sample complexity, where each example is a coherent quantum state. This model was introduced by Bshouty and Jackson (1999), who showed that quantum examples are more powerful than classical examples in some fixed-distribution settings. However, Atıcı and Servedio (2005), improved by Zhang (2010), showed that in the PAC setting (where the learner has to succeed for every distribution), quantum examples cannot be much more powerful: the required number of quantum examples is
\[
  \Omega\Big(\frac{d^{1-\eta}}{\epsilon} + d + \frac{\log(1/\delta)}{\epsilon}\Big)
  \quad \text{for arbitrarily small constant } \eta > 0.
\]
Our main result is that quantum and classical sample complexity are in fact equal up to constant factors in both the PAC and agnostic models. We give two proof approaches. The first is a fairly simple information-theoretic argument that yields the above two classical bounds and yields the same bounds for quantum sample complexity up to a $\log(d/\epsilon)$ factor. We then give a second approach that avoids the log-factor loss, based on analyzing the behavior of the "Pretty Good Measurement" on the quantum state-identification problems that correspond to learning. This shows that classical and quantum sample complexity are equal up to constant factors for every concept class $C$.
Comment: 31 pages LaTeX. arXiv abstract shortened to fit in their 1920-character limit. Version 3: many small changes, no change in results
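For context, a minimal sketch of the two objects named above, in one standard convention (the paper's exact setup may differ). A quantum example for target concept $c$ under distribution $D$ is the coherent state
\[
  |\psi_c\rangle \;=\; \sum_{x \in \{0,1\}^n} \sqrt{D(x)}\;|x, c(x)\rangle,
\]
so after $T$ examples the learner holds $|\psi_c\rangle^{\otimes T}$, and learning reduces to identifying $c$ from the ensemble $\{|\psi_c\rangle^{\otimes T} : c \in C\}$. The Pretty Good Measurement for an ensemble $\{(p_i, \rho_i)\}$ uses the measurement operators
\[
  E_i \;=\; \rho^{-1/2}\, p_i \rho_i\, \rho^{-1/2},
  \qquad \rho \;=\; \sum_j p_j \rho_j
\]
(with the inverse taken on the support of $\rho$); its success probability is at least the square of that of the optimal measurement, which is what makes it a tractable proxy for the optimal identification strategy in arguments of this kind.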