Exponential separations between classical and quantum learners
Despite significant effort, the quantum machine learning community has only
demonstrated quantum learning advantages for artificial cryptography-inspired
datasets when dealing with classical data. In this paper we address the
challenge of finding learning problems where quantum learning algorithms can
achieve a provable exponential speedup over classical learning algorithms. We
reflect on computational learning theory concepts related to this question and
discuss how subtle differences in definitions can result in significantly
different requirements and tasks for the learner to meet and solve. We examine
existing learning problems with provable quantum speedups and find that they
largely rely on the classical hardness of evaluating the function that
generates the data, rather than identifying it. To address this, we present two
new learning separations where the classical difficulty primarily lies in
identifying the function generating the data. Furthermore, we explore
computational hardness assumptions that can be leveraged to prove quantum
speedups in scenarios where data is quantum-generated, which implies likely
quantum advantages in a plethora of more natural settings (e.g., in condensed
matter and high energy physics). We also discuss the limitations of the
classical shadow paradigm in the context of learning separations, and how
physically-motivated settings such as characterizing phases of matter and
Hamiltonian learning fit in the computational learning framework. Comment:
this article supersedes arXiv:2208.0633
A Survey of Quantum Learning Theory
This paper surveys quantum learning theory: the theoretical aspects of
machine learning using quantum computers. We describe the main results known
for three models of learning: exact learning from membership queries, and
Probably Approximately Correct (PAC) and agnostic learning from classical or
quantum examples. Comment: 26 pages LaTeX. v2: many small changes to improve
the presentation. This version will appear as a Complexity Theory Column in
SIGACT News in June 2017. v3: fixed a small ambiguity in the definition of
gamma(C) and updated a reference.
Nonnegative/binary matrix factorization with a D-Wave quantum annealer
D-Wave quantum annealers represent a novel computational architecture and
have attracted significant interest, but have been used for few real-world
computations. Machine learning has been identified as an area where quantum
annealing may be useful. Here, we show that the D-Wave 2X can be effectively
used as part of an unsupervised machine learning method. This method can be
used to analyze large datasets; the D-Wave hardware limits only the number of
features that can be extracted from the dataset. We apply this method to learn
features from a set of facial images.
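A minimal sketch of the nonnegative/binary factorization named in the title may
help make the method concrete: a data matrix V is approximated as W H with W
real and nonnegative and H binary, alternating a nonnegative least-squares
update for W with a binary (QUBO-style) update for H. This is an illustrative
stand-in, not the paper's implementation; the brute-force search below plays
the role the D-Wave annealer would, and the matrix sizes, feature count k and
iteration budget are arbitrary assumptions.

    # Sketch of nonnegative/binary matrix factorization V ~= W @ H, with W >= 0
    # real-valued and H binary. Brute-force enumeration of binary columns stands
    # in for the annealer's QUBO solve, so it is only feasible for small k.
    import itertools
    import numpy as np
    from scipy.optimize import nnls

    def nbmf(V, k, n_iters=20, seed=None):
        rng = np.random.default_rng(seed)
        m, n = V.shape
        H = rng.integers(0, 2, size=(k, n)).astype(float)
        W = rng.random((m, k))
        candidates = np.array(list(itertools.product([0.0, 1.0], repeat=k)))
        for _ in range(n_iters):
            # W-update: one nonnegative least-squares problem per row of V.
            for i in range(m):
                W[i], _ = nnls(H.T, V[i])
            # H-update: best binary column by exhaustive search (the annealer's job).
            for j in range(n):
                errs = np.linalg.norm(W @ candidates.T - V[:, [j]], axis=0)
                H[:, j] = candidates[np.argmin(errs)]
        return W, H

    # Toy usage: factor a random nonnegative matrix into k = 4 binary features.
    V = np.random.default_rng(0).random((30, 50))
    W, H = nbmf(V, k=4, seed=1)
    print("reconstruction error:", np.linalg.norm(V - W @ H))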
Shadows of quantum machine learning
Quantum machine learning is often highlighted as one of the most promising practical applications for which quantum computers could provide a computational advantage. However, a major obstacle to the widespread use of quantum machine learning models in practice is that these models, even once trained, still require access to a quantum computer in order to be evaluated on new data. To solve this issue, we introduce a class of quantum models where quantum resources are only required during training, while the deployment of the trained model is classical. Specifically, the training phase of our models ends with the generation of a ‘shadow model’ from which the classical deployment becomes possible. We prove that: (i) this class of models is universal for classically-deployed quantum machine learning; (ii) it does have restricted learning capacities compared to ‘fully quantum’ models, but nonetheless (iii) it achieves a provable learning advantage over fully classical learners, contingent on widely believed assumptions in complexity theory. These results provide compelling evidence that quantum machine learning can confer learning advantages across a substantially broader range of scenarios, where quantum computers are exclusively employed during the training phase. By enabling classical deployment, our approach facilitates the implementation of quantum machine learning models in various practical contexts.
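To make the notion of a classically-deployed "shadow" more tangible, here is a
small, self-contained sketch of the classical-shadow protocol (randomized
single-qubit Pauli measurements) on a two-qubit state: the quantum measurements
are simulated once, and a Pauli expectation value is then predicted purely from
the stored classical records. This is a generic illustration of the underlying
idea, not the paper's construction; the state, observable and number of
snapshots are arbitrary choices.

    # Classical-shadow sketch: simulate randomized Pauli-basis measurements on a
    # fixed two-qubit state, then estimate <X x X> using only the stored records.
    import numpy as np

    rng = np.random.default_rng(1)

    I2 = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Hd = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard
    Sdg = np.array([[1, 0], [0, -1j]], dtype=complex)              # S-dagger
    # Rotation U per basis letter, chosen so that U P U^dagger = Z.
    BASIS_ROT = {"X": Hd, "Y": Hd @ Sdg, "Z": I2}

    # "Trained" state (stand-in for a quantum model's output): cos(t)|00> + sin(t)|11>.
    t = 0.4
    psi = np.zeros(4, dtype=complex)
    psi[0], psi[3] = np.cos(t), np.sin(t)

    def snapshot(psi, rng):
        """Measure each qubit in a random Pauli basis; return (bases, bits)."""
        bases = rng.choice(["X", "Y", "Z"], size=2)
        U = np.kron(BASIS_ROT[bases[0]], BASIS_ROT[bases[1]])
        probs = np.abs(U @ psi) ** 2
        out = rng.choice(4, p=probs / probs.sum())
        return bases, [(out >> 1) & 1, out & 1]

    def estimate(records, pauli=("X", "X")):
        """Shadow estimate of a two-qubit Pauli expectation from stored records."""
        total = 0.0
        for bases, bits in records:
            v = 1.0
            for q, p in enumerate(pauli):
                v *= 3.0 * (1 - 2 * bits[q]) if bases[q] == p else 0.0
            total += v
        return total / len(records)

    records = [snapshot(psi, rng) for _ in range(20000)]           # quantum phase
    exact = np.real(psi.conj() @ np.kron(X, X) @ psi)              # = sin(2t)
    print("shadow estimate:", estimate(records), "exact:", exact)  # classical phase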
Quantum machine learning: a classical perspective
Recently, increased computational power and data availability, as well as
algorithmic advances, have led machine learning techniques to impressive
results in regression, classification, data-generation and reinforcement
learning tasks. Despite these successes, the proximity to the physical limits
of chip fabrication and the increasing size of datasets are motivating a
growing number of researchers to explore the possibility of harnessing the
power of quantum computation to speed-up classical machine learning algorithms.
Here we review the literature in quantum machine learning and discuss
perspectives for a mixed readership of classical machine learning and quantum
computation experts. Particular emphasis will be placed on clarifying the
limitations of quantum algorithms, how they compare with their best classical
counterparts and why quantum resources are expected to provide advantages for
learning problems. Learning in the presence of noise and certain
computationally hard problems in machine learning are identified as promising
directions for the field. Practical questions, like how to upload classical
data into quantum form, will also be addressed. Comment: v3 33 pages; typos
corrected and references added.
Optimal Quantum Sample Complexity of Learning Algorithms
In learning theory, the VC dimension $d$ of a
concept class $C$ is the most common way to measure its "richness." In the PAC
model, $\Theta\Big(\frac{d}{\epsilon} + \frac{\log(1/\delta)}{\epsilon}\Big)$
examples are necessary and sufficient for a learner to output, with probability
$1-\delta$, a hypothesis that is $\epsilon$-close to the target concept $c$. In
the related agnostic model, where the samples need not come from a $c \in C$,
we know that $\Theta\Big(\frac{d}{\epsilon^2} + \frac{\log(1/\delta)}{\epsilon^2}\Big)$
examples are necessary and sufficient to output a hypothesis whose error is at
most $\epsilon$ worse than the best concept in $C$.
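As a quick numerical illustration of how these two bounds scale, the snippet
below evaluates them for concrete values of d, epsilon and delta, with the
constants hidden by the Theta-notation set to 1 (an arbitrary choice made
purely for illustration).

    # Illustrative only: the Theta bounds suppress constant factors, which are
    # set to 1 here.
    from math import ceil, log

    def pac_bound(d, eps, delta):
        """~ d/eps + log(1/delta)/eps examples (realizable PAC setting)."""
        return ceil(d / eps + log(1 / delta) / eps)

    def agnostic_bound(d, eps, delta):
        """~ d/eps^2 + log(1/delta)/eps^2 examples (agnostic setting)."""
        return ceil(d / eps**2 + log(1 / delta) / eps**2)

    # Example: VC dimension 10, accuracy 0.05, confidence 1 - delta = 0.99.
    print(pac_bound(10, 0.05, 0.01))       # 293
    print(agnostic_bound(10, 0.05, 0.01))  # 5843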
Here we analyze quantum sample complexity, where each example is a coherent
quantum state. This model was introduced by Bshouty and Jackson, who showed
that quantum examples are more powerful than classical examples in some
fixed-distribution settings. However, Atici and Servedio, improved by Zhang,
showed that in the PAC setting, quantum examples cannot be much more powerful:
the required number of quantum examples is
$\Omega\Big(\frac{d^{1-\eta}}{\epsilon} + d + \frac{\log(1/\delta)}{\epsilon}\Big)$
for all $\eta > 0$. Our main result is that quantum and classical sample
complexity are in fact equal up to constant factors in both the PAC and
agnostic models. We give two approaches. The first is a fairly simple
information-theoretic argument that yields the above two classical bounds and
yields the same bounds for quantum sample complexity up to a $\log(d/\epsilon)$
factor. We then give a second approach that avoids the log-factor loss, based
on analyzing the behavior of the "Pretty Good Measurement" on the quantum state
identification problems that correspond to learning. This shows classical and
quantum sample complexity are equal up to constant factors. Comment: 31 pages
LaTeX. Arxiv abstract shortened to fit in their 1920-character limit. Version
3: many small changes, no change in results.
Certainty and Uncertainty in Quantum Information Processing
This survey, aimed at information processing researchers, highlights
intriguing but lesser known results, corrects misconceptions, and suggests
research areas. Themes include: certainty in quantum algorithms; the "fewer
worlds" theory of quantum mechanics; quantum learning; probability theory
versus quantum mechanics. Comment: Invited paper accompanying invited talk to
AAAI Spring Symposium 2007. Comments, corrections, and suggestions would be
most welcome.