Quantum machine learning: a classical perspective
Recently, increased computational power and data availability, as well as
algorithmic advances, have led machine learning techniques to impressive
results in regression, classification, data-generation and reinforcement
learning tasks. Despite these successes, the proximity to the physical limits
of chip fabrication and the increasing size of datasets are motivating a
growing number of researchers to explore the possibility of harnessing the
power of quantum computation to speed up classical machine learning algorithms.
Here we review the literature in quantum machine learning and discuss
perspectives for a mixed readership of classical machine learning and quantum
computation experts. Particular emphasis will be placed on clarifying the
limitations of quantum algorithms, how they compare with their best classical
counterparts and why quantum resources are expected to provide advantages for
learning problems. Learning in the presence of noise and certain
computationally hard problems in machine learning are identified as promising
directions for the field. Practical questions, like how to upload classical
data into quantum form, will also be addressed.
Comment: v3, 33 pages; typos corrected and references added
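The data-uploading question raised above is often answered with amplitude encoding, which maps a classical vector onto the amplitudes of a quantum state. A minimal numerical sketch, assuming NumPy (the function name and zero-padding convention are illustrative, not taken from the review):

```python
import numpy as np

def amplitude_encode(x):
    """Encode a classical vector in the amplitudes of an n-qubit state.

    Zero-pads x to the next power-of-two length and normalizes it,
    so the result can be read as the amplitude vector of the state
    |psi> = (1/||x||) * sum_i x_i |i>.
    """
    x = np.asarray(x, dtype=float)
    n_qubits = max(1, int(np.ceil(np.log2(len(x)))))
    padded = np.zeros(2 ** n_qubits)
    padded[: len(x)] = x
    norm = np.linalg.norm(padded)
    if norm == 0:
        raise ValueError("cannot encode the zero vector")
    return padded / norm, n_qubits

# A length-3 vector needs 2 qubits (4 amplitudes); here ||x|| = 5,
# so the amplitudes come out as 0.6, 0.8, 0, 0.
state, n = amplitude_encode([3.0, 4.0, 0.0])
```

The catch is on the hardware side: preparing an arbitrary state of this form generally requires circuit depth scaling with the vector length, which is one reason data loading is a nontrivial practical question rather than a solved preprocessing step.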
Quantum Computing in the NISQ era and beyond
Noisy Intermediate-Scale Quantum (NISQ) technology will be available in the
near future. Quantum computers with 50-100 qubits may be able to perform tasks
which surpass the capabilities of today's classical digital computers, but
noise in quantum gates will limit the size of quantum circuits that can be
executed reliably. NISQ devices will be useful tools for exploring many-body
quantum physics, and may have other useful applications, but the 100-qubit
quantum computer will not change the world right away --- we should regard it
as a significant step toward the more powerful quantum technologies of the
future. Quantum technologists should continue to strive for more accurate
quantum gates and, eventually, fully fault-tolerant quantum computing.
Comment: 20 pages. Based on a Keynote Address at Quantum Computing for Business, 5 December 2017. (v3) Formatted for publication in Quantum; minor revision
Quantum Annealing - Foundations and Frontiers
We briefly review computational methods for solving optimization problems.
First, classical methods such as the Metropolis algorithm and simulated
annealing are discussed. We continue with a description of quantum methods,
namely adiabatic quantum computation and quantum annealing.
Next, the new D-Wave computer and the recent progress in the field claimed by
the D-Wave group are discussed. We present a set of criteria which could help
in testing the quantum features of these computers. We conclude with a list of
considerations regarding future research.
Comment: 22 pages, 6 figures. EPJ-ST Discussion and Debate Issue: Quantum Annealing: The fastest route to large scale quantum computation?, Eds. A. Das, S. Suzuki (2014)
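The classical baseline named above, simulated annealing with Metropolis acceptance, fits in a few lines. A toy sketch (the objective function, cooling schedule, and parameter values are illustrative choices, not taken from the review):

```python
import math
import random

def simulated_annealing(energy, neighbor, x0, t0=10.0, cooling=0.95,
                        steps=2000, seed=0):
    """Minimize `energy` by simulated annealing.

    Metropolis rule: a move with energy change dE is accepted with
    probability min(1, exp(-dE / T)); the temperature T is lowered
    geometrically, so uphill moves become increasingly rare.
    """
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        ey = energy(y)
        # Accept downhill moves always, uphill moves with Boltzmann weight.
        if ey <= e or rng.random() < math.exp((e - ey) / t):
            x, e = y, ey
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling
    return best_x, best_e

# Toy objective: a double well whose global minimum sits near x = -2.
f = lambda x: (x * x - 4) ** 2 + 0.1 * x
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
best_x, best_e = simulated_annealing(f, step, x0=-3.0)
```

Quantum annealing, by contrast, replaces these thermal fluctuations with quantum fluctuations (a transverse field) that are gradually switched off, which is the distinction the review develops.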
Cut Size Statistics of Graph Bisection Heuristics
We investigate the statistical properties of cut sizes generated by heuristic
algorithms that approximately solve the graph bisection problem. On an
ensemble of sparse random graphs, we find empirically that the distribution of
the cut sizes found by "local" algorithms becomes peaked as the number of
vertices in the graphs becomes large. Evidence is given that this distribution
tends towards a Gaussian whose mean and variance scale linearly with the
number of vertices of the graphs. Given the distribution of cut sizes
associated with each heuristic, we provide a ranking procedure which takes into
account both the quality of the solutions and the speed of the algorithms. This
procedure is demonstrated for a selection of local graph bisection heuristics.
Comment: 17 pages, 5 figures. Submitted to SIAM Journal on Optimization; also available at http://ipnweb.in2p3.fr/~martin
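A small-scale version of the experiment above, a local swap heuristic run over an ensemble of sparse random graphs, can be sketched as follows (the graph sizes, the particular swap heuristic, and all parameters are illustrative, not the paper's exact algorithms):

```python
import random
import statistics

def random_graph(n, m, rng):
    """Sparse random graph: n vertices, m distinct edges."""
    edges = set()
    while len(edges) < m:
        u, v = rng.sample(range(n), 2)
        edges.add((min(u, v), max(u, v)))
    return list(edges)

def cut_size(edges, side):
    """Number of edges crossing the bisection."""
    return sum(1 for u, v in edges if side[u] != side[v])

def local_bisection(n, edges, rng, sweeps=20):
    """Local heuristic: start from a random balanced split, then
    repeatedly swap a pair of vertices across the cut, keeping the
    swap only if it does not worsen the cut size."""
    side = [0] * (n // 2) + [1] * (n - n // 2)
    rng.shuffle(side)
    best = cut_size(edges, side)
    for _ in range(sweeps * n):
        u, v = rng.sample(range(n), 2)
        if side[u] == side[v]:
            continue  # only cross-cut swaps keep the split balanced
        side[u], side[v] = side[v], side[u]
        c = cut_size(edges, side)
        if c <= best:
            best = c
        else:
            side[u], side[v] = side[v], side[u]  # undo a worsening swap
    return best

# Empirical distribution of cut sizes over an ensemble of instances.
rng = random.Random(1)
cuts = [local_bisection(60, random_graph(60, 120, rng), rng)
        for _ in range(30)]
print(f"mean {statistics.mean(cuts):.1f}, stdev {statistics.stdev(cuts):.1f}")
```

Repeating this for several graph sizes and tracking how the mean and variance of `cuts` grow with the number of vertices is the shape of the scaling evidence the abstract describes.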
Physical consequences of P≠NP and the DMRG-annealing conjecture
Computational complexity theory contains a corpus of theorems and conjectures
regarding the time a Turing machine will need to solve certain types of
problems as a function of the input size. Nature need not be a Turing
machine and, thus, these theorems do not apply to it directly. But classical
simulations of physical processes are programs running on Turing
machines and, as such, are subject to them. In this work, computational
complexity theory is applied to classical simulations of systems performing an
adiabatic quantum computation (AQC), based on an annealed extension of the
density matrix renormalization group (DMRG). We conjecture that the
computational time required for those classical simulations is controlled
solely by the maximal entanglement found during the process. Thus, lower
bounds on the growth of entanglement with the system size can be provided. In
some cases, quantum phase transitions can be predicted to take place in certain
inhomogeneous systems. Concretely, physical conclusions are drawn from the
assumption that the complexity classes P and NP differ. As a
by-product, an alternative measure of entanglement is proposed which, via
Chebyshev's inequality, allows one to establish strict bounds on the required
computational time.
Comment: Accepted for publication in JSTAT