Large-Scale Simulation of Shor's Quantum Factoring Algorithm
Shor's factoring algorithm is one of the most anticipated applications of
quantum computing. However, the limited capabilities of today's quantum
computers only permit a study of Shor's algorithm for very small numbers. Here
we show how large GPU-based supercomputers can be used to assess the
performance of Shor's algorithm for numbers that are out of reach for current
and near-term quantum hardware. First, we study Shor's original factoring
algorithm. While theoretical bounds suggest success probabilities of only 3-4
%, we find average success probabilities above 50 %, due to a high frequency of
"lucky" cases, defined as successful factorizations despite unmet sufficient
conditions. Second, we investigate a powerful post-processing procedure, by
which the success probability can be brought arbitrarily close to one, with
only a single run of Shor's quantum algorithm. Finally, we study the
effectiveness of this post-processing procedure in the presence of typical
errors in quantum processing hardware. We find that the quantum factoring
algorithm exhibits a particular form of universality and resilience against the
different types of errors. The largest semiprime that we have factored by
executing Shor's algorithm on a GPU-based supercomputer, without exploiting
prior knowledge of the solution, is 549755813701 = 712321 * 771781. We put
forward the challenge of factoring, without oversimplification, a non-trivial
semiprime larger than this number on any quantum computing device.
Comment: differs from the published version in formatting and style; open source code available at https://jugit.fz-juelich.de/qip/shorgp
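For concreteness, here is a minimal sketch (our own illustration, not the paper's GPU code; all names are ours) of the classical post-processing at the heart of Shor's algorithm: a measured phase is converted into a candidate period r by a continued-fraction expansion, and r then yields factors of N through greatest common divisors. The sufficient conditions mentioned above are the textbook checks that r is even and that a^(r/2) is a nontrivial square root of 1 modulo N.

```python
# Minimal sketch of Shor's classical post-processing (textbook form, not the
# paper's GPU code): recover the period r from a measured phase and try to
# split N into factors.
from fractions import Fraction
from math import gcd

def factors_from_phase(phase, a, N):
    """Turn one measured phase ~ s/r into a factorization of N, if possible."""
    # Continued-fraction step: best rational approximation s/r with r < N.
    r = Fraction(phase).limit_denominator(N - 1).denominator
    if r % 2 == 1 or pow(a, r, N) != 1:
        return None                      # odd or wrong period: run failed
    x = pow(a, r // 2, N)
    if x == N - 1:
        return None                      # trivial square root of 1 mod N
    for f in (gcd(x - 1, N), gcd(x + 1, N)):
        if 1 < f < N:
            return f, N // f
    return None

# Toy usage: N = 15, a = 7 has period r = 4, so phases are multiples of 1/4.
print(factors_from_phase(0.25, 7, 15))   # -> (3, 5)
```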
Random State Technology
We review and extend, in a self-contained way, the mathematical foundations
of numerical simulation methods that are based on the use of random states. The
power and versatility of this simulation technology is illustrated by
calculations of physically relevant properties such as the density of states of
large single particle systems, the specific heat, current-current correlations,
density-density correlations, and electron spin resonance spectra of many-body
systems. We explore a new field of applications of the random state technology
by showing that it can be used to analyze numerical simulations and experiments
that aim to realize quantum supremacy on a noisy intermediate-scale quantum
processor. Additionally, we show that concepts of the random state technology
prove useful in quantum information theory.
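The core trick can be illustrated in a few lines (a sketch under our own conventions, not the paper's code): for random vectors z with E[z z†] = 1, the sample average of z† A z estimates Tr A, the building block behind densities of states and thermal expectation values.

```python
# Random-state trace estimation (Hutchinson-style sketch, ours): the average
# of z^dagger H z over random states z converges to Tr H.
import numpy as np

rng = np.random.default_rng(42)
D, S = 512, 200                          # Hilbert-space dimension, samples

H = rng.normal(size=(D, D))              # random Hermitian stand-in for a
H = (H + H.T) / 2                        # physical Hamiltonian

est = 0.0
for _ in range(S):
    z = (rng.normal(size=D) + 1j * rng.normal(size=D)) / np.sqrt(2)
    est += np.vdot(z, H @ z).real        # E[|z_i|^2] = 1, so E[...] = Tr H
est /= S

print(f"random-state estimate {est:.2f} vs exact trace {np.trace(H):.2f}")
```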
Benchmarking gate-based quantum computers
With the advent of public access to small gate-based quantum processors, it
becomes necessary to develop a benchmarking methodology such that independent
researchers can validate the operation of these processors. We explore the
usefulness of a number of simple quantum circuits as benchmarks for gate-based
quantum computing devices, and show that circuits performing identity
operations are simple, scalable, and sensitive to gate errors, which makes
them well suited for this task. We illustrate the procedure by presenting benchmark
results for the IBM Quantum Experience, a cloud-based platform for gate-based
quantum computing.
Comment: Accepted for publication in Computer Physics Communications
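To see why identity circuits are sensitive probes, consider this toy model (ours, not the paper's benchmark suite): gate pairs that should cancel exactly leave the state invariant, so a systematic gate error accumulates coherently with circuit depth and shows up directly in the return probability.

```python
# Toy identity-circuit benchmark (our illustration): pairs of X rotations that
# should cancel, each carrying a small systematic over-rotation.
import numpy as np

def rx(theta):
    """Single-qubit rotation about the X axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def survival_probability(n_pairs, over_rotation):
    """Probability of returning to |0> after n_pairs of imperfect (X, X^-1)."""
    state = np.array([1.0, 0.0], dtype=complex)
    for _ in range(n_pairs):
        state = rx(np.pi + over_rotation) @ state    # imperfect X
        state = rx(-np.pi + over_rotation) @ state   # imperfect inverse
    return abs(state[0]) ** 2

for n in (1, 10, 100):                   # error grows with depth: cos^2(n*eps)
    print(n, round(survival_probability(n, over_rotation=0.01), 4))
```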
Hybrid Quantum Classical Simulations
We report on two major hybrid applications of quantum computing, namely, the
quantum approximate optimisation algorithm (QAOA) and the variational quantum
eigensolver (VQE). Both are hybrid quantum-classical algorithms as they require
incremental communication between a classical central processing unit and a
quantum processing unit to solve a problem. We find that the QAOA scales much
better to larger problems than random guessing, but requires significant
computational resources. In contrast, a coarsely discretised version of quantum
annealing called approximate quantum annealing (AQA) can reach the same
promising scaling behaviour using far fewer computational resources. For the
VQE, we find reasonable results in approximating the ground state energy of the
Heisenberg model when suitable choices of initial states and parameters are
used. Our design and implementation of a general quasi-dynamical evolution
further improves these results.
Comment: This article is a book contribution. The book is freely available at http://hdl.handle.net/2128/3184
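For orientation, the standard depth-p QAOA ansatz (textbook form, not specific to this work) makes the incremental communication explicit: the quantum processor prepares the state and estimates the energy, while a classical optimizer updates the angles.

```latex
% Standard depth-p QAOA ansatz: H_C encodes the cost function,
% H_M = \sum_j X_j is the transverse-field mixer.
\[
  |\gamma,\beta\rangle
    = \prod_{k=1}^{p} e^{-i\beta_k H_M}\, e^{-i\gamma_k H_C}\,
      |+\rangle^{\otimes n},
  \qquad
  (\gamma^{\ast},\beta^{\ast})
    = \operatorname*{arg\,min}_{\gamma,\beta}
      \langle \gamma,\beta | H_C | \gamma,\beta \rangle .
\]
```

As we read the abstract, AQA replaces the optimisation loop by angles taken from a coarsely discretised annealing schedule, which removes most of the classical overhead.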
Benchmarking Advantage and D-Wave 2000Q quantum annealers with exact cover problems
We benchmark the quantum processing units of the largest quantum annealers to
date, the 5000+ qubit quantum annealer Advantage and its 2000+ qubit
predecessor D-Wave 2000Q, using tail assignment and exact cover problems from
aircraft scheduling scenarios. The benchmark set contains small, intermediate,
and large problems with both sparsely connected and almost fully connected
instances. We find that Advantage outperforms D-Wave 2000Q for almost all
problems, with a notable increase in success rate and solvable problem size.
In particular, Advantage also solves the largest problems in the set, with
120 logical qubits, which D-Wave 2000Q cannot solve at all. Furthermore, problems
that can still be solved by D-Wave 2000Q are solved faster by Advantage. We
find, however, that D-Wave 2000Q can achieve better success rates for sparsely
connected problems that do not require the many new couplers present on
Advantage, so improving the connectivity of a quantum annealer does not per se
improve its performance.
Comment: new experiments to test the conjecture about unused couplers (appendix B)
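To make the problem mapping concrete, here is a minimal sketch (ours, not the paper's tail-assignment instances) of how exact cover becomes a QUBO that an annealer can sample: each element of the universe must be covered by exactly one selected subset, with deviations penalized quadratically.

```python
# Exact cover as a QUBO (illustration with a made-up instance): the energy
# H(x) = sum_e (sum_{j: e in S_j} x_j - 1)^2 is zero iff x is an exact cover.
from itertools import product

universe = {1, 2, 3, 4}
subsets = [{1, 2}, {3, 4}, {1, 3}, {2, 4}, {4}]

def qubo_energy(x):
    """Quadratic penalty: each element covered exactly once costs nothing."""
    return sum((sum(x[j] for j, s in enumerate(subsets) if e in s) - 1) ** 2
               for e in universe)

# Brute force on the toy instance; an annealer samples low-energy x instead.
best = min(product((0, 1), repeat=len(subsets)), key=qubo_energy)
print(best, qubo_energy(best))           # e.g. (0, 0, 1, 1, 0) with energy 0
```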
Massively parallel quantum computer simulator, eleven years later
A revised version of the massively parallel simulator of a universal quantum
computer, described in this journal eleven years ago, is used to benchmark
various gate-based quantum algorithms on some of the most powerful
supercomputers that exist today. Adaptive encoding of the wave function reduces
the memory requirement by a factor of eight, making it possible to simulate
universal quantum computers with up to 48 qubits on the Sunway TaihuLight and
on the K computer. The simulator exhibits close-to-ideal weak-scaling behavior
on the Sunway TaihuLight, on the K computer, on an IBM Blue Gene/Q, and on Intel
Xeon based clusters, implying that the combination of parallelization and
hardware can track the exponential scaling due to the increasing number of
qubits. Results of executing simple quantum circuits and Shor's factorization
algorithm on quantum computers containing up to 48 qubits are presented.
Comment: Substantially rewritten + new data. Published in Computer Physics Communications
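The memory figures are easy to reproduce (our arithmetic, based only on the abstract): a dense state vector of n qubits holds 2^n complex amplitudes, so 48 qubits at 16 bytes per double-precision complex amplitude require 4 PiB; the factor-of-eight reduction, i.e. 2 bytes per adaptively encoded amplitude, brings this down to 0.5 PiB.

```python
# Back-of-the-envelope memory requirement for a dense n-qubit state vector.
def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    """2^n amplitudes; 16 B each for double-precision complex."""
    return 2 ** n_qubits * bytes_per_amplitude

PiB = 2 ** 50
print(state_vector_bytes(48) / PiB)                          # 4.0 PiB
print(state_vector_bytes(48, bytes_per_amplitude=2) / PiB)   # 0.5 PiB
```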
Model-free inequality for data of Einstein-Podolsky-Rosen-Bohm experiments
We present a new inequality constraining correlations obtained when
performing Einstein-Podolsky-Rosen-Bohm experiments. The proof does not rely on
mathematical models that are imagined to have produced the data and is
therefore ``model-free''. The new inequality contains the model-free version of
the well-known Bell-CHSH inequality as a special case. A violation of the
latter implies that not all the data pairs in four data sets can be reshuffled
to create quadruples. This conclusion provides a new perspective on the
implications of the violation of Bell-type inequalities by experimental data.
Comment: Extended version of Annals of Physics, Volume 453, 169314, 2023 (https://doi.org/10.1016/j.aop.2023.169314)
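For orientation, the Bell-CHSH inequality contained as a special case reads, in its standard form with the four correlations E(a,b) estimated from the four data sets:

```latex
% Standard Bell-CHSH inequality for the four pairwise correlations.
\[
  \bigl| E(a,b) + E(a,b') + E(a',b) - E(a',b') \bigr| \le 2 .
\]
```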
Einstein–Podolsky–Rosen–Bohm experiments: A discrete data driven approach
We take the point of view that building a one-way bridge from experimental data to mathematical models instead of the other way around avoids running into controversies resulting from attaching meaning to the symbols used in the latter. In particular, we show that adopting this view offers new perspectives for constructing mathematical models for and interpreting the results of Einstein–Podolsky–Rosen–Bohm experiments. We first prove new Bell-type inequalities constraining the values of the four correlations obtained by performing Einstein–Podolsky–Rosen–Bohm experiments under four different conditions. The proof is “model-free” in the sense that it does not refer to any mathematical model that one imagines to have produced the data. The constraints only depend on the number of quadruples obtained by reshuffling the data in the four data sets without changing the values of the correlations. These new inequalities reduce to model-free versions of the well-known Bell-type inequalities if the maximum fraction of quadruples is equal to one. Being model-free, a violation of the latter by experimental data implies that not all the data in the four data sets can be reshuffled to form quadruples. Furthermore, being model-free inequalities, a violation of the latter by experimental data only implies that any mathematical model assumed to produce this data does not apply. Starting from the data obtained by performing Einstein–Podolsky–Rosen–Bohm experiments, we construct, instead of postulate, mathematical models that describe the main features of these data. The mathematical framework of plausible reasoning is applied to reproducible and robust data, yielding, without using any concept of quantum theory, the expression of the correlation for a system of two spin-1/2 objects in the singlet state. Next, we apply Bell's theorem to the Stern–Gerlach experiment and demonstrate how the requirement of separability leads to the quantum-theoretical description of the averages and correlations obtained from an Einstein–Podolsky–Rosen–Bohm experiment. We analyze the data of an Einstein–Podolsky–Rosen–Bohm experiment and debunk the popular statement that Einstein–Podolsky–Rosen–Bohm experiments have vindicated quantum theory. We argue that it is not quantum theory but the processing of data from EPRB experiments that should be questioned. We perform Einstein–Podolsky–Rosen–Bohm experiments on a superconducting quantum information processor to show that the event-by-event generation of discrete data can yield results that are in good agreement with the quantum-theoretical description of the Einstein–Podolsky–Rosen–Bohm thought experiment. We demonstrate that a stochastic and a subquantum model can also produce data that are in excellent agreement with the quantum-theoretical description of the Einstein–Podolsky–Rosen–Bohm thought experiment.
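The singlet correlation referred to above is E(a,b) = -a·b = -cos θ, with θ the angle between the analyzer settings. A minimal sketch (ours, not the authors' event-by-event simulators) generates discrete outcome pairs from the quantum-theoretical joint probability P(x,y|a,b) = (1 - x y cos θ)/4 and recovers this correlation:

```python
# Event-by-event sampling from the singlet's joint outcome distribution
# (our illustration): the estimated <xy> approaches -cos(theta).
import numpy as np

rng = np.random.default_rng(0)

def sample_correlation(theta, n_events=100_000):
    """Estimate <xy> from n_events independent outcome pairs (x, y)."""
    outcomes = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    probs = [(1 - x * y * np.cos(theta)) / 4 for x, y in outcomes]
    idx = rng.choice(4, size=n_events, p=probs)
    xy = np.array([x * y for x, y in outcomes])[idx]
    return xy.mean()

for theta in (0.0, np.pi / 4, np.pi / 2):
    print(f"theta = {theta:.2f}: estimated {sample_correlation(theta):+.3f}, "
          f"exact {-np.cos(theta):+.3f}")
```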