83 research outputs found

    Large-Scale Simulation of Shor's Quantum Factoring Algorithm

    Shor's factoring algorithm is one of the most anticipated applications of quantum computing. However, the limited capabilities of today's quantum computers only permit a study of Shor's algorithm for very small numbers. Here we show how large GPU-based supercomputers can be used to assess the performance of Shor's algorithm for numbers that are out of reach for current and near-term quantum hardware. First, we study Shor's original factoring algorithm. While theoretical bounds suggest success probabilities of only 3-4%, we find average success probabilities above 50%, due to a high frequency of "lucky" cases, defined as successful factorizations despite unmet sufficient conditions. Second, we investigate a powerful post-processing procedure, by which the success probability can be brought arbitrarily close to one, with only a single run of Shor's quantum algorithm. Finally, we study the effectiveness of this post-processing procedure in the presence of typical errors in quantum processing hardware. We find that the quantum factoring algorithm exhibits a particular form of universality and resilience against the different types of errors. The largest semiprime that we have factored by executing Shor's algorithm on a GPU-based supercomputer, without exploiting prior knowledge of the solution, is 549755813701 = 712321 * 771781. We put forward the challenge of factoring, without oversimplification, a non-trivial semiprime larger than this number on any quantum computing device.
    Comment: differs from the published version in formatting and style; open source code available at https://jugit.fz-juelich.de/qip/shorgp
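    For readers unfamiliar with the structure of the algorithm, here is a minimal, purely classical sketch of the steps surrounding the quantum subroutine: the quantum part returns the order r of a randomly chosen base a modulo N, and the classical post-processing tries to split N via greatest common divisors. The brute-force order finding below is only a stand-in for the quantum part and works only for tiny N; it is not the paper's GPU simulation.

```python
from math import gcd

def order(a, N):
    """Brute-force order finding; stands in for the quantum subroutine of Shor's algorithm."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def split(a, r, N):
    """Classical post-processing: try to extract a factor of N from the order r of a mod N."""
    if r % 2 == 1:
        return None                      # odd order: the standard sufficient condition fails
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None                      # a^(r/2) == -1 (mod N): another failure case
    for g in (gcd(y - 1, N), gcd(y + 1, N)):
        if 1 < g < N:
            return g, N // g
    return None

N = 3 * 5                                # tiny demo semiprime; the paper factors 549755813701
for a in range(2, N):
    if gcd(a, N) == 1:
        factors = split(a, order(a, N), N)
        if factors:
            print(f"a = {a}: {N} = {factors[0]} * {factors[1]}")
```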

    Random State Technology

    We review and extend, in a self-contained way, the mathematical foundations of numerical simulation methods that are based on the use of random states. The power and versatility of this simulation technology are illustrated by calculations of physically relevant properties such as the density of states of large single-particle systems, the specific heat, current-current correlations, density-density correlations, and electron spin resonance spectra of many-body systems. We explore a new field of applications of the random state technology by showing that it can be used to analyze numerical simulations and experiments that aim to realize quantum supremacy on a noisy intermediate-scale quantum processor. Additionally, we show that concepts of the random state technology prove useful in quantum information theory.
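    The core idea can be condensed into a few lines: traces of large matrices, and hence quantities such as the density of states or thermal expectation values, are estimated from matrix-vector products with a handful of random vectors instead of full diagonalization. The sketch below is an illustrative stochastic trace estimator, not the code used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 512                                  # Hilbert-space dimension of a toy "Hamiltonian"
Hm = rng.standard_normal((D, D))
Hm = (Hm + Hm.T) / 2                     # random symmetric matrix

def random_trace_moment(H, k, samples=30):
    """Estimate Tr H^k using only matrix-vector products with random +/-1 vectors:
    E[z^T H^k z] = Tr H^k for i.i.d. zero-mean, unit-variance entries of z."""
    total = 0.0
    for _ in range(samples):
        z = rng.choice([-1.0, 1.0], size=H.shape[0])
        v = z.copy()
        for _ in range(k):
            v = H @ v                    # never forms H^k explicitly
        total += z @ v
    return total / samples

k = 4
print("exact Tr H^4:    ", np.trace(np.linalg.matrix_power(Hm, k)))
print("random estimate: ", random_trace_moment(Hm, k))
```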

    Benchmarking gate-based quantum computers

    With the advent of public access to small gate-based quantum processors, it becomes necessary to develop a benchmarking methodology such that independent researchers can validate the operation of these processors. We explore the usefulness of a number of simple quantum circuits as benchmarks for gate-based quantum computing devices and show that circuits performing identity operations are simple, scalable, and sensitive to gate errors, and are therefore well suited for this task. We illustrate the procedure by presenting benchmark results for the IBM Quantum Experience, a cloud-based platform for gate-based quantum computing.
    Comment: Accepted for publication in Computer Physics Communications
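    A minimal single-qubit illustration of why identity circuits make sensitive benchmarks: an ideal circuit of repeated X-X pairs returns the qubit to |0> with certainty, so any deviation of the return probability from one directly exposes gate errors. The over-rotation error model below is an assumption chosen for illustration, not the error model or the circuits used in the paper.

```python
import numpy as np

def rx(theta):
    """Single-qubit rotation about the x axis by angle theta."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def identity_benchmark(pairs, eps):
    """Apply `pairs` repetitions of X X (ideally the identity), where every X gate is
    over-rotated by eps, and return the probability of finding the qubit back in |0>."""
    psi = np.array([1, 0], dtype=complex)
    gate = rx(np.pi + eps)               # imperfect X gate
    for _ in range(2 * pairs):
        psi = gate @ psi
    return abs(psi[0]) ** 2

for pairs in (1, 5, 10, 50):
    print(pairs, identity_benchmark(pairs, eps=0.02))   # deviation from 1 grows with circuit depth
```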

    Hybrid Quantum Classical Simulations

    We report on two major hybrid applications of quantum computing, namely, the quantum approximate optimisation algorithm (QAOA) and the variational quantum eigensolver (VQE). Both are hybrid quantum-classical algorithms, as they require incremental communication between a classical central processing unit and a quantum processing unit to solve a problem. We find that the QAOA scales much better to larger problems than random guessing, but requires significant computational resources. In contrast, a coarsely discretised version of quantum annealing called approximate quantum annealing (AQA) can reach the same promising scaling behaviour using far fewer computational resources. For the VQE, we find reasonable results in approximating the ground-state energy of the Heisenberg model when suitable choices of initial states and parameters are used. Our design and implementation of a general quasi-dynamical evolution further improves these results.
    Comment: This article is a book contribution. The book is freely available at http://hdl.handle.net/2128/3184
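    To make the idea of approximate quantum annealing (AQA) concrete, the sketch below evolves a toy three-spin Ising problem with a coarse, first-order Trotter discretisation of a linear annealing schedule; this is a QAOA-type circuit whose angles are fixed by the schedule rather than optimised. The problem instance and the schedule are illustrative assumptions, not those studied in the chapter.

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

# Toy three-spin Ising cost Hamiltonian H_C (diagonal) and transverse-field mixer H_M.
I2 = np.eye(2); Z = np.diag([1.0, -1.0]); X = np.array([[0.0, 1.0], [1.0, 0.0]])
def op(single, site, n=3):
    return reduce(np.kron, [single if i == site else I2 for i in range(n)])

J = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): -1.0}         # toy couplings
HC = sum(Jij * op(Z, i) @ op(Z, j) for (i, j), Jij in J.items())
HM = -sum(op(X, i) for i in range(3))

def aqa(steps, T):
    """Approximate quantum annealing: coarse first-order Trotter steps of the schedule s(t) = t/T,
    alternating short evolutions under the mixer and the cost Hamiltonian."""
    psi = np.ones(8) / np.sqrt(8)                     # ground state of H_M: uniform superposition
    dt = T / steps
    for k in range(steps):
        s = (k + 0.5) / steps
        psi = expm(-1j * dt * s * HC) @ (expm(-1j * dt * (1 - s) * HM) @ psi)
    return psi

probs = np.abs(aqa(steps=20, T=10.0)) ** 2
gs = np.flatnonzero(np.isclose(np.diag(HC), np.diag(HC).min()))   # ground states of H_C
print("success probability (weight on ground states of H_C):", probs[gs].sum())
```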

    Benchmarking Advantage and D-Wave 2000Q quantum annealers with exact cover problems

    We benchmark the quantum processing units of the largest quantum annealers to date, the 5000+ qubit quantum annealer Advantage and its 2000+ qubit predecessor D-Wave 2000Q, using tail assignment and exact cover problems from aircraft scheduling scenarios. The benchmark set contains small, intermediate, and large problems with both sparsely connected and almost fully connected instances. We find that Advantage outperforms D-Wave 2000Q for almost all problems, with a notable increase in success rate and solvable problem size. In particular, Advantage is also able to solve the largest problems, with 120 logical qubits, which D-Wave 2000Q can no longer solve. Furthermore, problems that can still be solved by D-Wave 2000Q are solved faster by Advantage. We find, however, that D-Wave 2000Q can achieve better success rates for sparsely connected problems that do not require the many new couplers present on Advantage, so improving the connectivity of a quantum annealer does not per se improve its performance.
    Comment: new experiments to test the conjecture about unused couplers (appendix B)
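    For context, exact cover asks for a selection of subsets that covers every element of a universe exactly once; the standard QUBO encoding used on quantum annealers penalises every element that is covered zero times or more than once. The toy instance below is an illustrative assumption, far smaller than the aircraft-scheduling instances benchmarked in the paper.

```python
import itertools

# Toy exact-cover instance: universe {0,1,2,3} and candidate subsets
# (think of flight routes in a tail-assignment problem).
subsets = [{0, 1}, {2, 3}, {1, 2}, {0, 3}, {0, 1, 2}]
universe = set().union(*subsets)

def exact_cover_energy(x, subsets, universe):
    """QUBO cost H = sum_u (1 - sum_{j: u in S_j} x_j)^2; zero exactly when x is an exact cover."""
    return sum((1 - sum(x[j] for j, S in enumerate(subsets) if u in S)) ** 2
               for u in universe)

best = min(itertools.product([0, 1], repeat=len(subsets)),
           key=lambda x: exact_cover_energy(x, subsets, universe))
print("selected subsets:", [S for j, S in enumerate(subsets) if best[j]],
      "energy:", exact_cover_energy(best, subsets, universe))
```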

    Model-free inequality for data of Einstein-Podolsky-Rosen-Bohm experiments

    We present a new inequality constraining correlations obtained when performing Einstein-Podolsky-Rosen-Bohm experiments. The proof does not rely on mathematical models that are imagined to have produced the data and is therefore "model-free". The new inequality contains the model-free version of the well-known Bell-CHSH inequality as a special case. A violation of the latter implies that not all the data pairs in four data sets can be reshuffled to create quadruples. This conclusion provides a new perspective on the implications of the violation of Bell-type inequalities by experimental data.
    Comment: Extended version of Annals of Physics, Volume 453, 169314, 2023 (https://doi.org/10.1016/j.aop.2023.169314)
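    A small numerical illustration of the underlying point (a sketch under stated assumptions, not the paper's proof): if the four data sets arise by reshuffling a single set of quadruples, the Bell-CHSH combination of the four correlations can never exceed 2 in absolute value, whereas the quantum-theoretical singlet-state correlations reach 2*sqrt(2).

```python
import numpy as np

rng = np.random.default_rng(1)

# Data produced as quadruples: each event carries all four values (A1, A2, B1, B2) = +/-1.
n = 100_000
A1, A2, B1, B2 = rng.choice([-1, 1], size=(4, n))
S = np.mean(A1 * B1) - np.mean(A1 * B2) + np.mean(A2 * B1) + np.mean(A2 * B2)
print("quadruple data:  |S| =", abs(S), "(always <= 2 for data that form quadruples)")

# Quantum-theoretical singlet-state correlations E(a, b) = -cos(a - b) at the standard
# CHSH angles exceed that bound.
a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
E = lambda a, b: -np.cos(a - b)
print("singlet state:   |S| =", abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)))  # 2*sqrt(2)
```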

    Einstein-Podolsky-Rosen-Bohm experiments: a discrete data driven approach

    We take the point of view that building a one-way bridge from experimental data to mathematical models instead of the other way around avoids running into controversies resulting from attaching meaning to the symbols used in the latter. In particular, we show that adopting this view offers new perspectives for constructing mathematical models for, and interpreting the results of, Einstein-Podolsky-Rosen-Bohm experiments. We first prove new Bell-type inequalities constraining the values of the four correlations obtained by performing Einstein-Podolsky-Rosen-Bohm experiments under four different conditions. The proof is "model-free" in the sense that it does not refer to any mathematical model that one imagines to have produced the data. The constraints only depend on the number of quadruples obtained by reshuffling the data in the four data sets without changing the values of the correlations. These new inequalities reduce to model-free versions of the well-known Bell-type inequalities if the maximum fraction of quadruples is equal to one. Being model-free, a violation of the latter by experimental data implies that not all the data in the four data sets can be reshuffled to form quadruples. Furthermore, being model-free inequalities, a violation of the latter by experimental data only implies that any mathematical model assumed to produce this data does not apply. Starting from the data obtained by performing Einstein-Podolsky-Rosen-Bohm experiments, we construct, instead of postulate, mathematical models that describe the main features of these data. The mathematical framework of plausible reasoning is applied to reproducible and robust data, yielding, without using any concept of quantum theory, the expression of the correlation for a system of two spin-1/2 objects in the singlet state. […]

    Massively parallel quantum computer simulator, eleven years later

    A revised version of the massively parallel simulator of a universal quantum computer, described in this journal eleven years ago, is used to benchmark various gate-based quantum algorithms on some of the most powerful supercomputers that exist today. Adaptive encoding of the wave function reduces the memory requirement by a factor of eight, making it possible to simulate universal quantum computers with up to 48 qubits on the Sunway TaihuLight and on the K computer. The simulator exhibits close-to-ideal weak-scaling behavior on the Sunway TaihuLight, on the K computer, on an IBM Blue Gene/Q, and on Intel Xeon-based clusters, implying that the combination of parallelization and hardware can track the exponential scaling due to the increasing number of qubits. Results of executing simple quantum circuits and Shor's factorization algorithm on quantum computers containing up to 48 qubits are presented.
    Comment: Substantially rewritten + new data. Published in Computer Physics Communications
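    The memory figures implied by the abstract can be checked with a one-line calculation: a full double-precision complex state vector needs 16 bytes per amplitude, so the quoted factor-of-eight reduction corresponds to roughly 2 bytes per amplitude (the exact encoding is described in the paper; the 2-byte figure here is inferred arithmetic, not a quotation).

```python
# Memory needed to store the state vector of an n-qubit universal quantum computer.
for n in (42, 45, 48):
    amplitudes = 2 ** n
    full = amplitudes * 16                 # 16 bytes: double-precision real + imaginary part
    adaptive = full / 8                    # factor-of-eight reduction quoted in the abstract
    print(f"{n} qubits: {full / 2**40:7.0f} TiB full precision, {adaptive / 2**40:6.0f} TiB adaptive")
```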

    Einstein–Podolsky–Rosen–Bohm experiments: A discrete data driven approach

    We take the point of view that building a one-way bridge from experimental data to mathematical models instead of the other way around avoids running into controversies resulting from attaching meaning to the symbols used in the latter. In particular, we show that adopting this view offers new perspectives for constructing mathematical models for, and interpreting the results of, Einstein–Podolsky–Rosen–Bohm experiments. We first prove new Bell-type inequalities constraining the values of the four correlations obtained by performing Einstein–Podolsky–Rosen–Bohm experiments under four different conditions. The proof is “model-free” in the sense that it does not refer to any mathematical model that one imagines to have produced the data. The constraints only depend on the number of quadruples obtained by reshuffling the data in the four data sets without changing the values of the correlations. These new inequalities reduce to model-free versions of the well-known Bell-type inequalities if the maximum fraction of quadruples is equal to one. Being model-free, a violation of the latter by experimental data implies that not all the data in the four data sets can be reshuffled to form quadruples. Furthermore, being model-free inequalities, a violation of the latter by experimental data only implies that any mathematical model assumed to produce this data does not apply. Starting from the data obtained by performing Einstein–Podolsky–Rosen–Bohm experiments, we construct, instead of postulate, mathematical models that describe the main features of these data. The mathematical framework of plausible reasoning is applied to reproducible and robust data, yielding, without using any concept of quantum theory, the expression of the correlation for a system of two spin-1/2 objects in the singlet state. Next, we apply Bell's theorem to the Stern–Gerlach experiment and demonstrate how the requirement of separability leads to the quantum-theoretical description of the averages and correlations obtained from an Einstein–Podolsky–Rosen–Bohm experiment. We analyze the data of an Einstein–Podolsky–Rosen–Bohm experiment and debunk the popular statement that Einstein–Podolsky–Rosen–Bohm experiments have vindicated quantum theory. We argue that it is not quantum theory but the processing of data from EPRB experiments that should be questioned. We perform Einstein–Podolsky–Rosen–Bohm experiments on a superconducting quantum information processor to show that the event-by-event generation of discrete data can yield results that are in good agreement with the quantum-theoretical description of the Einstein–Podolsky–Rosen–Bohm thought experiment. We demonstrate that a stochastic and a subquantum model can also produce data that are in excellent agreement with the quantum-theoretical description of the Einstein–Podolsky–Rosen–Bohm thought experiment.
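    To illustrate the last point, the following sketch generates discrete, event-by-event pairs of +/-1 outcomes whose statistics reproduce the singlet-state correlation E(a,b) = -cos(a-b). It samples directly from the quantum-theoretical joint probabilities and is therefore only an illustration of event-by-event data generation, not the stochastic or subquantum models studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def eprb_events(a, b, n=200_000):
    """Generate n event-by-event outcome pairs (+/-1, +/-1) with the singlet-state joint
    probabilities P(x, y | a, b) = (1 - x*y*cos(a - b)) / 4 and return the correlation."""
    p_same = (1 - np.cos(a - b)) / 2          # probability that the two outcomes are equal
    x = rng.choice([-1, 1], size=n)           # Alice's outcome: +/-1 with probability 1/2 each
    same = rng.random(n) < p_same
    y = np.where(same, x, -x)                 # Bob's outcome, correlated or anticorrelated
    return np.mean(x * y)

for theta in (0.0, np.pi / 4, np.pi / 2, np.pi):
    print(f"a - b = {theta:.2f}:  E = {eprb_events(0.0, -theta):+.3f}   -cos = {-np.cos(theta):+.3f}")
```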

    Unbalanced penalization: A new approach to encode inequality constraints of combinatorial problems for quantum optimization algorithms

    Solving combinatorial optimization problems of the kind that can be codified by quadratic unconstrained binary optimization (QUBO) is a promising application of quantum computation. Some problems of this class suitable for practical applications, such as the traveling salesman problem (TSP), the bin packing problem (BPP), or the knapsack problem (KP), have inequality constraints that require a particular cost function encoding. The common approach is the use of slack variables to represent the inequality constraints in the cost function. However, the use of slack variables considerably increases the number of qubits and operations required to solve these problems using quantum devices. In this work, we present an alternative method that does not require extra slack variables and consists of using an unbalanced penalization function to represent the inequality constraints in the QUBO. This function is characterized by a larger penalization when the inequality constraint is not satisfied than when it is. We tested our approach for the TSP, the BPP, and the KP. For all of them, we are able to encode the optimal solution in the vicinity of the cost Hamiltonian ground state. This new approach can be used to solve combinatorial problems with inequality constraints with a reduced number of resources compared to the slack variables approach, using quantum annealing or variational quantum algorithms.
    Comment: 11 pages, 12 figures
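    The following toy sketch contrasts the idea with the slack-variable approach for a small knapsack instance: the inequality sum(weights*x) <= capacity is folded into the cost as -lam1*h + lam2*h^2 with h = capacity - sum(weights*x), which penalises violations (h < 0) far more strongly than feasible slack (h >= 0) and needs no extra slack qubits. The instance and the penalty coefficients are illustrative assumptions, not the values used in the paper.

```python
import itertools

# Toy knapsack: maximise sum(values * x) subject to sum(weights * x) <= capacity.
values   = [6, 5, 8, 4]
weights  = [3, 2, 4, 3]
capacity = 6

# Illustrative penalty coefficients; in practice they are tuned per problem.
lam1, lam2 = 2.0, 2.0

def cost(x):
    """Unbalanced-penalty cost: objective plus -lam1*h + lam2*h^2, where
    h = capacity - total weight is >= 0 exactly when x is feasible."""
    value = sum(v * xi for v, xi in zip(values, x))
    h = capacity - sum(w * xi for w, xi in zip(weights, x))
    return -value - lam1 * h + lam2 * h * h

# A slack-variable encoding would instead need ceil(log2(capacity + 1)) = 3 extra binary variables.
ranking = sorted(itertools.product([0, 1], repeat=len(values)), key=cost)
for x in ranking[:4]:
    w = sum(wi * xi for wi, xi in zip(weights, x))
    print(x, "weight", w, "feasible" if w <= capacity else "infeasible", "cost", round(cost(x), 2))
```

    In this toy example the lowest-cost bit string is the true feasible optimum, illustrating the abstract's claim that the optimal solution lies in the vicinity of the cost Hamiltonian ground state.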