8 research outputs found

    On proving the robustness of algorithms for early fault-tolerant quantum computers

    The hope of the quantum computing field is that quantum architectures are able to scale up and realize fault-tolerant quantum computing. Due to engineering challenges, such "cheap" error correction may be decades away. In the meantime, we anticipate an era of "costly" error correction, or early fault-tolerant quantum computing. Costly error correction might warrant settling for error-prone quantum computations. This motivates the development of quantum algorithms that are robust to some degree of error, as well as methods to analyze their performance in the presence of error. We introduce a randomized algorithm for the task of phase estimation and analyze its performance under two simple noise models. In both cases the analysis leads to a noise threshold, below which arbitrarily high accuracy can be achieved by increasing the number of samples used in the algorithm. As an application of this general analysis, we compute the maximum ratio of the largest circuit depth to the dephasing scale such that performance guarantees hold. We calculate that the randomized algorithm can succeed with arbitrarily high probability as long as the required circuit depth is less than 0.916 times the dephasing scale. Comment: 27 pages, 3 figures, 1 table, 1 algorithm. To be submitted to QIP 202
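    The threshold behavior can be illustrated with a toy model (a simplification, not the paper's actual algorithm): suppose each shot returns ±1 with mean e^{-t/T} cos(θt), where t is circuit depth and T the dephasing scale. Rescaling the sample mean by e^{t/T} gives an unbiased estimate of cos(θt), but the rescaling inflates the variance by e^{2t/T}, which is why the ratio of depth to dephasing scale governs how many samples are needed. A minimal sketch, with all parameter values chosen purely for illustration:

    ```python
    import math
    import random

    def sample_outcome(theta, t, T, rng):
        # Toy dephasing model: P(+1) = (1 + e^{-t/T} cos(theta * t)) / 2
        p = 0.5 * (1.0 + math.exp(-t / T) * math.cos(theta * t))
        return 1 if rng.random() < p else -1

    def estimate_cos(theta, t, T, n_samples, rng):
        # Sample mean of +/-1 outcomes, rescaled by e^{t/T} to undo the
        # (known) damping; unbiased, but variance grows like e^{2t/T}.
        mean = sum(sample_outcome(theta, t, T, rng) for _ in range(n_samples)) / n_samples
        return mean * math.exp(t / T)

    rng = random.Random(0)
    theta, t, T = 0.7, 5.0, 10.0          # depth t is half the dephasing scale T
    est = estimate_cos(theta, t, T, 200_000, rng)
    true_val = math.cos(theta * t)
    ```

    As t/T grows, the e^{2t/T} variance blow-up means exponentially more samples are needed for the same accuracy, which is the qualitative origin of a depth-versus-dephasing threshold.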

    Early Fault-Tolerant Quantum Computing

    Over the past decade, research in quantum computing has tended to fall into one of two camps: near-term intermediate scale quantum (NISQ) and fault-tolerant quantum computing (FTQC). Yet, a growing body of work has been investigating how to use quantum computers in transition between these two eras. This transitional era envisions devices with tens of thousands to millions of physical qubits that can support fault-tolerant protocols, though they operate close to the fault-tolerant threshold. Two challenges emerge from this picture: how to model the performance of devices that are continually improving, and how to design algorithms that make the best use of these devices? In this work we develop a model for the performance of early fault-tolerant quantum computing (EFTQC) architectures and use this model to elucidate the regimes in which algorithms suited to such architectures are advantageous. As a concrete example, we show that, for the canonical task of phase estimation, in a regime of moderate scalability and using just over one million physical qubits, the "reach" of the quantum computer can be extended (compared to the standard approach) from 90-qubit instances to over 130-qubit instances using a simple early fault-tolerant quantum algorithm, which reduces the number of operations per circuit by a factor of 100 and increases the number of circuit repetitions by a factor of 10,000. This clarifies the role that such algorithms might play in the era of limited-scalability quantum computing. Comment: 20 pages, 8 figures with desmos links, plus appendix
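    The stated tradeoff (100x shallower circuits, 10,000x more repetitions) can be made concrete with some back-of-the-envelope arithmetic. The baseline depth, shot count, and depth budget below are hypothetical placeholders, not numbers from the paper; only the two scaling factors come from the abstract:

    ```python
    # Hypothetical baseline resource counts for textbook phase estimation
    base_depth = 10**8      # operations per circuit (illustrative)
    base_shots = 10**2      # circuit repetitions (illustrative)

    # EFT-style algorithm per the abstract: 100x shallower circuits,
    # paid for with 10,000x more repetitions
    eft_depth = base_depth // 100
    eft_shots = base_shots * 10_000

    total_base = base_depth * base_shots
    total_eft = eft_depth * eft_shots     # 100x more total operations

    # The point of trading depth for shots: only the shallow circuit fits
    # under an assumed per-circuit depth budget set by the logical error rate.
    max_depth = 10**7
    feasible = [name for name, d in (("standard", base_depth), ("EFT", eft_depth))
                if d <= max_depth]
    ```

    The EFT variant costs 100x more total operations, but it is the only one that fits under the depth budget — extending the device's "reach" exactly as the abstract describes.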

    Generation of High-Resolution Handwritten Digits with an Ion-Trap Quantum Computer

    Generating high-quality data (e.g. images or video) is one of the most exciting and challenging frontiers in unsupervised machine learning. Utilizing quantum computers in such tasks to potentially enhance conventional machine learning algorithms has emerged as a promising application, but poses significant challenges due to the limited number of qubits and the level of gate noise in available devices. In this work, we provide the first practical and experimental implementation of a quantum-classical generative algorithm capable of generating high-resolution images of handwritten digits with state-of-the-art gate-based quantum computers. In our quantum-assisted machine learning framework, we implement a quantum-circuit-based generative model to learn and sample the prior distribution of a Generative Adversarial Network. We introduce a multi-basis technique that leverages the unique possibility of measuring quantum states in different bases, hence enhancing the expressivity of the prior distribution. We train this hybrid algorithm on an ion-trap device based on 171Yb+ ion qubits to generate high-quality images and quantitatively outperform comparable classical Generative Adversarial Networks trained on the popular MNIST data set for handwritten digits. Comment: 10 pages, 8 figures (more details and discussion in main text for clarity)
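    The intuition behind the multi-basis technique can be seen in a single-qubit toy example (a drastic simplification of the paper's framework): two states can produce identical Z-basis measurement statistics yet be cleanly separated by an additional X-basis measurement, so sampling in multiple bases yields a strictly more expressive distribution from the same circuit.

    ```python
    import math

    def z_prob0(a):
        # P(outcome 0) when |psi> = cos(a)|0> + sin(a)|1> is measured in Z
        return math.cos(a) ** 2

    def x_prob_plus(a):
        # P(outcome +) when the same state is measured in the X basis
        return 0.5 * (1.0 + math.sin(2 * a))

    # Two states whose Z-basis statistics are identical...
    a1, a2 = 0.6, -0.6
    z_gap = abs(z_prob0(a1) - z_prob0(a2))       # ~0: Z alone cannot tell them apart
    # ...but whose X-basis statistics differ sharply
    x_gap = abs(x_prob_plus(a1) - x_prob_plus(a2))
    ```

    Measuring in both bases thus doubles the observable statistics per state preparation, which is the expressivity gain the abstract refers to.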

    Analyzing the Performance of Variational Quantum Factoring on a Superconducting Quantum Processor

    In the near term, hybrid quantum-classical algorithms hold great potential for outperforming classical approaches. Understanding how these two computing paradigms work in tandem is critical for identifying areas where such hybrid algorithms could provide a quantum advantage. In this work, we study a QAOA-based quantum optimization algorithm by implementing the Variational Quantum Factoring (VQF) algorithm. We execute experimental demonstrations using a superconducting quantum processor and investigate the trade-off between quantum resources (number of qubits and circuit depth) and the probability that a given biprime is successfully factored. In our experiments, the integers 1099551473989, 3127, and 6557 are factored with 3, 4, and 5 qubits, respectively, using a QAOA ansatz with up to 8 layers, and we identify the optimal number of circuit layers for a given instance to maximize success probability. Furthermore, we demonstrate the impact of different noise sources on the performance of QAOA and reveal the coherent error caused by the residual ZZ-coupling between qubits as a dominant source of error in the superconducting quantum processor.
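    VQF starts by recasting factoring as a binary optimization problem: write the candidate factors p and q in bits and minimize a cost that vanishes exactly when p·q = N (the actual VQF pipeline also applies classical preprocessing and carry-variable reduction, which this sketch omits). A classical brute force over the naive cost function for the 3127 instance mentioned above:

    ```python
    import itertools

    def decode(bits):
        # Interpret a tuple of bits, least-significant first, as an integer
        return sum(b << i for i, b in enumerate(bits))

    def cost(N, p_bits, q_bits):
        # Zero exactly when the decoded factors multiply to N
        return (N - decode(p_bits) * decode(q_bits)) ** 2

    # Exhaustively search the cost landscape for N = 3127 with 6-bit factors
    N = 3127
    best = min(
        ((pb, qb)
         for pb in itertools.product((0, 1), repeat=6)
         for qb in itertools.product((0, 1), repeat=6)),
        key=lambda s: cost(N, *s),
    )
    p, q = decode(best[0]), decode(best[1])
    ```

    QAOA replaces this exponential brute force with a variational search over a cost Hamiltonian built from the same objective; the experiments above probe how many ansatz layers that search needs in practice.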

    Noise tailoring for robust amplitude estimation

    A universal fault-tolerant quantum computer holds the promise to speed up computational problems that are otherwise intractable on classical computers; however, for the next decade or so, our access is restricted to noisy intermediate-scale quantum (NISQ) computers and, perhaps, early fault-tolerant (EFT) quantum computers. This motivates the development of many near-term quantum algorithms including robust amplitude estimation (RAE), which is a quantum-enhanced algorithm for estimating expectation values. One obstacle to using RAE has been the difficulty of incorporating realistic error models into the algorithm. So far, the impact of device noise on RAE has been incorporated into one of its subroutines as an exponential decay model, which is unrealistic for NISQ devices and perhaps for EFT devices; this hinders the performance of RAE. Rather than trying to explicitly model realistic noise effects, which may be infeasible, we circumvent this obstacle by tailoring device noise using randomized compiling to generate an effective noise model whose impact on RAE closely resembles that of the exponential decay model. Using noisy simulations, we show that our noise-tailored RAE algorithm is able to regain improvements in both bias and precision that are expected for RAE. Additionally, on IBM's quantum computer ibmq_belem our algorithm demonstrates an advantage over the standard estimation technique in reducing bias. Thus, our work extends the feasibility of RAE on NISQ computers, consequently bringing us one step closer towards achieving quantum advantage using these devices.
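    The exponential decay model mentioned above can be sketched concretely: with L amplification layers, the outcome probability is assumed to follow P(1) = (1 − e^{−λ(2L+1)} cos((2L+1)θ))/2, and θ is recovered by maximum likelihood over data collected at several depths. The following is a toy reconstruction under that assumed likelihood (a grid-search MLE, not the paper's estimator; all parameter values are illustrative):

    ```python
    import math
    import random

    def p1(theta, L, lam):
        # Exponential-decay likelihood: depth-(2L+1) signal damped by e^{-lam*(2L+1)}
        d = 2 * L + 1
        return 0.5 * (1.0 - math.exp(-lam * d) * math.cos(d * theta))

    def simulate(theta, lam, layers, shots, rng):
        # Draw binomial outcome counts at each amplification depth
        return [(L, sum(rng.random() < p1(theta, L, lam) for _ in range(shots)))
                for L in layers]

    def mle(data, lam, shots, grid=2000):
        # Grid-search maximum likelihood for theta in (0, pi/2)
        best_t, best_ll = None, -float("inf")
        for k in range(1, grid):
            t = math.pi * k / (2 * grid)
            ll = 0.0
            for L, ones in data:
                p = min(max(p1(t, L, lam), 1e-9), 1.0 - 1e-9)
                ll += ones * math.log(p) + (shots - ones) * math.log(1.0 - p)
            if ll > best_ll:
                best_t, best_ll = t, ll
        return best_t

    rng = random.Random(1)
    theta_true, lam, shots = 0.3, 0.02, 2000
    data = simulate(theta_true, lam, layers=[0, 1, 2, 4, 8], shots=shots, rng=rng)
    theta_hat = mle(data, lam, shots)
    ```

    Randomized compiling matters precisely because it pushes the device's actual noise toward this simple decay form, so an estimator built on this likelihood stays approximately unbiased.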

    Quantum algorithm for credit valuation adjustments

    Quantum mechanics is well known to accelerate statistical sampling processes over classical techniques. In quantitative finance, statistical sampling arises broadly across many use cases. Here we focus on one such use case, credit valuation adjustment (CVA), and identify opportunities and challenges towards quantum advantage for practical instances. To build a NISQ-friendly quantum circuit able to solve this problem, we draw on various heuristics that indicate the potential for significant improvement over well-known techniques such as reversible logical circuit synthesis. In minimizing the resource requirements for amplitude amplification while maximizing the speedup gained from the quantum coherence of a noisy device, we adopt a recently developed Bayesian variant of quantum amplitude estimation using engineered likelihood functions. We perform numerical analyses to characterize the prospect of quantum speedup in concrete CVA instances over classical Monte Carlo simulations.
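    The classical baseline that a quantum CVA algorithm would aim to beat is Monte Carlo estimation of expected exposure. A common textbook approximation (not necessarily the formulation used in this paper) is CVA ≈ (1 − R) Σ_i D(t_i)·EE(t_i)·PD(t_{i−1}, t_i), with exposure simulated under geometric Brownian motion. A minimal sketch with illustrative parameters:

    ```python
    import math
    import random

    def cva_monte_carlo(paths, steps, T, s0, sigma, r, hazard, recovery, rng):
        """Classical Monte Carlo CVA baseline (textbook approximation):
        CVA ~ (1 - R) * sum_i D(t_i) * EE(t_i) * PD(t_{i-1}, t_i),
        with EE the expected positive exposure of a forward-like position
        struck at s0, simulated under geometric Brownian motion."""
        dt = T / steps
        ee = [0.0] * steps
        for _ in range(paths):
            s = s0
            for i in range(steps):
                z = rng.gauss(0.0, 1.0)
                s *= math.exp((r - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
                ee[i] += max(s - s0, 0.0)      # positive exposure on this path
        cva = 0.0
        for i in range(steps):
            t0, t1 = i * dt, (i + 1) * dt
            pd = math.exp(-hazard * t0) - math.exp(-hazard * t1)  # default prob in (t0, t1]
            cva += math.exp(-r * t1) * (ee[i] / paths) * pd
        return (1.0 - recovery) * cva

    rng = random.Random(7)
    value = cva_monte_carlo(paths=20_000, steps=12, T=1.0, s0=100.0,
                            sigma=0.2, r=0.01, hazard=0.02, recovery=0.4, rng=rng)
    ```

    Monte Carlo error shrinks as O(1/√paths); amplitude estimation targets O(1/queries) convergence on the same expectation, which is the speedup at stake for concrete CVA instances.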