
    On connectivity-dependent resource requirements for digital quantum simulation of d-level particles

    A primary objective of quantum computation is to efficiently simulate quantum physics. Scientifically and technologically important quantum Hamiltonians include those with spin-s, vibrational, photonic, and other bosonic degrees of freedom, i.e. problems composed of or approximated by d-level particles (qudits). Recently, several methods for encoding these systems into a set of qubits have been introduced, where each encoding's efficiency was studied in terms of qubit and gate counts. Here, we build on previous results by including the effects of hardware connectivity. To study the number of SWAP gates required to Trotterize commonly used quantum operators, we use both analytical arguments and automatic tools that optimize the schedule in multiple stages. We study the unary (or one-hot), Gray, standard binary, and block unary encodings, with three connectivities: linear array, ladder array, and square grid. Among other trends, we find that while the ladder array leads to substantial efficiencies over the linear array, the advantage of the square grid over the ladder array is less pronounced. These results are applicable in hardware co-design and in choosing efficient qudit encodings for a given set of near-term quantum hardware. Additionally, this work may be relevant to the scheduling of other quantum algorithms for which matrix exponentiation is a subroutine.
    Comment: Accepted to QCE20 (IEEE Quantum Week). Corrected erroneous circuits in Figure
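    The encodings compared above trade qubit count against gate locality. As a rough orientation, here is a minimal sketch of the per-qudit qubit counts; the block-unary formula assumes blocks of g levels plus an empty label, i.e. ceil(d/g) * ceil(log2(g+1)) qubits, following the usual definition. This is our illustration, not the paper's code:

        from math import ceil, log2

        def qubits_per_qudit(d: int, encoding: str, g: int = 3) -> int:
            """Qubits needed to encode one d-level particle, per encoding."""
            if encoding == "unary":
                # One-hot: one qubit per level.
                return d
            if encoding in ("gray", "std_binary"):
                # Gray and standard binary both use dense ceil(log2 d)-bit labels.
                return ceil(log2(d))
            if encoding == "block_unary":
                # ceil(d/g) one-hot blocks, each labeled with ceil(log2(g+1)) qubits.
                return ceil(d / g) * ceil(log2(g + 1))
            raise ValueError(f"unknown encoding: {encoding!r}")

        for enc in ("unary", "gray", "std_binary", "block_unary"):
            print(f"{enc:>12}: {qubits_per_qudit(8, enc)} qubits for d = 8")

    For d = 8 this gives 8 (unary), 3 (Gray or standard binary), and 6 (block unary with g = 3) qubits; the SWAP overhead studied in the paper then depends on how these registers are laid out on the linear, ladder, or grid connectivity.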

    Noise-Adaptive Compiler Mappings for Noisy Intermediate-Scale Quantum Computers

    A massive gap exists between current quantum computing (QC) prototypes and the size and scale required for many proposed QC algorithms. Current QC implementations are prone to noise and variability, which affect their reliability, and yet with fewer than 80 quantum bits (qubits) in total, they are too resource-constrained to implement error correction. The term Noisy Intermediate-Scale Quantum (NISQ) refers to these current and near-term systems of 1000 qubits or fewer. Given NISQ's severe resource constraints, low reliability, and high variability in physical characteristics such as coherence time or error rates, it is of pressing importance to map computations onto them in ways that use resources efficiently and maximize the likelihood of successful runs. This paper proposes and evaluates backend compiler approaches to map and optimize high-level QC programs to execute with high reliability on NISQ systems with diverse hardware characteristics. Our techniques all start from an LLVM intermediate representation of the quantum program (such as would be generated from high-level QC languages like Scaffold) and generate QC executables runnable on the IBM Q public QC machine. We then use this framework to implement and evaluate several optimal and heuristic mapping methods. These methods vary in how they account for the availability of dynamic machine calibration data, the relative importance of various noise parameters, the different possible routing strategies, and the relative importance of compile-time scalability versus runtime success. Using real-system measurements, we show that fine-grained spatial and temporal variations in hardware parameters can be exploited to obtain an average 2.9x (and up to 18x) improvement in program success rate over the industry-standard IBM Qiskit compiler.
    Comment: To appear in ASPLOS'19
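    The core idea, consuming fresh calibration data at mapping time, can be illustrated with a toy mapper that exhaustively searches the coupling graph for the chain of physical qubits with the best end-to-end success probability. This is our sketch only; the paper's compiler is far richer (SMT-based optimal variants, routing policies, temporal recalibration), and every error rate below is a made-up illustrative number:

        def reliability(chain, ro_err, cx_err):
            """Success probability of a chain: product of per-qubit readout
            fidelities and per-edge two-qubit gate fidelities."""
            p = 1.0
            for q in chain:
                p *= 1.0 - ro_err[q]
            for a, b in zip(chain, chain[1:]):
                p *= 1.0 - cx_err[frozenset((a, b))]
            return p

        def best_chain(n, edges, ro_err, cx_err):
            """Exhaustively search simple n-qubit paths in the coupling
            graph and return the most reliable one (toy scale only)."""
            adj = {}
            for a, b in edges:
                adj.setdefault(a, set()).add(b)
                adj.setdefault(b, set()).add(a)
            best, best_p = None, -1.0

            def extend(chain):
                nonlocal best, best_p
                if len(chain) == n:
                    p = reliability(chain, ro_err, cx_err)
                    if p > best_p:
                        best, best_p = list(chain), p
                    return
                for nxt in adj[chain[-1]] - set(chain):
                    extend(chain + [nxt])

            for start in adj:
                extend([start])
            return best, best_p

        # Hypothetical 2x3-grid device; qubit 3 and edge (0, 3) are noisy.
        edges = [(0, 1), (1, 2), (3, 4), (4, 5), (0, 3), (1, 4), (2, 5)]
        ro_err = {0: 0.02, 1: 0.03, 2: 0.02, 3: 0.10, 4: 0.03, 5: 0.02}
        cx_err = {frozenset(e): 0.01 for e in edges}
        cx_err[frozenset((0, 3))] = 0.08
        print(best_chain(4, edges, ro_err, cx_err))  # steers around qubit 3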

    Limits on Fundamental Limits to Computation

    An indispensable part of our lives, computing has also become essential to industries and governments. Steady improvements in computer hardware have been supported by the periodic doubling of transistor densities in integrated circuits over the last fifty years. Such Moore scaling now requires increasingly heroic efforts, stimulating research in alternative hardware and stirring controversy. To help evaluate emerging technologies and enrich our understanding of integrated-circuit scaling, we review fundamental limits to computation: in manufacturing, energy, physical space, design and verification effort, and algorithms. To outline what is achievable in principle and in practice, we recall how some limits were circumvented and compare loose and tight limits. We also point out that engineering difficulties encountered by emerging technologies may indicate yet-unknown limits.
    Comment: 15 pages, 4 figures, 1 table

    On the Effect of Quantum Interaction Distance on Quantum Addition Circuits

    We investigate the theoretical limits of the effect of the quantum interaction distance on the speed of exact quantum addition circuits. For this study, we exploit graph embedding for quantum circuit analysis. We study a logical mapping of the qubits and gates of any Ω(log n)-depth quantum adder circuit for two n-qubit registers onto a practical architecture, which limits interaction distance to the nearest neighbors only and supports only one- and two-qubit logical gates. Unfortunately, on the chosen k-dimensional practical architecture, we prove that the depth lower bound of any exact quantum addition circuit is no longer Ω(log n), but Ω(n^(1/k)). This result, the first application of graph embedding to quantum circuits and devices, provides a new tool for compiler development, emphasizes the impact of quantum computer architecture on performance, and acts as a cautionary note when evaluating the time performance of quantum algorithms.
    Comment: Accepted to ACM Journal on Emerging Technologies in Computing Systems
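    The scaling in the bound can be made plausible with a back-of-the-envelope light-cone argument (our reconstruction for orientation only; the paper's actual proof proceeds through graph embedding):

        % In depth T, nearest-neighbor one- and two-qubit gates on a
        % k-dimensional grid can spread a signal to at most O(T^k) sites,
        % while an exact adder must be able to propagate a carry across
        % all n qubit positions, so
        T^k = \Omega(n)
          \quad\Longrightarrow\quad
        T = \Omega\bigl(n^{1/k}\bigr) = \Omega\bigl(\sqrt[k]{n}\bigr).

    For a linear array (k = 1) the depth is Ω(n); for a square grid (k = 2), Ω(√n).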