
    Exploring Practical Methodologies for the Characterization and Control of Small Quantum Systems

    We explore methodologies for characterizing and controlling small quantum systems. We are interested in starting with a description of a quantum system, designing estimators for parameters of the system, developing robust and high-fidelity gates for the system using knowledge of these parameters, and experimentally verifying the performance of these gates. A strong emphasis is placed on using rigorous statistical methods, especially Bayesian ones, to analyze quantum system data. Throughout this thesis, the nitrogen-vacancy (NV) system is used as an experimental testbed. Characterization of system parameters is done using quantum Hamiltonian learning, where we explore the use of adaptive experiment design to speed up learning rates. Gates for the full three-level system are designed with numerical optimal control methods that take into account imperfections of the control hardware. Gate quality is assessed using randomized benchmarking protocols, including standard randomized benchmarking, unitarity benchmarking, and leakage/loss benchmarking.
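    The standard randomized benchmarking protocol mentioned above extracts an average gate error by fitting an exponential decay of survival probability versus sequence length. A minimal sketch of that fit, using purely illustrative numbers (A = 0.5, B = 0.5, p = 0.99 are hypothetical, not values from the thesis):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def rb_decay(m, A, B, p):
        """Standard RB survival model: A * p**m + B."""
        return A * p**m + B

    # Hypothetical survival probabilities at several sequence lengths.
    lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128])
    survival = rb_decay(lengths, 0.5, 0.5, 0.99)

    (A, B, p), _ = curve_fit(rb_decay, lengths, survival, p0=[0.5, 0.5, 0.95])

    # For a d-dimensional system the average error per Clifford is
    # r = (d - 1) / d * (1 - p); here d = 2 (a single qubit).
    r = (2 - 1) / 2 * (1 - p)
    print(f"decay p = {p:.4f}, error per Clifford r = {r:.2e}")
    ```

    Real data would carry shot noise, so one would fit many random sequences per length and propagate the covariance returned by curve_fit.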

    Benchmarking Quantum Processor Performance at Scale

    As quantum processors grow, new performance benchmarks are required to capture the full quality of the devices at scale. While quantum volume is an excellent benchmark, it focuses on the highest-quality subset of the device and so is unable to indicate the average performance over a large number of connected qubits. Furthermore, it is a discrete pass/fail, so it neither reflects continuous improvements in hardware nor provides quantitative direction to large-scale algorithms. For example, there may be value in error-mitigated Hamiltonian simulation at scale with devices unable to pass strict quantum volume tests. Here we discuss a scalable benchmark which measures the fidelity of a connecting set of two-qubit gates over N qubits by measuring gate errors using simultaneous direct randomized benchmarking in disjoint layers. Our layer fidelity can be easily related to algorithmic run time via γ, defined in Ref. [berg2022probabilistic], which can be used to estimate the number of circuits required for error mitigation. The protocol is efficient and obtains all the pair rates in the layered structure. Compared to regular (isolated) RB, this approach is sensitive to crosstalk. As an example, we measure an N = 80 (100) qubit layer fidelity of 0.26 (0.19) on a 127-qubit fixed-coupling "Eagle" processor (ibm_sherbrooke) and of 0.61 (0.26) on a 133-qubit tunable-coupling "Heron" processor (ibm_montecarlo). This can easily be expressed as a layer-size-independent quantity, the error per layered gate (EPLG), which is here 1.7×10^-2 (1.7×10^-2) for ibm_sherbrooke and 6.2×10^-3 (1.2×10^-2) for ibm_montecarlo. (15 pages, 8 figures, including appendices.)
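    The conversion from layer fidelity to EPLG can be sketched directly, assuming (as an interpretation of the abstract, not a statement from it) that the layered chain over N qubits contains N − 1 two-qubit gates:

    ```python
    def eplg(layer_fidelity: float, n_gates: int) -> float:
        """Error per layered gate: 1 - LF**(1/n_gates), where n_gates is the
        number of two-qubit gates in the layered chain (N - 1 for N qubits)."""
        return 1.0 - layer_fidelity ** (1.0 / n_gates)

    # Quoted N = 80 layer fidelities: 0.26 (ibm_sherbrooke), 0.61 (ibm_montecarlo).
    print(f"ibm_sherbrooke EPLG ~ {eplg(0.26, 79):.2e}")
    print(f"ibm_montecarlo EPLG ~ {eplg(0.61, 79):.2e}")
    ```

    The computed values land on the quoted 1.7×10^-2 and 6.2×10^-3, which supports the N − 1 gate-count reading.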

    Randomized compiling for scalable quantum computing on a noisy superconducting quantum processor

    The successful implementation of algorithms on quantum processors relies on the accurate control of quantum bits (qubits) to perform logic gate operations. In this era of noisy intermediate-scale quantum (NISQ) computing, systematic miscalibrations, drift, and crosstalk in the control of qubits can lead to a coherent form of error which has no classical analog. Coherent errors severely limit the performance of quantum algorithms in an unpredictable manner, and mitigating their impact is necessary for realizing reliable quantum computations. Moreover, the average error rates measured by randomized benchmarking and related protocols are not sensitive to the full impact of coherent errors, and therefore do not reliably predict the global performance of quantum algorithms, leaving us unprepared to validate the accuracy of future large-scale quantum computations. Randomized compiling is a protocol designed to overcome these performance limitations by converting coherent errors into stochastic noise, dramatically reducing unpredictable errors in quantum algorithms and enabling accurate predictions of algorithmic performance from error rates measured via cycle benchmarking. In this work, we demonstrate significant performance gains under randomized compiling for the four-qubit quantum Fourier transform algorithm and for random circuits of variable depth on a superconducting quantum processor. Additionally, we accurately predict algorithm performance using experimentally measured error rates. Our results demonstrate that randomized compiling can be utilized to maximally leverage and predict the capabilities of modern-day noisy quantum processors, paving the way forward for scalable quantum computing.
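    The core trick of randomized compiling is to dress each "hard" cycle G with a random Pauli P before it and the compensating Pauli Pc = G P G† after it, so the logical circuit is unchanged while errors are twirled into stochastic Pauli noise. A minimal matrix-level sketch for a single CNOT cycle (this is an illustration of the twirling identity, not the paper's compiler):

    ```python
    import itertools
    import numpy as np

    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]])
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.diag([1.0, -1.0])
    PAULIS = [I, X, Y, Z]

    # "Hard" cycle: a CNOT, which as a Clifford maps Paulis to Paulis.
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)

    def twirled_cycle(rng):
        """Pick a random two-qubit Pauli P and the compensating Pc = G P G†,
        so that Pc @ G @ P == G exactly (Paulis square to the identity)."""
        p1, p2 = rng.choice(4, size=2)
        P = np.kron(PAULIS[p1], PAULIS[p2])
        Pc = CNOT @ P @ CNOT.conj().T
        return Pc, P

    rng = np.random.default_rng(0)
    Pc, P = twirled_cycle(rng)
    assert np.allclose(Pc @ CNOT @ P, CNOT)  # logical circuit unchanged
    ```

    In practice the dressing Paulis are merged into the adjacent "easy" single-qubit cycles, so the twirl adds no extra depth.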

    Limits on the ultra-bright Fast Radio Burst population from the CHIME Pathfinder

    We present results from a new incoherent-beam Fast Radio Burst (FRB) search on the Canadian Hydrogen Intensity Mapping Experiment (CHIME) Pathfinder. Its large instantaneous field of view (FoV) and relative thermal insensitivity allow us to probe the ultra-bright tail of the FRB distribution, and to test a recent claim that this distribution's slope, α ≡ −∂log N / ∂log S, is quite small. A 256-input incoherent beamformer was deployed on the CHIME Pathfinder for this purpose. If the FRB distribution were described by a single power law with α = 0.7, we would expect an FRB detection every few days, making this the fastest survey on sky at present. We collected 1268 hours of data, amounting to one of the largest exposures of any FRB survey, with over 2.4×10^5 deg^2 hr. Having seen no bursts, we have constrained the rate of extremely bright events to < 13 sky^-1 day^-1 above ~220 √(τ/ms) Jy ms for τ between 1.3 and 100 ms, at 400–800 MHz. The non-detection also allows us to rule out α ≲ 0.9 with 95% confidence, after marginalizing over uncertainties in the GBT rate at 700–900 MHz, though we show that for a cosmological population and a large dynamic range in flux density, α is brightness-dependent. Since FRBs now extend to large enough distances that non-Euclidean effects are significant, there is still expected to be a dearth of faint events and a relative excess of bright events. Nevertheless we have constrained the allowed number of ultra-intense FRBs. While this does not have significant implications for deeper, large-FoV surveys like full CHIME and APERTIF, it does have important consequences for other wide-field, small-dish experiments.
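    The quoted rate limit follows from simple Poisson statistics: with zero detections, the 95%-confidence upper limit on the expected count is −ln(0.05) ≈ 3 events over the survey exposure. A rough sketch of that arithmetic (ignoring completeness and beam corrections, which the actual analysis must include):

    ```python
    import math

    FULL_SKY_DEG2 = 41253.0      # square degrees over the full sky
    exposure_deg2_hr = 2.4e5     # quoted CHIME Pathfinder exposure

    # Convert the exposure to sky-days.
    exposure_sky_days = exposure_deg2_hr / FULL_SKY_DEG2 / 24.0

    # Zero detections => 95% Poisson upper limit of -ln(0.05) ~ 3 events.
    rate_limit = -math.log(0.05) / exposure_sky_days
    print(f"rate < {rate_limit:.1f} sky^-1 day^-1")
    ```

    This back-of-the-envelope value sits just under the quoted < 13 sky^-1 day^-1 limit.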

    The Baryon Oscillation Spectroscopic Survey of SDSS-III

    The Baryon Oscillation Spectroscopic Survey (BOSS) is designed to measure the scale of baryon acoustic oscillations (BAO) in the clustering of matter over a larger volume than the combined efforts of all previous spectroscopic surveys of large scale structure. BOSS uses 1.5 million luminous galaxies as faint as i=19.9 over 10,000 square degrees to measure BAO to redshifts z<0.7. Observations of neutral hydrogen in the Lyman alpha forest in more than 150,000 quasar spectra (g<22) will constrain BAO over the redshift range 2.15<z<3.5. Early results from BOSS include the first detection of the large-scale three-dimensional clustering of the Lyman alpha forest and a strong detection from the Data Release 9 data set of the BAO in the clustering of massive galaxies at an effective redshift z = 0.57. We project that BOSS will yield measurements of the angular diameter distance D_A to an accuracy of 1.0% at redshifts z=0.3 and z=0.57 and measurements of H(z) to 1.8% and 1.7% at the same redshifts. Forecasts for Lyman alpha forest constraints predict a measurement of an overall dilation factor that scales the highly degenerate D_A(z) and H^{-1}(z) parameters to an accuracy of 1.9% at z~2.5 when the survey is complete. Here, we provide an overview of the selection of spectroscopic targets, planning of observations, and analysis of data and data quality of BOSS. (49 pages, 16 figures, accepted by A)
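    The quantities being forecast, D_A(z), H(z), and the dilation-factor distance D_V(z) that combines them, can be computed in a few lines for an illustrative flat ΛCDM cosmology (H0 = 70, Ωm = 0.3 are placeholder values, not BOSS's measured parameters):

    ```python
    import numpy as np

    C_KM_S = 299792.458          # speed of light, km/s
    H0, OM = 70.0, 0.3           # illustrative flat-LCDM parameters

    def E(z):
        """Dimensionless Hubble rate H(z)/H0 in flat LCDM."""
        return np.sqrt(OM * (1 + z)**3 + (1 - OM))

    def comoving_distance(z, n=10001):
        """Line-of-sight comoving distance in Mpc (trapezoidal integration)."""
        zs = np.linspace(0.0, z, n)
        f = 1.0 / E(zs)
        return (C_KM_S / H0) * np.sum((f[:-1] + f[1:]) / 2) * (zs[1] - zs[0])

    def D_A(z):
        """Angular diameter distance in Mpc."""
        return comoving_distance(z) / (1 + z)

    def H(z):
        """Hubble rate in km/s/Mpc."""
        return H0 * E(z)

    def D_V(z):
        """Dilation-factor distance combining D_A(z) and H^{-1}(z)."""
        return ((1 + z)**2 * D_A(z)**2 * C_KM_S * z / H(z)) ** (1.0 / 3.0)

    print(f"D_A(0.57) = {D_A(0.57):.0f} Mpc, H(0.57) = {H(0.57):.1f} km/s/Mpc")
    print(f"D_V(0.57) = {D_V(0.57):.0f} Mpc")
    ```

    Because a galaxy survey measures the BAO scale both across and along the line of sight, the isotropic fit constrains exactly this D_V combination when D_A and H^{-1} cannot be separated.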