
    Probing context-dependent errors in quantum processors

    Gates in error-prone quantum information processors are often modeled using sets of one- and two-qubit process matrices, the standard model of quantum errors. However, the results of quantum circuits on real processors often depend on additional external "context" variables. Such contexts may include the state of a spectator qubit, the time of data collection, or the temperature of the control electronics. In this article we demonstrate a suite of simple, widely applicable, and statistically rigorous methods for detecting context dependence in quantum circuit experiments. They can be used on any data that comprise two or more "pools" of measurement results obtained by repeating the same set of quantum circuits in different contexts. These tools may be integrated seamlessly into standard quantum device characterization techniques, like randomized benchmarking or tomography. We experimentally demonstrate these methods by detecting and quantifying crosstalk and drift on the publicly accessible 16-qubit ibmqx3. Comment: 11 pages, 3 figures; code and data available in source file.
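
    The abstract names no specific test statistic; as a hedged illustration of the general idea, the sketch below applies a chi-squared homogeneity test to two pools of outcome counts for a single circuit (the counts, contexts, and significance level are invented for the example, not taken from the paper):

        # Hedged sketch: chi-squared test for context dependence between two
        # pools of measurement results for the same circuit. All numbers are
        # illustrative, not from the paper.
        import numpy as np
        from scipy.stats import chi2_contingency

        counts = np.array([
            [480, 520],  # context A: counts of outcomes '0' and '1'
            [430, 570],  # context B: same circuit, different context
        ])

        chi2, p_value, dof, _ = chi2_contingency(counts)
        alpha = 0.05  # per-test level; correct for multiple circuits in practice
        if p_value < alpha:
            print(f"context dependence detected (p = {p_value:.4f})")
        else:
            print(f"no significant context dependence (p = {p_value:.4f})")

    One such test would run per circuit, with the threshold adjusted (e.g. by a Bonferroni or Benjamini-Hochberg correction) to control false detections across the whole circuit set.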

    Priorities for a New Decade: Making (More) Social Programs Work (Better)

    In this whitepaper, P/PV proposes a comprehensive and bold rethinking of how nonprofits are evaluated. Priorities for a New Decade puts forward an approach that fully engages practitioners as partners in evaluation efforts, reflects a deep understanding of local circumstances, and suggests guidelines for evaluation and scaling that support on-the-ground program quality and performance.

    Benchmarking 2D hydraulic models for urban flood simulations

    This paper describes benchmark testing of six two-dimensional (2D) hydraulic models (DIVAST, DIVAST-TVD, TUFLOW, JFLOW, TRENT and LISFLOOD-FP) in terms of their ability to simulate surface flows in a densely urbanised area. The models are applied to a 1.0 km × 0.4 km urban catchment within the city of Glasgow, Scotland, UK, and are used to simulate a flood event that occurred at this site on 30 July 2002. An identical numerical grid describing the underlying topography is constructed for each model, using airborne laser altimetry (LiDAR) fused with digital map data, and used to run a benchmark simulation. Two numerical experiments are then conducted to test the response of each model to topographic error and to uncertainty in friction parameterisation. While all the models tested produce plausible results, subtle differences between particular groups of codes give considerable insight into both the practice and science of urban hydraulic modelling. In particular, the results show that the terrain data available from modern LiDAR systems are sufficiently accurate and well resolved for simulating urban flows, but such data need to be fused with digital map data on building topology and land use to gain maximum benefit from the information they contain. When such terrain data are available, uncertainty in friction parameters becomes a more dominant factor than topographic error for typical problems. The simulations also show that flows in urban environments are characterised by numerous transitions to supercritical flow and by numerical shocks; however, the effects of these are localised and do not appear to affect overall wave propagation. In contrast, inertia terms are shown to be important in this particular case, although the specific characteristics of the test site may mean that this does not hold more generally.
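
    The abstract does not state the comparison metric used; a common fit statistic in flood-model benchmarking scores the overlap of binary wet/dry extents from two codes, F = wet(A and B) / wet(A or B). A minimal sketch with toy masks (not the Glasgow data):

        # Hedged sketch: the F fit statistic between two predicted flood
        # extents on a shared grid. The masks below are toy stand-ins.
        import numpy as np

        def flood_fit(extent_a: np.ndarray, extent_b: np.ndarray) -> float:
            """Jointly wet cells over cells wet in either prediction."""
            both = np.logical_and(extent_a, extent_b).sum()
            either = np.logical_or(extent_a, extent_b).sum()
            return both / either if either else 1.0

        a = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)  # model A wet/dry
        b = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)  # model B wet/dry
        print(f"F = {flood_fit(a, b):.2f}")  # 0.50 for these toy masks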

    Theory summary. Hard Probes 2012

    I provide a summary of the theoretical talks at Hard Probes 2012, together with some personal thoughts about the present and the future of the field. Comment: 8 pages. Proceedings of the conference Hard Probes 2012, Sardinia, Italy, May 27 to June 1, 2012. Comments welcome.

    Randomized benchmarking with gate-dependent noise

    We analyze randomized benchmarking for arbitrary gate-dependent noise and prove that the exact impact of gate-dependent noise can be described by a single perturbation term that decays exponentially with the sequence length. That is, the exact behavior of randomized benchmarking under general gate-dependent noise converges exponentially to a true exponential decay of exactly the form predicted by previous analyses for gate-independent noise. Moreover, we show that the operational meaning of the decay parameter is essentially unchanged for gate-dependent noise: it quantifies the average fidelity of the noise between ideal gates. We numerically demonstrate that our analysis remains valid for strongly gate-dependent noise models, and we show why alternative analyses do not provide a rigorous justification for the empirical success of randomized benchmarking with gate-dependent noise. Comment: It measures what you expect. Comments welcome. v2: removed an inconsistent assumption from Theorem 3 and clarified the discussion of prior work; results unchanged. v3: further clarified the discussion of prior work; numerics now available at https://github.com/jjwallman/numerics. v4: licence change as required by Quantum.
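
    As a hedged sketch in conventional RB notation (the symbols below are the usual fit parameters, not necessarily the paper's own), the claim is that the average sequence fidelity at sequence length m satisfies

        F(m) = A p^m + B + \varepsilon(m), \qquad |\varepsilon(m)| \le c\,\delta^m \quad \text{for some } \delta < 1,

    so the perturbation term \varepsilon(m) vanishes exponentially, and fitting the standard gate-independent model F(m) = A p^m + B still recovers the decay parameter p, from which the average error rate follows as r = (d - 1)(1 - p)/d with d = 2^n for n qubits.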

    Multi-qubit Randomized Benchmarking Using Few Samples

    Randomized benchmarking (RB) is an efficient and robust method for characterizing gate errors in quantum circuits. Averaging over random sequences of gates yields estimates of gate errors in terms of the average fidelity, and these estimates are isolated from the state preparation and measurement errors that plague other methods, such as channel tomography and direct fidelity estimation. A decisive factor in the feasibility of randomized benchmarking is the number of sampled sequences required to obtain rigorous confidence intervals. Previous bounds were either prohibitively loose or required the number of sampled sequences to scale exponentially with the number of qubits in order to obtain a fixed confidence interval at a fixed error rate. Here we show that, with a small adaptation to the randomized benchmarking procedure, the number of sampled sequences required for a fixed confidence interval is dramatically smaller than could previously be justified. In particular, we show that the number of sampled sequences required is essentially independent of the number of qubits and scales favorably with the average error rate of the system under investigation. We also show that the number of samples required for long sequence lengths can be made substantially smaller than in previous rigorous results (even for single qubits), as long as the noise process under investigation is not unitary. Our results bring rigorous randomized benchmarking on systems with many qubits into the realm of experimental feasibility. Comment: v3: added discussion of the impact of variance heteroskedasticity on the RB fitting procedure; close to published version.
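
    For contrast with the paper's result, the sketch below computes the kind of "prohibitively loose" model-independent baseline the abstract alludes to: a Hoeffding bound on the number of random sequences needed for a fixed confidence interval on the mean survival probability (the tolerance and confidence values are illustrative, not the paper's):

        # Hedged sketch: worst-case Hoeffding bound on the number of random
        # sequences N so that the empirical mean survival probability lies
        # within epsilon of the true mean with probability at least 1 - delta.
        import math

        def hoeffding_sequences(epsilon: float, delta: float) -> int:
            """N with P(|empirical mean - true mean| >= epsilon) <= delta."""
            return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

        print(hoeffding_sequences(epsilon=0.02, delta=0.05))  # 4612 sequences

    The paper's point is that bounds of this pessimistic flavor can be replaced, after a small change to the protocol, by sample counts that are dramatically smaller and essentially independent of the number of qubits.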

    Learning From the Community: Effective Financial Management Practices in the Arts

    Presents financial management practices identified through a survey of directors at leading arts organizations, in order to understand how those practices could be applied across the arts sector. Includes a framework for developing self-assessment tools.

    Approaches to Interpreter Composition

    In this paper, we compose six different Python and Prolog VMs into four pairwise compositions: one using C interpreters; one running on the JVM; one using meta-tracing interpreters; and one using a C interpreter and a meta-tracing interpreter. We show that programs that cross the language barrier frequently execute faster in a meta-tracing composition, and that meta-tracing imposes a significantly lower overhead on composed programs relative to mono-language programs. Comment: 33 pages, 1 figure, 9 tables.

    Simulation benchmarks for low-pressure plasmas: capacitive discharges

    Benchmarking is generally accepted as an important element in demonstrating the correctness of computer simulations. In the modern sense, a benchmark is a computer simulation result that has evidence of correctness, is accompanied by estimates of the relevant errors, and can thus be used as a basis for judging the accuracy and efficiency of other codes. In this paper, we present four benchmark cases related to capacitively coupled discharges. These benchmarks prescribe all relevant physical and numerical parameters. We have simulated the benchmark conditions using five independently developed particle-in-cell codes, and we show that the results of these simulations are statistically indistinguishable within bounds of uncertainty that we define. We therefore claim that these results represent strong benchmarks that can be used as a basis for evaluating the accuracy of other codes. Those other codes could include approaches other than particle-in-cell simulation, in which case benchmarking could examine not just implementation accuracy and efficiency but also the fidelity of different physical models, such as moment or hybrid models. We discuss an example of this kind in an appendix. The methodology we have developed can also readily be extended to a suite of benchmarks covering a wider range of physical and chemical phenomena.
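
    As a hedged illustration of what "statistically indistinguishable within defined bounds of uncertainty" can look like in practice (the profiles, error bars, and acceptance criterion below are invented stand-ins, not the paper's data or its actual criterion), one can check that two codes' averaged density profiles differ by less than their combined standard errors everywhere on the grid:

        # Hedged sketch: agreement test between two codes' time-averaged ion
        # density profiles. All values are illustrative stand-ins.
        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 0.067, 65)             # electrode gap position, m
        n_code1 = 1e15 * np.sin(np.pi * x / x[-1])  # ion density, m^-3
        n_code2 = n_code1 + rng.normal(0.0, 2e12, x.size)
        se1 = np.full(x.size, 3e12)                 # standard errors of means
        se2 = np.full(x.size, 3e12)

        k = 2.0  # roughly 95% if the statistical errors are near-Gaussian
        agree = np.abs(n_code1 - n_code2) <= k * np.sqrt(se1**2 + se2**2)
        print(f"{agree.mean():.1%} of grid cells agree within uncertainty")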