Learning-based quantum error mitigation
If NISQ-era quantum computers are to perform useful tasks, they will need to
employ powerful error mitigation techniques. Quasi-probability methods can
permit perfect error compensation at the cost of additional circuit executions,
provided that the nature of the error model is fully understood and
sufficiently local both spatially and temporally. Unfortunately these
conditions are challenging to satisfy. Here we present a method by which the
proper compensation strategy can instead be learned ab initio. Our training
process uses multiple variants of the primary circuit where all non-Clifford
gates are substituted with gates that are efficient to simulate classically.
The process yields a configuration that is near-optimal versus noise in the
real system with its non-Clifford gate set. Having presented a range of
learning strategies, we demonstrate the power of the technique both with real
quantum hardware (IBM devices) and exactly-emulated imperfect quantum
computers. The systems suffer a range of noise severities and types, including
spatially and temporally correlated variants. In all cases the protocol
successfully adapts to the noise and mitigates it to a high degree.
Comment: 28 pages, 19 figures
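The training trick above, substituting non-Clifford gates with classically simulable ones, can be illustrated with a minimal sketch. The circuit representation and the `cliffordize` helper below are hypothetical stand-ins, not the paper's implementation: single-qubit RZ rotations are snapped to the nearest multiple of π/2, the angles at which RZ is a Clifford gate.

```python
import math

def nearest_clifford_angle(theta):
    """Snap a rotation angle to the nearest multiple of pi/2,
    the angles for which an RZ rotation is a Clifford gate."""
    return (math.pi / 2) * round(theta / (math.pi / 2))

def cliffordize(circuit):
    """Return a training variant of the circuit where every RZ gate
    (the only non-Clifford gate in this toy gate set) is snapped to
    its nearest Clifford angle, so the variant is efficiently simulable."""
    return [(gate, nearest_clifford_angle(angle)) if gate == "RZ" else (gate, angle)
            for gate, angle in circuit]

# Hypothetical circuit: list of (gate name, angle) pairs.
primary = [("H", 0.0), ("RZ", 0.3), ("CX", 0.0), ("RZ", 1.2)]
training = cliffordize(primary)
# RZ(0.3) snaps to RZ(0), RZ(1.2) snaps to RZ(pi/2)
```

Many such training variants, simulated exactly on a classical machine and run on the noisy device, supply the data from which the compensation strategy is learned.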
Low-cost error mitigation by symmetry verification
We investigate the performance of error mitigation via measurement of
conserved symmetries on near-term devices. We present two protocols to measure
conserved symmetries during the bulk of an experiment, and develop a zero-cost
post-processing protocol which is equivalent to a variant of the quantum
subspace expansion. We develop methods for inserting global and local symmetries
into quantum algorithms, and for adjusting natural symmetries of the problem to
boost their mitigation against different error channels. We demonstrate these
techniques on two- and four-qubit simulations of the hydrogen molecule (using a
classical density-matrix simulator), finding up to an order of magnitude
reduction of the error in obtaining the ground state dissociation curve.
Comment: Published version
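The core of symmetry verification can be sketched in a few lines. Assuming a simulation with a conserved particle number (as in the hydrogen-molecule example, which fixes the number of electrons), measurement outcomes whose Hamming weight violates that symmetry must stem from errors and can be discarded in post-processing; the function name and counts below are illustrative only.

```python
def verify_particle_number(counts, n_particles):
    """Post-select measurement outcomes on a conserved symmetry:
    keep only bitstrings whose Hamming weight equals the known
    particle number; all other shots signal an error."""
    return {bitstring: shots for bitstring, shots in counts.items()
            if bitstring.count("1") == n_particles}

# Toy four-qubit counts for a two-particle state:
raw = {"0011": 480, "0101": 460, "0001": 40, "0111": 20}
kept = verify_particle_number(raw, 2)
# -> {"0011": 480, "0101": 460}; the 60 symmetry-violating shots are rejected
```

Expectation values are then estimated from the post-selected counts, at the price of a reduced effective shot number.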
Digital zero noise extrapolation for quantum error mitigation
Zero-noise extrapolation (ZNE) is an increasingly popular technique for
mitigating errors in noisy quantum computations without using additional
quantum resources. We review the fundamentals of ZNE and propose several
improvements to noise scaling and extrapolation, the two key components in the
technique. We introduce unitary folding and parameterized noise scaling. These
are digital noise scaling frameworks, i.e. one can apply them using only
gate-level access common to most quantum instruction sets. We also study
different extrapolation methods, including a new adaptive protocol that uses a
statistical inference framework. Benchmarks of our techniques show error
reductions of 18X to 24X over non-mitigated circuits and demonstrate ZNE
effectiveness at larger qubit numbers than have been tested previously. In
addition to presenting new results, this work is a self-contained introduction
to the practical use of ZNE by quantum programmers.
Comment: 11 pages, 7 figures
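The two components of digital ZNE can be sketched as follows. Unitary folding maps each gate G to G G†G, leaving the ideal circuit unchanged while roughly tripling its noise exposure per fold; expectation values measured at several scale factors are then extrapolated to the zero-noise limit. The circuit representation and the toy data below are assumptions for illustration, not hardware results.

```python
def fold(circuit, num_folds):
    """Unitary folding: replace each gate G by G (G^dag G)^num_folds,
    scaling circuit depth (and, roughly, noise) by 2*num_folds + 1."""
    folded = []
    for gate in circuit:
        folded.append(gate)
        for _ in range(num_folds):
            folded.extend([("dag", gate), gate])
    return folded

def linear_zne(scale_factors, expectations):
    """Least-squares linear fit of expectation value vs. noise scale,
    evaluated at scale factor 0 (the zero-noise estimate)."""
    n = len(scale_factors)
    mx = sum(scale_factors) / n
    my = sum(expectations) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(scale_factors, expectations))
             / sum((x - mx) ** 2 for x in scale_factors))
    return my - slope * mx  # intercept at zero noise

# Toy data: expectation decays linearly as the noise is scaled up.
print(linear_zne([1, 3, 5], [0.90, 0.70, 0.50]))  # -> 1.0
```

Richardson, polynomial, or exponential extrapolants can replace the linear fit; the adaptive protocol mentioned above chooses among such models statistically.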
Testing platform-independent quantum error mitigation on noisy quantum computers
We apply quantum error mitigation techniques to a variety of benchmark
problems and quantum computers to evaluate the performance of quantum error
mitigation in practice. To do so, we define an empirically motivated,
resource-normalized metric of the improvement of error mitigation which we call
the improvement factor, and calculate this metric for each experiment we
perform. The experiments we perform consist of zero-noise extrapolation and
probabilistic error cancellation applied to two benchmark problems run on IBM,
IonQ, and Rigetti quantum computers, as well as noisy quantum computer
simulators. Our results show that error mitigation is on average more
beneficial than no error mitigation, even when normalized by the additional
resources used, but also emphasize that the performance of quantum error
mitigation depends on the underlying computer.
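One plausible resource-normalized form of such a metric can be sketched as below. This is an illustrative construction, not necessarily the exact improvement factor defined in the paper: the error reduction is discounted by the square root of the extra shots consumed, so that mitigation is not rewarded merely for spending more samples.

```python
import math

def improvement_factor(err_raw, err_mitigated, shots_raw, shots_mitigated):
    """Hypothetical resource-normalized improvement factor:
    (error reduction) / sqrt(shot overhead). Values above 1 mean
    mitigation helped even after accounting for its sampling cost."""
    error_ratio = err_raw / err_mitigated
    resource_penalty = math.sqrt(shots_mitigated / shots_raw)
    return error_ratio / resource_penalty

# Mitigation cut the error 4x while using 4x the shots:
print(improvement_factor(0.08, 0.02, 1_000, 4_000))  # -> 2.0
```

The square-root normalization reflects that, for shot-noise-limited estimates, statistical error shrinks as one over the square root of the shot count.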
Volumetric Benchmarking of Error Mitigation with Qermit
The detrimental effect of noise accumulates as quantum computers grow in
size. In the case where devices are too small or noisy to perform error
correction, error mitigation may be used. Error mitigation does not increase
the fidelity of quantum states, but instead aims to reduce the approximation
error in quantities of concern, such as expectation values of observables.
However, it is as yet unclear which circuit types, and devices of which
characteristics, benefit most from the use of error mitigation. Here we develop
a methodology to assess the performance of quantum error mitigation techniques.
Our benchmarks are volumetric in design, and are performed on different
superconducting hardware devices. Extensive classical simulations are also used
for comparison. We use these benchmarks to identify disconnects between the
predicted and practical performance of error mitigation protocols, and to
identify the situations in which their use is beneficial. To perform these
experiments, and for the benefit of the wider community, we introduce Qermit -
an open source python package for quantum error mitigation. Qermit supports a
wide range of error mitigation methods, is easily extensible and has a modular
graph-based software design that facilitates composition of error mitigation
protocols and subroutines.
Comment: 25 pages, comments welcome
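The composable design the abstract describes can be sketched generically: mitigation stages as callables chained into a single protocol. The stage names and numbers below are hypothetical toy stand-ins and do not reflect Qermit's actual API, which is organized around task graphs rather than plain function composition.

```python
def compose(*stages):
    """Chain mitigation stages into one protocol: the output of each
    stage feeds the next, mirroring a modular mitigation pipeline."""
    def pipeline(value):
        for stage in stages:
            value = stage(value)
        return value
    return pipeline

def readout_correction(x):
    return x + 0.05   # toy stand-in for a readout-error-correction stage

def zne_stage(x):
    return x * 1.10   # toy stand-in for a zero-noise-extrapolation stage

protocol = compose(readout_correction, zne_stage)
print(round(protocol(0.80), 4))  # -> 0.935
```

A graph-based design generalizes this linear chain, letting protocols share intermediate results (e.g. calibration circuits) rather than recomputing them per stage.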
Robust design under uncertainty in quantum error mitigation
Error mitigation techniques are crucial to achieving near-term quantum
advantage. Classical post-processing of quantum computation outcomes is a
popular approach for error mitigation, which includes methods such as Zero
Noise Extrapolation, Virtual Distillation, and learning-based error mitigation.
However, these techniques have limitations due to the propagation of
uncertainty resulting from a finite shot number of the quantum measurement. To
overcome this limitation, we propose general and unbiased methods for
quantifying the uncertainty and error of error-mitigated observables by
sampling error mitigation outcomes. These methods are applicable to any
post-processing-based error mitigation approach. In addition, we present a
systematic approach for optimizing the performance and robustness of these
error mitigation methods under uncertainty, building on our proposed
uncertainty quantification methods. To illustrate the effectiveness of our
methods, we apply them to Clifford Data Regression in the ground state of the
XY model simulated using IBM's Toronto noise model.
Comment: 9 pages, 5 figures
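The core idea of sampling error-mitigation outcomes can be sketched with a bootstrap: resample the finite-shot measurement record, re-run the post-processing on each resample, and take the spread of the resulting estimates as the uncertainty. The linear rescaling below is a toy stand-in for a learned correction such as Clifford Data Regression, not the paper's method.

```python
import random
import statistics

def mitigate(samples):
    """Toy post-processing: rescale the sample mean by a fixed learned
    factor (stand-in for a regression-based mitigation step)."""
    return 1.25 * statistics.mean(samples)

def bootstrap_std(samples, n_resamples=500, seed=0):
    """Estimate the uncertainty of the mitigated observable by
    resampling the shot record with replacement and re-applying
    the mitigation to each resample."""
    rng = random.Random(seed)
    estimates = [mitigate([rng.choice(samples) for _ in samples])
                 for _ in range(n_resamples)]
    return statistics.stdev(estimates)

shots = [1, 1, -1, 1, -1, 1, 1, -1, 1, 1] * 20  # toy record of +/-1 outcomes
print(bootstrap_std(shots))
```

Because the resampling wraps the entire post-processing step, the same recipe applies unchanged to any post-processing-based mitigation method.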