Optimal adaptation of surface-code decoders to local noise
Information obtained from noise characterization of a quantum device can be
used in classical decoding algorithms to improve the performance of quantum
error-correcting codes. Focusing on the surface code under local (i.e.
single-qubit) noise, we present a simple method to determine the maximum extent
to which adapting a surface-code decoder to a noise feature can lead to a
performance improvement. Our method is based on a tensor-network decoding
algorithm, which uses the syndrome information as well as a process matrix
description of the noise to compute a near-optimal correction. By selectively
mischaracterizing the noise model input to the decoder and measuring the
resulting loss in fidelity of the logical qubit, we can determine the relative
importance of individual noise parameters for decoding. We apply this method to
several physically relevant uncorrelated noise models with features such as
coherence, spatial inhomogeneity and bias. While noise generally requires many
parameters to describe completely, we find that to achieve near-optimal
decoding it appears necessary to adapt the decoder to only a small number of
critical parameters.
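A minimal sketch of the sensitivity analysis this abstract describes: deliberately mischaracterize one noise parameter in the decoder's input and record the resulting fidelity loss. The parameter names and the toy fidelity model below are hypothetical stand-ins, not the paper's tensor-network decoder:

```python
# Hypothetical noise description; parameter names are illustrative only.
TRUE_NOISE = {"p_x": 0.01, "p_y": 0.002, "p_z": 0.05, "coherence": 0.10}

def decode_logical_fidelity(decoder_params, true_params=TRUE_NOISE):
    """Toy stand-in for a full decoding simulation: syndromes would be
    generated under the true noise while the decoder is given
    `decoder_params`. Here we simply penalize model mismatch so the
    sketch runs on its own."""
    mismatch = sum((decoder_params[k] - true_params[k]) ** 2 for k in true_params)
    return 1.0 - 25.0 * mismatch

def importance(param, ignored_value=0.0):
    """Fidelity lost when the decoder mischaracterizes a single parameter."""
    mis = dict(TRUE_NOISE, **{param: ignored_value})
    return decode_logical_fidelity(TRUE_NOISE) - decode_logical_fidelity(mis)

# Rank noise features by how much ignoring each one degrades decoding.
for loss, name in sorted(((importance(k), k) for k in TRUE_NOISE), reverse=True):
    print(f"{name:>10}: fidelity loss {loss:.4f}")
```

In a real run the toy fidelity function would be replaced by the tensor-network decoding simulation; the ranking loop is the part that mirrors the procedure in the abstract.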
Efficient Simulation of Leakage Errors in Quantum Error Correcting Codes Using Tensor Network Methods
Leakage errors, in which a qubit is excited to a level outside the qubit
subspace, represent a significant obstacle in the development of robust quantum
computers. We present a computationally efficient simulation methodology for
studying leakage errors in quantum error correcting codes (QECCs) using tensor
network methods, specifically Matrix Product States (MPS). Our approach enables
the simulation of various leakage processes, including thermal noise and
coherent errors, without approximations (such as the Pauli twirling
approximation) that can lead to errors in the estimation of the logical error
rate. We apply our method to two QECCs: the one-dimensional (1D) repetition
code and a thin surface code. By leveraging the small amount of
entanglement generated during the error correction process, we are able to
study large systems, up to a few hundred qudits, over many code cycles. We
consider a realistic noise model of leakage relevant to superconducting qubits
to evaluate code performance and a variety of leakage removal strategies. Our
numerical results suggest that appropriate leakage removal is crucial,
especially when the code distance is large.

Comment: 14 pages, 12 figures
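A self-contained sketch of the MPS machinery involved: pure-state evolution of a qutrit chain with a toy coherent-leakage rotation and a leakage-conditioned phase gate, with SVD truncation of each bond. The noise model, leakage-removal strategies, and code circuits of the paper are not reproduced here; everything below is an assumed toy setup:

```python
import numpy as np

d, n, chi_max = 3, 50, 16      # qutrit dimension, chain length, bond cap

# Product-state MPS |1...1> (level |1> is the one that leaks to |2> here).
mps = [np.zeros((1, d, 1), dtype=complex) for _ in range(n)]
for t in mps:
    t[0, 1, 0] = 1.0

def apply_one_site(i, u):
    mps[i] = np.einsum("pq,lqr->lpr", u, mps[i])

def apply_two_site(i, gate):
    """Contract a (d*d, d*d) gate into sites i, i+1, then SVD-truncate."""
    theta = np.einsum("lpr,rqm->lpqm", mps[i], mps[i + 1])
    theta = np.einsum("pqst,lstm->lpqm", gate.reshape(d, d, d, d), theta)
    l, m = theta.shape[0], theta.shape[3]
    u, s, vh = np.linalg.svd(theta.reshape(l * d, d * m), full_matrices=False)
    chi = min(chi_max, int((s > 1e-12).sum()))
    u, s, vh = u[:, :chi], s[:chi], vh[:chi]
    s /= np.linalg.norm(s)
    mps[i] = u.reshape(l, d, chi)
    mps[i + 1] = (np.diag(s) @ vh).reshape(chi, d, m)
    return s                                  # Schmidt spectrum of the bond

# Toy coherent leakage: partial rotation between |1> and the leaked |2>.
th = 0.15
u_leak = np.eye(d, dtype=complex)
u_leak[1, 1] = u_leak[2, 2] = np.cos(th)
u_leak[1, 2], u_leak[2, 1] = -np.sin(th), np.sin(th)

# Toy interaction: a phase kick only when both neighbours are leaked.
cz_22 = np.eye(d * d, dtype=complex)
cz_22[2 * d + 2, 2 * d + 2] = -1.0

for _ in range(5):                            # a few "code cycles"
    for i in range(n):
        apply_one_site(i, u_leak)
    for i in range(n - 1):
        s = apply_two_site(i, cz_22)

p = s**2                                      # spectrum of the last bond
print("final bond dimension:", s.size,
      "| entropy across last bond:", float(-(p * np.log2(p + 1e-300)).sum()))
```

Because the dynamics generate little entanglement, the bond dimension stays far below the cap, which is the property that lets this kind of simulation scale to hundreds of qudits.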
Low-depth random Clifford circuits for quantum coding against Pauli noise using a tensor-network decoder
Recent work [M. J. Gullans et al., Physical Review X, 11(3):031066 (2021)]
has shown that quantum error correcting codes defined by random Clifford
encoding circuits can achieve a non-zero encoding rate in correcting errors
even if the random circuits on $n$ qubits, embedded in one spatial dimension
(1D), have a logarithmic depth $O(\log n)$. However, this was
demonstrated only for a simple erasure noise model. In this work, we discover
that this desired property indeed holds for the conventional Pauli noise model.
Specifically, we numerically demonstrate that the hashing bound, i.e., a rate
known to be achieved with $O(n)$-depth random encoding circuits,
can be attained even when the circuit depth is restricted to
$O(\log n)$ in 1D for depolarizing noise of various strengths. This
analysis is made possible with our development of a tensor-network
maximum-likelihood decoding algorithm that works efficiently for $O(\log n)$-depth
encoding circuits in 1D.
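For context, the hashing bound referenced here has a simple closed form for the depolarizing channel, $R = 1 - H(\{p_i\})$ with Pauli probabilities $(1-p, p/3, p/3, p/3)$. A minimal check of this standard formula (not code from the paper):

```python
import numpy as np

def hashing_bound(p):
    """Hashing-bound rate 1 - H({p_i}) for depolarizing noise of strength p,
    where the Pauli error probabilities are (1 - p, p/3, p/3, p/3)."""
    probs = np.array([1.0 - p, p / 3.0, p / 3.0, p / 3.0])
    probs = probs[probs > 0.0]                      # avoid log(0)
    entropy = -(probs * np.log2(probs)).sum()       # Shannon entropy (bits)
    return 1.0 - entropy

for p in (0.01, 0.05, 0.10, 0.1893):
    print(f"p = {p:.4f}  ->  rate = {hashing_bound(p):+.4f}")
# The rate crosses zero near p ~ 0.1893, the hashing-bound threshold
# for depolarizing noise.
```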
Tailoring surface codes for highly biased noise
The surface code, with a simple modification, exhibits ultra-high error
correction thresholds when the noise is biased towards dephasing. Here, we
identify features of the surface code responsible for these ultra-high
thresholds. We provide strong evidence that the threshold error rate of the
surface code tracks the hashing bound exactly for all biases, and show how to
exploit these features to achieve significant improvement in logical failure
rate. First, we consider the infinite bias limit, meaning pure dephasing. We
prove that the error threshold of the modified surface code for pure dephasing
noise is 50%, i.e., the point at which all qubits are fully dephased, and this threshold
can be achieved by a polynomial time decoding algorithm. We demonstrate that
the sub-threshold behavior of the code depends critically on the precise shape
and boundary conditions of the code. That is, for rectangular surface codes
with standard rough/smooth open boundaries, it is controlled by the parameter
$g = \gcd(j, k)$, where $j$ and $k$ are the dimensions of the surface code lattice. We
demonstrate a significant improvement in logical failure rate with pure
dephasing for co-prime codes that have $g = 1$, and closely-related rotated
codes, which have a modified boundary. The effect is dramatic: the same logical
failure rate achievable with a square surface code and $n$ physical qubits can
be obtained with a co-prime or rotated surface code using only $O(\sqrt{n})$
physical qubits. Finally, we use approximate maximum likelihood decoding to
demonstrate that this improvement persists for a general Pauli noise biased
towards dephasing. In particular, comparing with a square surface code, we
observe a significant improvement in logical failure rate against biased noise
using a rotated surface code with approximately half the number of physical
qubits.

Comment: 18+4 pages, 24 figures; v2 includes additional coauthor (ASD) and new
results on the performance of surface codes in the finite-bias regime,
obtained with beveled surface codes and an improved tensor network decoder;
v3 published version
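The claim that the threshold tracks the hashing bound can be made concrete with a small calculation. For Pauli noise with bias $\eta = p_Z / (p_X + p_Y)$ and $p_X = p_Y$ (a common parameterization in the biased-noise literature, assumed here), the hashing-bound error rate is where $1 - H(\{p_i\})$ crosses zero. A minimal sketch:

```python
import numpy as np

def pauli_probs(p, eta):
    """Biased Pauli noise: total error rate p, bias eta = p_Z / (p_X + p_Y),
    with p_X = p_Y (assumed parameterization)."""
    p_x = p_y = p / (2.0 * (1.0 + eta))
    p_z = p * eta / (1.0 + eta)
    return np.array([1.0 - p, p_x, p_y, p_z])

def hashing_rate(p, eta):
    probs = pauli_probs(p, eta)
    probs = probs[probs > 0.0]
    return 1.0 + (probs * np.log2(probs)).sum()     # 1 - H(probs)

def hashing_threshold(eta, lo=1e-6, hi=0.5 - 1e-6):
    """Bisect for the error rate where the hashing rate crosses zero."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if hashing_rate(mid, eta) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

for eta in (0.5, 1, 10, 100, 1000):   # eta = 0.5 is depolarizing noise
    print(f"bias eta = {eta:7.1f}  hashing threshold ~ {hashing_threshold(eta):.4f}")
```

The computed threshold rises from about 0.19 at the depolarizing point toward 0.5 as the bias grows, which is the hashing-bound curve that the abstract reports the surface-code threshold tracks.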