11 research outputs found
Parallel window decoding enables scalable fault tolerant quantum computation
Quantum Error Correction (QEC) continuously generates a stream of syndrome
data that contains information about the errors in the system. Useful
fault-tolerant quantum computation requires online decoders that are capable of
processing this syndrome data at the rate it is received. Otherwise, a data
backlog is created that grows exponentially with the T-gate depth of the
computation. Superconducting quantum devices can perform QEC rounds in
sub-1 μs time, setting a stringent requirement on the speed of the decoders. All
current decoder proposals have a maximum code size beyond which the processing
of syndromes becomes too slow to keep up with the data acquisition, thereby
making fault-tolerant computation unscalable. Here, we present a
methodology that parallelizes the decoding problem and achieves almost
arbitrary syndrome processing speed. Our parallelization requires some
classical feedback decisions to be delayed, leading to a slow-down of the
logical clock speed. However, the slow-down is now polynomial in code size and
so an exponential backlog is averted. Furthermore, using known
auto-teleportation gadgets the slow-down can be eliminated altogether in
exchange for increased qubit overhead, all polynomially scaling. We demonstrate
our parallelization speed-up using a Python implementation, combining it with
both union-find and minimum weight perfect matching. Furthermore, we show that
the algorithm imposes no noticeable reduction in logical fidelity compared to
the original global decoder. Finally, we discuss how the same methodology can
be implemented in online hardware decoders.
Comment: 12 pages, 7 figures
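The windowing scheme sketched in this abstract can be illustrated with a toy Python example (the window and overlap sizes, the trivial inner decoder, and all function names here are illustrative assumptions, not the authors' implementation): the syndrome stream is cut into overlapping windows, the windows are decoded concurrently, and only each window's core region is committed, leaving the overlap to be re-decoded by the next window.

```python
from concurrent.futures import ThreadPoolExecutor

def decode_window(window):
    # Stand-in for a real inner decoder such as union-find or MWPM:
    # here we simply flag the positions whose syndrome bit is set.
    return [i for i, s in enumerate(window) if s]

def parallel_window_decode(syndromes, core=4, overlap=2):
    """Cut the syndrome stream into overlapping windows, decode them
    concurrently, and commit only each window's core region."""
    starts = list(range(0, len(syndromes), core))
    windows = [syndromes[s:s + core + overlap] for s in starts]
    corrections = set()
    with ThreadPoolExecutor() as pool:
        for start, local in zip(starts, pool.map(decode_window, windows)):
            # Corrections in the overlap are discarded; the next
            # window, which also sees that region, commits them.
            corrections.update(start + i for i in local if i < core)
    return sorted(corrections)

stream = [0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0]
print(parallel_window_decode(stream))  # → [1, 4, 6, 10]
```

Because the toy inner decoder acts position by position, the parallel result here matches a single global pass; the point of the sketch is only the window/commit structure that keeps throughput independent of the stream length.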
Cellular automaton decoders for topological quantum codes with noisy measurements and beyond
We propose an error correction procedure based on a cellular automaton, the sweep rule, which is applicable to a broad range of codes beyond topological quantum codes. For simplicity, however, we focus on the three-dimensional toric code on the rhombic dodecahedral lattice with boundaries and prove that the resulting local decoder has a non-zero error threshold. We also numerically benchmark the performance of the decoder in the setting with measurement errors using various noise models. We find that this error correction procedure is remarkably robust against measurement errors and is also essentially insensitive to the details of the lattice and noise model. Our work constitutes a step towards finding simple and high-performance decoding strategies for a wide range of quantum low-density parity-check codes.
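To give a one-dimensional intuition for how a purely local rule can correct errors (the paper's sweep rule acts on a three-dimensional lattice; this toy version and its names are illustrative only), the sketch below sweeps every syndrome defect one cell to the right per step, records the qubit flip on each edge a defect crosses, and lets coinciding defects annihilate modulo 2:

```python
def sweep_decode(syndrome):
    """Toy 1D analogue of a sweep-rule decoder: defects are swept to
    the right, flipping each edge they cross; a defect leaving the
    right boundary disappears, and coinciding defects cancel mod 2."""
    n = len(syndrome)
    defects = list(syndrome)
    flips = [0] * n  # flips[i] = 1 means "apply a correction at edge i"
    while any(defects):
        moved = [0] * n
        for i, d in enumerate(defects):
            if d:
                flips[i] ^= 1          # record the flip on the crossed edge
                if i + 1 < n:
                    moved[i + 1] ^= 1  # the defect hops one cell right
        defects = moved
    return flips

print(sweep_decode([0, 1, 0, 1]))  # → [0, 1, 1, 0]
```

Note that the returned correction forms a string connecting the two defects, which is exactly the kind of local, syndrome-driven repair a sweep rule performs; each defect hops at most the lattice length, so the loop always terminates.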
Quantum-classical processing and benchmarking at the pulse-level
Towards the practical use of quantum computers in the NISQ era, as well as
the realization of fault-tolerant quantum computers that utilize quantum error
correction codes, pressing needs have emerged for the control hardware and
software platforms. In particular, a clear demand has arisen for platforms that
allow classical processing to be integrated with quantum processing. While
recent works discuss the requirements for such quantum-classical processing
integration formulated at the gate level, pulse-level discussions are
lacking, despite being critically important. Moreover, defining concrete performance
benchmarks for the control system at the pulse-level is key to the necessary
quantum-classical integration. In this work, we categorize the requirements for
quantum-classical processing at the pulse-level, demonstrate these requirements
with a variety of use cases, including recently published works, and propose
well-defined performance benchmarks for quantum control systems. We utilize a
comprehensive pulse-level language that allows embedding universal classical
processing in the quantum program and hence allows for a general formulation of
benchmarks. We expect the metrics defined in this work to form a solid basis to
continue to push the boundaries of quantum computing via control systems,
bridging the gap between low-level and application-level implementations with
relevant metrics.
Comment: 22 pages
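As one concrete (and heavily simplified) illustration of the kind of benchmark proposed here, the Python sketch below times the classical feedback path between acquiring a measurement outcome and updating the next pulse. The `readout` and `update` callables are hypothetical stand-ins for a real control system's acquisition and pulse-update primitives, and the host-side `time.perf_counter` timing only mimics what a controller would measure at sub-microsecond resolution:

```python
import time

def feedback_latency_benchmark(readout, update, shots=1000):
    """Estimate the measure-then-update feedback latency over many
    shots and report the median and worst case."""
    latencies = []
    for _ in range(shots):
        t0 = time.perf_counter()
        bit = readout()                      # acquire one measurement bit
        update(phase=0.5 if bit else 0.0)    # conditional pulse update
        latencies.append(time.perf_counter() - t0)
    latencies.sort()
    return {"median": latencies[len(latencies) // 2],
            "worst": latencies[-1]}

# Trivial host-side stand-ins; a real benchmark would exercise the
# controller's own readout and pulse-parameter pathways.
stats = feedback_latency_benchmark(lambda: 1, lambda **kw: None)
print(stats["median"] <= stats["worst"])  # → True
```

Reporting worst-case as well as median latency matters because a single slow feedback decision can stall a deterministic quantum program.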
Fragile boundaries of tailored surface codes
Biased noise is common in physical qubits, and tailoring a quantum code to
the bias by locally modifying stabilizers or changing boundary conditions has
been shown to greatly increase error correction thresholds. In this work, we
explore the challenges of using a specific tailored code, the XY surface code,
for fault-tolerant quantum computation. We introduce efficient and
fault-tolerant decoders, belief-matching and belief-find, which exploit
correlated hyperedge fault mechanisms present in circuit-level noise. Using
belief-matching, we find that the XY surface code has a higher threshold and
lower overhead than the square CSS surface code for moderately biased noise.
However, the rectangular CSS surface code has a lower qubit overhead than the
XY surface code when below threshold. We identify a contributor to the reduced
performance that we call fragile boundary errors. These are string-like errors
that can occur along spatial or temporal boundaries in planar architectures or
during logical state preparation and measurement. While we make partial
progress towards mitigating these errors by deforming the boundaries of the XY
surface code, our work suggests that fragility could remain a significant
obstacle, even for other tailored codes. We expect that our decoders will have
other uses; belief-find has an almost-linear running time, and we show that it
increases the threshold of the surface code to 0.937(2)% in the presence of
circuit-level depolarising noise, compared to 0.817(5)% for the more
computationally expensive minimum-weight perfect matching decoder.
Comment: 16 pages, 17 figures
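The core idea behind belief-matching, reweighting a matching graph with soft information before running a matching decoder, can be shown in miniature. In this Python sketch the defect positions, the probability numbers, and the brute-force matcher are all illustrative (real implementations use belief propagation to produce the edge probabilities and the blossom algorithm to match):

```python
import math

def min_weight_pairing(defects, weight):
    """Brute-force minimum-weight perfect pairing of defect positions;
    fine for four defects, hopeless beyond toy sizes."""
    if not defects:
        return 0.0, []
    first, rest = defects[0], defects[1:]
    best_cost, best = math.inf, []
    for k, partner in enumerate(rest):
        cost, pairs = min_weight_pairing(rest[:k] + rest[k + 1:], weight)
        cost += weight(first, partner)
        if cost < best_cost:
            best_cost, best = cost, [(first, partner)] + pairs
    return best_cost, best

defects = [1, 3, 6, 8]

# Plain matching: with distance as the edge weight, nearest defects pair.
_, plain = min_weight_pairing(defects, lambda a, b: abs(a - b))
print(plain)  # → [(1, 3), (6, 8)]

# Belief-matching-style reweighting (illustrative numbers): suppose soft
# information flags the (3, 6) fault mechanism as very probable. Using
# -log(probability) edge weights, the chosen matching changes.
prob = {(3, 6): 0.9, (1, 8): 0.5}
bp_weight = lambda a, b: -math.log(prob.get((a, b), 0.05))
_, reweighted = min_weight_pairing(defects, bp_weight)
print(reweighted)  # → [(1, 8), (3, 6)]
```

The same defect positions thus decode differently once correlated fault information is folded into the edge weights, which is what lifts the threshold relative to distance-only matching.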
Efficient classical simulation of noisy random quantum circuits in one dimension
Understanding the computational power of noisy intermediate-scale quantum
(NISQ) devices is of both fundamental and practical importance to quantum
information science. Here, we address the question of whether error-uncorrected
noisy quantum computers can provide computational advantage over classical
computers. Specifically, we study noisy random circuit sampling in one
dimension (or 1D noisy RCS) as a simple model for exploring the effects of
noise on the computational power of a noisy quantum device. In particular, we
simulate the real-time dynamics of 1D noisy random quantum circuits via matrix
product operators (MPOs) and characterize the computational power of the 1D
noisy quantum system by using a metric we call MPO entanglement entropy. The
latter metric is chosen because it determines the cost of classical MPO
simulation. We numerically demonstrate that for the two-qubit gate error rates
we considered, there exists a characteristic system size above which adding
more qubits does not bring about an exponential growth of the cost of classical
MPO simulation of 1D noisy systems. Specifically, we show that above the
characteristic system size, there is an optimal circuit depth, independent of
the system size, where the MPO entanglement entropy is maximized. Most
importantly, the maximum achievable MPO entanglement entropy is bounded by a
constant that depends only on the gate error rate, not on the system size. We
also provide a heuristic analysis to get the scaling of the maximum achievable
MPO entanglement entropy as a function of the gate error rate. The obtained
scaling suggests that although the cost of MPO simulation does not increase
exponentially in the system size above a certain characteristic system size, it
does increase exponentially as the gate error rate decreases, possibly making
classical simulation practically not feasible even with state-of-the-art
supercomputers.
Comment: 27 pages, 9 figures, accepted for publication in Quantum
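The quantity driving this analysis, the MPO entanglement entropy, reduces for a small operator to the entropy of its operator Schmidt spectrum across a bipartition, which can be computed directly (this sketch is our illustration, not the paper's code; the two-qubit examples are standard sanity checks):

```python
import numpy as np

def operator_entanglement_entropy(op, n_left, n_right):
    """Operator entanglement entropy, in bits, of a matrix acting on
    n_left + n_right qubits, taken across that bipartition."""
    dl, dr = 2 ** n_left, 2 ** n_right
    # Regroup (row_L row_R) x (col_L col_R) into (row_L col_L) x (row_R col_R)
    t = op.reshape(dl, dr, dl, dr).transpose(0, 2, 1, 3).reshape(dl * dl, dr * dr)
    s = np.linalg.svd(t, compute_uv=False)
    p = s ** 2 / np.sum(s ** 2)       # normalised operator Schmidt spectrum
    p = p[p > 1e-12]
    return float(np.sum(p * np.log2(1.0 / p)))

identity = np.eye(4)
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
print(operator_entanglement_entropy(identity, 1, 1))        # → 0.0
print(round(operator_entanglement_entropy(cnot, 1, 1), 6))  # → 1.0
```

An MPO simulation is cheap exactly when this entropy stays small across every cut, which is why a noise-induced ceiling on it caps the classical simulation cost.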
Improved decoding of circuit noise and fragile boundaries of tailored surface codes
Realizing the full potential of quantum computation requires quantum error correction (QEC), with most recent breakthrough demonstrations of QEC using the surface code. QEC codes use multiple noisy physical qubits to encode information in fewer logical qubits, enabling the identification of errors through a decoding process. This process increases the logical fidelity (or accuracy), making the computation more reliable. However, most fast (efficient run-time) decoders neglect important noise characteristics, thereby reducing their accuracy. In this work, we introduce decoders that are both fast and accurate, and can be used with a wide class of QEC codes including the surface code. Our decoders, named belief-matching and belief-find, exploit all noise information and thereby unlock higher accuracy demonstrations of QEC. Using the surface code threshold as a performance metric, we observe a threshold at 0.94% error probability for our decoders, outperforming the 0.82% threshold for a standard minimum-weight perfect matching decoder. We also test our belief-matching decoders in a theoretical case study of codes tailored to a biased noise model. We find that the decoders lead to a much higher threshold and lower qubit overhead in the tailored surface code with respect to the standard, square surface code. Surprisingly, in the well-below-threshold regime, the rectangular surface code becomes more resource efficient than the tailored surface code due to a previously unnoticed phenomenon that we call “fragile boundaries.” Our decoders outperform all other fast decoders in terms of threshold and accuracy, enabling better results in current quantum-error-correction experiments and opening up new areas for theoretical case studies.
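Threshold figures like those quoted here are typically estimated by Monte Carlo: sample errors at a fixed physical rate, decode, and check whether growing the code distance suppresses the logical error rate. The sketch below does this for the simplest possible case, a repetition code under i.i.d. bit flips with majority-vote decoding; it is a pedagogical stand-in, not the surface-code circuit-noise simulations used in the paper:

```python
import random

def logical_error_rate(distance, p, shots=20000, seed=0):
    """Monte Carlo estimate of the logical error rate of a
    distance-`distance` repetition code under i.i.d. bit flips,
    decoded by majority vote."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(shots):
        flips = sum(rng.random() < p for _ in range(distance))
        failures += flips > distance // 2  # majority vote fails
    return failures / shots

# Below threshold, increasing the distance suppresses logical errors;
# a threshold estimate locates the physical rate where this reverses.
for d in (3, 7, 11):
    print(d, logical_error_rate(d, 0.10))
```

For this toy code the threshold is 50%; locating the crossing point of such curves under circuit-level surface-code noise is how figures like 0.94% versus 0.82% are obtained.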