156 research outputs found
Neural Decoder for Topological Codes using Pseudo-Inverse of Parity Check Matrix
Recent developments in the field of deep learning have motivated many
researchers to apply these methods to problems in quantum information. Torlai
and Melko first proposed a decoder for surface codes based on neural networks.
Since then, many other researchers have applied neural networks to study a
variety of problems in the context of decoding. An important development in
this regard was due to Varsamopoulos et al. who proposed a two-step decoder
using neural networks. Subsequent work of Maskara et al. used the same concept
for decoding under various noise models. We propose a similar two-step neural
decoder using the pseudo-inverse of the parity-check matrix for topological
color codes. We show that it outperforms state-of-the-art non-neural decoders
for the independent Pauli error noise model on a 2D hexagonal color code. Our
final
decoder is independent of the noise model and achieves a threshold of .
Our result is comparable to the recent work on neural decoders for quantum
error correction by Maskara et al. Our decoder appears to offer significant
advantages in training cost and network complexity at larger code lengths
when compared to that of Maskara et al. Our proposed method can
also be extended to arbitrary dimensions and other stabilizer codes.
Comment: 12 pages, 12 figures, 2 tables, submitted to the 2019 IEEE
International Symposium on Information Theory
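The two-step idea can be illustrated with a small example: over GF(2), a right pseudo-inverse A of the parity-check matrix H maps any syndrome s to a "pure error" e = As with He = s, so a network then only needs to predict the equivalence class of the residual error. A minimal sketch, assuming a toy parity-check matrix rather than an actual color-code matrix:

```python
import numpy as np

def gf2_pseudo_inverse(H):
    """Right pseudo-inverse A of a full-row-rank binary matrix H over GF(2),
    i.e. H @ A = I (mod 2), via Gauss-Jordan elimination on [H | I]."""
    H = H.copy() % 2
    m, n = H.shape
    aug = np.concatenate([H, np.eye(m, dtype=int)], axis=1)
    pivots, row = [], 0
    for col in range(n):
        hits = np.nonzero(aug[row:, col])[0]
        if hits.size == 0:
            continue
        piv = row + hits[0]
        aug[[row, piv]] = aug[[piv, row]]       # bring a pivot into place
        for r in range(m):
            if r != row and aug[r, col]:
                aug[r] = (aug[r] + aug[row]) % 2  # clear the column (mod 2)
        pivots.append(col)
        row += 1
        if row == m:
            break
    # Column j of A is a particular error pattern whose syndrome is the
    # unit vector e_j: place the row-transform rows at the pivot positions.
    A = np.zeros((n, m), dtype=int)
    for r, col in enumerate(pivots):
        A[col] = aug[r, n:]
    return A

# Toy parity-check matrix (NOT a color-code matrix), full row rank over GF(2).
H = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
A = gf2_pseudo_inverse(H)
s = np.array([1, 0, 1])        # an observed syndrome
e = (A @ s) % 2                # pure error: some error consistent with s
assert np.array_equal((H @ e) % 2, s)
```

Because A is fixed once per code, the syndrome-to-pure-error step is a single binary matrix multiplication at decode time.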
Scalable Neural Network Decoders for Higher Dimensional Quantum Codes
Machine learning has the potential to become an important tool in quantum
error correction as it allows the decoder to adapt to the error distribution of
a quantum chip. An additional motivation for using neural networks is the fact
that they can be evaluated by dedicated hardware which is very fast and
consumes little power. Machine learning has been previously applied to decode
the surface code. However, these approaches are not scalable, as the training
has to be redone for every system size, which becomes increasingly difficult.
In this work, the existence of local decoders for higher-dimensional codes
leads us to use a low-depth convolutional neural network to locally assign an
error likelihood to each qubit. For noiseless syndrome measurements, numerical
simulations show that the decoder has a threshold of around when
applied to the 4D toric code. When the syndrome measurements are noisy, the
decoder performs better for larger code sizes when the error probability is
low. We also give a theoretical and numerical analysis showing how a
convolutional neural network differs from the 1-nearest-neighbor algorithm, a
baseline machine learning method.
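The local-assignment idea can be sketched with a single shared convolutional filter applied with periodic (toric) boundaries; the random filter weights below are an illustrative stand-in for a trained network, not the paper's model:

```python
import numpy as np

def conv2d_periodic(x, kernel):
    """2D cross-correlation with periodic (toric) boundary conditions."""
    k = kernel.shape[0]
    pad = k // 2
    xp = np.pad(x, pad, mode="wrap")          # wrap-around padding
    out = np.zeros(x.shape, dtype=float)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + k, j:j + k] * kernel)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "network": one 3x3 filter plus a bias maps the syndrome lattice to a
# per-site error likelihood; a trained decoder would stack a few such layers.
rng = np.random.default_rng(0)
kernel = rng.normal(size=(3, 3))
bias = -1.0

syndrome = np.zeros((8, 8), dtype=int)
syndrome[2, 3] = syndrome[2, 4] = 1           # a pair of violated checks
likelihood = sigmoid(conv2d_periodic(syndrome, kernel) + bias)
assert likelihood.shape == syndrome.shape     # one likelihood per lattice site
```

Because the filter weights are shared across sites, the same network applies unchanged to any lattice size, which is the source of the scalability claimed in the abstract.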
Deep Q-learning decoder for depolarizing noise on the toric code
We present an AI-based decoding agent for quantum error correction of
depolarizing noise on the toric code. The agent is trained using deep
reinforcement learning (DRL), where an artificial neural network encodes the
state-action Q-values of error-correcting X, Y, and Z Pauli operations,
occurring with probabilities p_x, p_y, and p_z, respectively. By learning
to take advantage of the correlations between bit-flip and phase-flip errors,
the decoder outperforms the minimum-weight-perfect-matching (MWPM) algorithm,
achieving a higher success rate and a higher error threshold for depolarizing
noise (), for code distances . The decoder trained on
depolarizing noise also has close to optimal performance for uncorrelated noise
and provides functional but sub-optimal decoding for biased noise (). We argue that the DRL-type decoder provides a promising framework
for future practical error correction of topological codes, striking a balance
between on-the-fly calculations, in the form of forward evaluation of a deep
Q-network, and pre-training and information storage. The complete code, as well
as ready-to-use decoders (pre-trained networks), can be found in the repository
https://github.com/mats-granath/toric-RL-decoder.
Comment: 8+10 pages, 10+8 figures
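The greedy step of such a deep-Q decoder can be sketched as follows; a random linear layer stands in for the trained deep Q-network, and the qubit count and syndrome encoding are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

# Sketch: a Q-function maps the observed syndrome to one Q-value per
# (qubit, Pauli) action, and the agent greedily applies the argmax action.
n_qubits = 8
paulis = ("X", "Y", "Z")
rng = np.random.default_rng(1)
W = rng.normal(size=(n_qubits * len(paulis), n_qubits))  # stand-in Q-network

def q_values(syndrome):
    """Q-value for every candidate single-qubit Pauli correction."""
    return (W @ syndrome).reshape(n_qubits, len(paulis))

def greedy_action(syndrome):
    """Pick the (qubit, Pauli) pair with the highest Q-value."""
    q = q_values(syndrome)
    qubit, op = np.unravel_index(np.argmax(q), q.shape)
    return int(qubit), paulis[op]

syndrome = np.zeros(n_qubits)
syndrome[3] = 1.0                      # one violated check, as a toy input
qubit, op = greedy_action(syndrome)
assert op in paulis and 0 <= qubit < n_qubits
```

In the trained agent, corrections are applied one at a time and the syndrome is re-evaluated after each step until no defects remain.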
Error-rate-agnostic decoding of topological stabilizer codes
Efficient high-performance decoding of topological stabilizer codes has the
potential to crucially improve the balance between logical failure rates and
the number and individual error rates of the constituent qubits. High-threshold
maximum-likelihood decoders require an explicit error model for Pauli errors to
decode a specific syndrome, whereas lower-threshold heuristic approaches such
as minimum weight matching are "error agnostic". Here we consider an
intermediate approach, formulating a decoder that depends on the bias, i.e.,
the relative probability of phase-flip to bit-flip errors, but is agnostic to
error rate. Our decoder is based on counting the number and effective weight of
the most likely error chains in each equivalence class of a given syndrome. We
use Metropolis-based Monte Carlo sampling to explore the space of error chains
and find unique chains, which are efficiently identified using a hash table.
Using the error-rate invariance, the decoder can sample chains effectively at
an error rate that is higher than the physical error rate, without the need
for "thermalization" between chains in different equivalence classes. Applied
to the surface code and the XZZX code, the decoder matches maximum-likelihood
decoders for moderate code sizes or low error rates. We anticipate that,
because of the compressed information content per syndrome, the approach can be
combined to full advantage with machine-learning methods that extrapolate Monte
Carlo-generated data.
Comment: 15 pages, 9 figures; V2 added analysis of low error-rate performance
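The sampling idea can be sketched on a toy model; the ring "stabilizer generators" and syndrome-preserving moves below are illustrative stand-ins, not the surface-code geometry:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 12
p = 0.1                       # sampling error rate (may exceed the physical rate)
beta = np.log((1 - p) / p)    # a chain of weight w has probability ∝ exp(-beta*w)

# Toy syndrome-preserving moves: generator i flips bits i and i+1 on a ring,
# standing in for adding a stabilizer to an error chain.
gens = np.zeros((n, n), dtype=int)
for i in range(n):
    gens[i, i] = gens[i, (i + 1) % n] = 1

chain = np.zeros(n, dtype=int)
chain[4] = 1                  # a starting chain in the equivalence class
seen = set()                  # hash table of unique chains visited

for _ in range(2000):
    proposal = chain ^ gens[rng.integers(n)]
    dw = int(proposal.sum() - chain.sum())
    # Metropolis acceptance on the change in chain weight.
    if dw <= 0 or rng.random() < np.exp(-beta * dw):
        chain = proposal
    seen.add(tuple(chain))

# Class score from the number and effective weight of the unique chains found.
score = sum(np.exp(-beta * sum(c)) for c in seen)
assert len(seen) > 1 and score > 0
```

In the actual decoder one such score is accumulated per equivalence class of the syndrome, and the correction is taken from the class with the largest score; since beta only rescales weights, the ranking is insensitive to the assumed error rate.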
Real-Time Decoding for Fault-Tolerant Quantum Computing: Progress, Challenges and Outlook
Quantum computing is poised to solve practically useful problems which are
computationally intractable for classical supercomputers. However, the current
generation of quantum computers is limited by errors that may only partially
be mitigated by developing higher-quality qubits. Quantum error correction
(QEC) will thus be necessary to ensure fault tolerance. QEC protects the
logical information by cyclically measuring syndrome information about the
errors. An essential part of QEC is the decoder, which uses the syndrome to
compute the likely effect of the errors on the logical degrees of freedom and
provide a tentative correction. The decoder must be accurate, fast enough to
keep pace with the QEC cycle (e.g., on a microsecond timescale for
superconducting qubits) and with hard real-time system integration to support
logical operations. As such, real-time decoding is essential to realize
fault-tolerant quantum computing and to achieve quantum advantage. In this
work, we highlight some of the key challenges facing the implementation of
real-time decoders while providing a succinct summary of the progress to-date.
Furthermore, we lay out our perspective for the future development and provide
a possible roadmap for the field of real-time decoding in the next few years.
As the quantum hardware is anticipated to scale up, this perspective article
will provide guidance for researchers, focusing on the most pressing issues
in real-time decoding and facilitating the development of solutions across
quantum and computer science.
Scalable Neural Decoder for Topological Surface Codes
With the advent of noisy intermediate-scale quantum (NISQ) devices, practical
quantum computing has seemingly come into reach. However, to go beyond
proof-of-principle calculations, the current processing architectures will need
to scale up to larger quantum circuits which in turn will require fast and
scalable algorithms for quantum error correction. Here we present a neural
network based decoder that, for a family of stabilizer codes subject to
depolarizing noise, is scalable to tens of thousands of qubits (in contrast to
other recent machine-learning-inspired decoders) and exhibits faster decoding
times than the state-of-the-art union-find decoder for a wide range of error
rates (down to 1%). The key innovation is to autodecode error syndromes on
small scales by shifting a preprocessing window over the underlying code, akin
to a convolutional neural network in pattern recognition approaches. We show
that such a preprocessing step allows one to effectively reduce the error rate
by up to two orders of magnitude in practical applications and, by detecting
correlation effects, shifts the actual error threshold to , some ten percent higher than the threshold of conventional error
correction algorithms such as union-find or minimum-weight perfect matching. An
in-situ implementation of such machine learning-assisted quantum error
correction will be a decisive step to push the entanglement frontier beyond the
NISQ horizon.
Comment: 9 pages, 8 figures, 5 tables
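The window-shifting preprocessing can be caricatured with a hand-written local rule standing in for the trained small-scale network; the adjacent-pair rule below is purely illustrative:

```python
import numpy as np

def preprocess(syndrome):
    """Shift a small window across the syndrome lattice and cancel
    horizontally adjacent defect pairs locally, thinning the syndrome
    before a global decoder (e.g. union-find) runs on what is left."""
    s = syndrome.copy()
    H, W = s.shape
    for i in range(H):
        for j in range(W - 1):
            if s[i, j] and s[i, j + 1]:      # adjacent defect pair in window
                s[i, j] = s[i, j + 1] = 0    # locally matched and removed
    return s

syndrome = np.zeros((6, 6), dtype=int)
syndrome[1, 1] = syndrome[1, 2] = 1          # local pair: handled in the window
syndrome[4, 0] = 1                           # isolated defect: left for the
                                             # global decoder
reduced = preprocess(syndrome)
assert reduced.sum() == 1 and reduced[4, 0] == 1
```

Because most defects at moderate error rates come in such short-range pairs, even a crude local pass can drastically reduce the work left for the global stage, which is the effect the abstract quantifies.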
A NEAT Quantum Error Decoder
We investigate the use of the evolutionary NEAT algorithm for the
optimization of a policy network that performs quantum error decoding on the
toric code, with bitflip and depolarizing noise, one qubit at a time. We find
that these NEAT-optimized network decoders have similar performance to
previously reported machine-learning based decoders, but use roughly three to
four orders of magnitude fewer parameters to do so.
Comment: 10 pages, 7 figures
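A stripped-down evolutionary loop conveys the flavor; this sketch uses weight mutation and truncation selection only (actual NEAT also evolves the network topology and uses speciation), and the fitness function is an illustrative stand-in:

```python
import numpy as np

# Evolve a tiny linear "policy" toward a fixed target response: generations
# of mutate-and-select, no gradients.  All sizes and targets are toy values.
rng = np.random.default_rng(3)
syndrome = np.array([1.0, 0.0, 1.0, 0.0])
target = np.array([0.0, 1.0])               # stand-in "correct action" scores

def fitness(w):
    """Negative squared error of the policy's response to the syndrome."""
    return -np.sum((w @ syndrome - target) ** 2)

population = [rng.normal(size=(2, 4)) for _ in range(20)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                 # truncation selection
    population = [p + 0.1 * rng.normal(size=p.shape)   # weight mutation
                  for p in parents for _ in range(4)]

best = max(population, key=fitness)
assert fitness(best) > -1.0                  # near-perfect fit after evolution
```

The parameter-count advantage reported in the abstract comes from letting evolution grow only the structure the task needs, rather than training a fixed large architecture.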