Neural Decoder for Topological Codes using Pseudo-Inverse of Parity Check Matrix
Recent developments in the field of deep learning have motivated many
researchers to apply these methods to problems in quantum information. Torlai
and Melko first proposed a decoder for surface codes based on neural networks.
Since then, many other researchers have applied neural networks to study a
variety of problems in the context of decoding. An important development in
this regard was due to Varsamopoulos et al., who proposed a two-step decoder
using neural networks. Subsequent work by Maskara et al. used the same concept
for decoding under various noise models. We propose a similar two-step neural
decoder using the pseudo-inverse of the parity-check matrix for topological
color codes. We show that it outperforms state-of-the-art non-neural decoders
for the independent Pauli error noise model on a 2D hexagonal color code. Our
final decoder is independent of the noise model and achieves a threshold
comparable to the recent work on neural decoders for quantum error correction
by Maskara et al. It appears that our decoder has significant advantages in
training cost and network complexity at larger code lengths when compared to
that of Maskara et al. Our proposed method can also be extended to arbitrary
dimensions and other stabilizer codes.
Comment: 12 pages, 12 figures, 2 tables, submitted to the 2019 IEEE International Symposium on Information Theory
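The algebraic step of such a two-step decoder can be illustrated in isolation: a right inverse of the parity-check matrix over GF(2) maps any syndrome to some error consistent with it, leaving only the logical-class decision to the neural network. The following NumPy sketch shows this step on a toy parity-check matrix; the helper `gf2_right_inverse` and the matrix are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gf2_right_inverse(H):
    """Return R with H @ R = I (mod 2), assuming H has full row rank."""
    H = np.asarray(H, dtype=int) % 2
    m, n = H.shape
    A = np.concatenate([H, np.eye(m, dtype=int)], axis=1)  # augment with I
    pivots, r = [], 0
    for c in range(n):                  # Gauss-Jordan elimination over GF(2)
        rows = [i for i in range(r, m) if A[i, c]]
        if not rows:
            continue
        A[[r, rows[0]]] = A[[rows[0], r]]
        for i in range(m):
            if i != r and A[i, c]:
                A[i] ^= A[r]            # XOR = addition mod 2
        pivots.append(c)
        r += 1
        if r == m:
            break
    # Row j of the right block solves H x = e_j with x supported on pivot cols.
    R = np.zeros((n, m), dtype=int)
    for j, c in enumerate(pivots):
        R[c] = A[j, n:]
    return R

# Step 1 of the two-step scheme: recover *an* error consistent with the
# syndrome; step 2 (a neural classifier, omitted here) picks the logical class.
H = np.array([[1, 1, 0],
              [0, 1, 1]])              # toy parity-check matrix
R = gf2_right_inverse(H)
syndrome = np.array([1, 1])
pure_error = (R @ syndrome) % 2        # satisfies H @ pure_error = syndrome
```

Any error differing from `pure_error` by a stabilizer or logical operator yields the same syndrome, which is exactly the ambiguity the network is trained to resolve.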
Modern Approaches to Topological Quantum Error Correction
The construction of a large-scale fault-tolerant quantum computer is an outstanding scientific and technological goal. It holds the promise to allow us to solve a variety of complex problems such as factoring large numbers, quick database search, and the quantum simulation of many-body quantum systems in fields as diverse as condensed matter, quantum chemistry, and even high-energy physics. Sophisticated theoretical protocols have been developed for reliable quantum information processing under imperfect conditions, when errors affect and corrupt the fragile quantum states during storage and computations. Arguably, the most realistic and promising approach towards practical fault-tolerant quantum computation is topological quantum error-correcting codes, where quantum information is stored in interacting, topologically ordered 2D or 3D many-body quantum systems. This approach offers the highest known error thresholds, which are already today within reach of the experimental accuracy in state-of-the-art setups. A combination of theoretical and experimental research is needed to store, protect, and process fragile quantum information in logical qubits effectively so that they can outperform their constituent physical qubits. Whereas small-scale quantum error correction codes have been implemented, one of the main theoretical challenges remains to develop new, and improve existing, efficient strategies (so-called decoders) to derive (near-)optimal error correction operations in the presence of experimentally accessible measurement information and realistic noise sources. One main focus of this project is the development and numerical implementation of scalable, efficient decoders to operate topological color codes. Additionally, we study the feasibility of implementing quantum error-correcting codes fault-tolerantly in near-term ion traps.
To this end, we use realistic modeling of the different noise sources, computer simulations, and modern quantum information approaches to quantum circuitry and noise suppression techniques.
qecGPT: decoding Quantum Error-correcting Codes with Generative Pre-trained Transformers
We propose a general framework for decoding quantum error-correcting codes
with generative modeling. The model utilizes autoregressive neural networks,
specifically Transformers, to learn the joint probability of logical operators
and syndromes. The training is unsupervised, requiring no labeled training
data, and is thus referred to as pre-training. After the pre-training, the
model can efficiently compute the likelihood of logical operators for any
given syndrome, enabling maximum likelihood decoding. It can directly generate
the most-likely logical operators with a computational complexity that scales
far more favorably in the number of logical qubits than conventional maximum
likelihood decoding algorithms. Based on the pre-trained model, we further
propose a refinement that estimates the likelihood of logical operators for a
given syndrome more accurately by directly sampling the stabilizer
operators. We perform numerical experiments on stabilizer codes with small code
distances, using both depolarizing error models and error models with
correlated noise. The results show that our approach provides significantly
better decoding accuracy than the minimum weight perfect matching and
belief-propagation-based algorithms. Our framework is general and can be
applied to any error model and quantum codes with different topologies such as
surface codes and quantum LDPC codes. Furthermore, it leverages the
parallelization capabilities of GPUs, enabling simultaneous decoding of a large
number of syndromes. Our approach sheds light on the efficient and accurate
decoding of quantum error-correcting codes using generative artificial
intelligence and modern computational power.
Comment: Comments are welcome
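The autoregressive generation described above can be sketched generically: the trained model provides a conditional probability for each next logical bit given the bits chosen so far and the syndrome, and bits are emitted one at a time. The `cond_prob` interface below is a hypothetical stand-in for the Transformer's output head, and the greedy per-step choice is a simplification (it does not guarantee the exact joint argmax).

```python
def autoregressive_decode(cond_prob, syndrome, num_logical_bits):
    # cond_prob(prefix, syndrome) -> probability that the next bit is 1.
    bits = []
    for _ in range(num_logical_bits):
        p1 = cond_prob(tuple(bits), syndrome)
        bits.append(1 if p1 > 0.5 else 0)   # greedy choice per step
    return bits

# Toy conditional: prefer bit 1 whenever the syndrome has odd parity.
def toy_cond(prefix, syndrome):
    return 0.9 if sum(syndrome) % 2 == 1 else 0.1

print(autoregressive_decode(toy_cond, (1, 0), 2))  # -> [1, 1]
```

Because each syndrome is decoded independently, many such loops can run in parallel on a GPU, which is the batching advantage the abstract points to.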
Spherical and Hyperbolic Toric Topology-Based Codes On Graph Embedding for Ising MRF Models: Classical and Quantum Topology Machine Learning
The paper introduces the application of information geometry to describe the
ground states of Ising models by utilizing parity-check matrices of cyclic and
quasi-cyclic codes on toric and spherical topologies. The approach establishes
a connection between machine learning and error-correcting coding. This
proposed approach has implications for the development of new embedding methods
based on trapping sets. Statistical physics and number geometry are applied to
optimize error-correcting codes, leading to these embedding and sparse
factorization methods. The paper establishes a direct connection between DNN
architecture and error-correcting coding by demonstrating how state-of-the-art
architectures (ChordMixer, Mega, Mega-chunk, CDIL, ...) from the long-range
arena can be equivalent to block and convolutional LDPC codes (Cage-graph,
Repeat Accumulate). QC codes correspond to certain types of chemical elements,
with the carbon element being represented by the mixed automorphism
Shu-Lin-Fossorier QC-LDPC code. The connections between Belief Propagation and
the Permanent, Bethe-Permanent, Nishimori Temperature, and Bethe-Hessian Matrix
are elaborated upon in detail. The Quantum Approximate Optimization Algorithm
(QAOA) used in the Sherrington-Kirkpatrick Ising model can be seen as analogous
to the back-propagation loss function landscape in training DNNs. This
similarity creates a comparable problem with TS pseudo-codeword, resembling the
belief propagation method. Additionally, the layer depth in QAOA correlates to
the number of decoding belief propagation iterations in the Wiberg decoding
tree. Overall, this work has the potential to advance multiple fields, from
Information Theory, DNN architecture design (sparse and structured prior graph
topology), efficient hardware design for Quantum and Classical DPU/TPU (graph,
quantize, and shift-register architectures) to Materials Science and beyond.
Comment: 71 pages, 42 figures, 1 table, 1 appendix. arXiv admin note: text overlap with arXiv:2109.08184 by other authors
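For reference, the Nishimori temperature mentioned above ties the inverse temperature of a random-bond Ising model to its bond-flip probability p via the standard Nishimori condition:

```latex
e^{-2\beta_N} = \frac{p}{1-p}
\quad\Longleftrightarrow\quad
\beta_N = \tfrac{1}{2}\ln\frac{1-p}{p}
```

On this line the quenched disorder and the thermal fluctuations are matched, which is why it appears alongside the Bethe-permanent and Bethe-Hessian connections drawn in the paper.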
Deep Q-learning decoder for depolarizing noise on the toric code
We present an AI-based decoding agent for quantum error correction of
depolarizing noise on the toric code. The agent is trained using deep
reinforcement learning (DRL), where an artificial neural network encodes the
state-action Q-values of error-correcting X, Y, and Z Pauli operations,
occurring with probabilities p_x, p_y, and p_z, respectively. By learning
to take advantage of the correlations between bit-flip and phase-flip errors,
the decoder outperforms the minimum-weight-perfect-matching (MWPM) algorithm,
achieving a higher success rate and a higher error threshold for depolarizing
noise across the code distances considered. The decoder trained on
depolarizing noise also has close to optimal performance for uncorrelated noise
and provides functional but sub-optimal decoding for biased noise. We argue that the DRL-type decoder provides a promising framework
for future practical error correction of topological codes, striking a balance
between on-the-fly calculations, in the form of forward evaluation of a deep
Q-network, and pre-training and information storage. The complete code, as well
as ready-to-use decoders (pre-trained networks), can be found in the repository
https://github.com/mats-granath/toric-RL-decoder.
Comment: 8+10 pages, 10+8 figures
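The Q-learning machinery behind the agent can be sketched with the standard one-step update rule, Q(s, a) ← Q(s, a) + α(r + γ·max_a′ Q(s′, a′) − Q(s, a)), here in tabular form as a stand-in for the deep Q-network; the state/action encoding below is illustrative, not the paper's.

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
    # One-step Q-learning update (tabular stand-in for the deep Q-network).
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

Q = {}
actions = ["X", "Y", "Z"]          # candidate Pauli corrections on a qubit
q_update(Q, s="syndrome0", a="X", r=1.0, s_next="clean", actions=actions)
print(Q[("syndrome0", "X")])       # 0.1 after a single update from zero
```

In the paper's setting the table is replaced by a network evaluated on the syndrome, which is the "forward evaluation of a deep Q-network" trade-off the abstract describes.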
Check-Agnosia based Post-Processor for Message-Passing Decoding of Quantum LDPC Codes
The inherent degeneracy of quantum low-density parity-check codes poses a
challenge to their decoding, as it significantly degrades the error-correction
performance of classical message-passing decoders. To improve their
performance, a post-processing algorithm is usually employed. To narrow the gap
between algorithmic solutions and hardware limitations, we introduce a new
post-processing algorithm with a hardware-friendly orientation, providing error
correction performance competitive to the state-of-the-art techniques. The
proposed post-processing, referred to as check-agnosia, is inspired by
stabilizer-inactivation, while considerably reducing the required hardware
resources, and providing enough flexibility to allow different message-passing
schedules and hardware architectures. We carry out a detailed analysis for a
set of Pareto architectures with different tradeoffs between latency and power
consumption, derived from the results of implemented designs on an FPGA board.
We show that latency values close to one microsecond can be obtained on the
FPGA board, and provide evidence that much lower latency values can be obtained
for ASIC implementations. In the process, we also demonstrate the practical
implications of the recently introduced t-covering layers and random-order
layered scheduling.
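As background for the message-passing decoders discussed here, the classical min-sum check-node rule sends each neighbor the sign product and minimum magnitude of the other incoming LLR messages. The sketch below is a generic textbook form, not the paper's hardware implementation.

```python
def min_sum_check_update(in_msgs):
    # For each edge, combine all *other* incoming LLR messages:
    # sign = product of their signs, magnitude = minimum of their magnitudes.
    out = []
    for i in range(len(in_msgs)):
        others = in_msgs[:i] + in_msgs[i + 1:]
        sign = 1
        for m in others:
            if m < 0:
                sign = -sign
        out.append(sign * min(abs(m) for m in others))
    return out

print(min_sum_check_update([2.0, -3.0, 1.5]))  # [-1.5, 1.5, -2.0]
```

Post-processors such as the proposed check-agnosia step in only when iterations of this kind fail to converge on a degenerate quantum code.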
Generative Street Addresses from Satellite Imagery
We describe our automatic generative algorithm to create street addresses from satellite images by learning and labeling roads, regions, and address cells. Currently, 75% of the world's roads lack adequate street addressing systems. Recent geocoding initiatives tend to convert pure latitude and longitude information into a memorable form for unknown areas. However, settlements are identified by streets, and such addressing schemes are not coherent with the road topology. Instead, we propose a generative address design that maps the globe in accordance with streets. Our algorithm starts with extracting roads from satellite imagery by utilizing deep learning. Then, it uniquely labels the regions, roads, and structures using some graph- and proximity-based algorithms. We also extend our addressing scheme to (i) cover inaccessible areas following similar design principles; (ii) be inclusive and flexible for changes on the ground; and (iii) lead as a pioneer for a unified street-based global geodatabase. We present our results on an example of a developed city and multiple undeveloped cities. We also compare productivity on the basis of current ad hoc and new complete addresses. We conclude by contrasting our generative addresses to current industrial and open solutions.
Keywords: road extraction; remote sensing; satellite imagery; machine learning; supervised learning; generative schemes; automatic geocoding
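The proximity-based labeling idea can be illustrated with a toy sketch: assign an address number from the distance travelled along a road polyline to the vertex nearest the addressed point. The 5 m cell size and the `address_number` helper are assumptions for illustration only, not the paper's parameters.

```python
import math

def address_number(road, point, cell_m=5.0):
    # road: polyline vertices (x, y) in meters. The address is the index of
    # the cell_m-sized cell along the road nearest to `point`.
    cum = 0.0
    best_d, best_cum = float("inf"), 0.0
    prev = road[0]
    for v in road:
        cum += math.dist(prev, v)      # distance travelled along the polyline
        d = math.dist(v, point)
        if d < best_d:
            best_d, best_cum = d, cum
        prev = v
    return int(best_cum // cell_m) + 1

print(address_number([(0, 0), (10, 0), (20, 0)], (10.2, 1.0)))  # 3
```

Numbering by along-road distance keeps addresses monotone along a street, which is the coherence with road topology that the abstract argues pure latitude/longitude encodings lack.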