Noise-aided gradient descent bit-flipping decoders approaching maximum likelihood decoding
In the recent literature, the study of iterative LDPC decoders implemented on faulty hardware has led to the counter-intuitive conclusion that noisy decoders can perform better than their noiseless versions. This peculiar behavior has been observed in the finite-codeword-length regime, where the noise perturbing the decoder dynamics helps it escape the attraction of fixed points such as trapping sets. In this paper, we study two recently introduced LDPC decoders derived from noisy versions of the gradient descent bit-flipping (GDBF) decoder. Although GDBF is known to be a simple decoder with limited error-correction capability compared to more powerful soft-decision decoders, it has been shown that introducing a random perturbation into the decoder can greatly improve performance, approaching and even surpassing belief-propagation or min-sum based decoders. For both decoders, we evaluate the probability of escaping from a trapping set and relate this probability to the parameters of the injected noise distribution, using a Markovian model of the decoder transitions in the state space of errors localized on isolated trapping sets. In the second part of the paper, we present a modified scheduling of our algorithms for the binary symmetric channel, which allows us to approach maximum likelihood decoding (MLD) at the cost of a very large number of iterations.
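The noise-injection idea in this abstract can be illustrated with a minimal sketch of a noisy gradient descent bit-flipping decoder in the bipolar domain. The function name, the Gaussian noise model, and all parameter values below are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def ngdbf_decode(H, y, theta=-0.6, sigma=0.8, max_iters=100, rng=None):
    """Illustrative sketch of noisy gradient descent bit-flipping.
    H: m x n parity-check matrix (0/1 entries); y: real channel outputs.
    theta, sigma are assumed (not tuned) threshold and noise parameters."""
    rng = rng or np.random.default_rng(0)
    x = np.sign(y)                       # hard-decision start, bipolar +-1
    x[x == 0] = 1
    for _ in range(max_iters):
        # bipolar syndrome: +1 if a check is satisfied, -1 otherwise
        s = 1 - 2 * (H @ ((1 - x) // 2).astype(int) % 2)
        if np.all(s == 1):
            break                        # valid codeword reached
        # inversion (local energy) function of each bit
        delta = x * y + H.T @ s
        # i.i.d. Gaussian perturbation helps escape trapping sets
        noisy = delta + rng.normal(0.0, sigma, size=x.shape)
        x = np.where(noisy < theta, -x, x)   # flip low-energy bits
    return ((1 - x) // 2).astype(int)    # back to {0,1}
```

With a clean channel word the decoder exits immediately; with errors, repeated noisy iterations give it a chance to leave a trapping set, which is the escape probability the paper models with a Markov chain.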
Scalable Neural Network Decoders for Higher Dimensional Quantum Codes
Machine learning has the potential to become an important tool in quantum error correction, as it allows the decoder to adapt to the error distribution of a quantum chip. An additional motivation for using neural networks is that they can be evaluated by dedicated hardware which is very fast and consumes little power. Machine learning has previously been applied to decode the surface code. However, these approaches are not scalable, as the training has to be redone for every system size, which becomes increasingly difficult. In this work, the existence of local decoders for higher-dimensional codes leads us to use a low-depth convolutional neural network to locally assign a likelihood of error to each qubit. For noiseless syndrome measurements, numerical simulations show that the decoder has a threshold of around when applied to the 4D toric code. When the syndrome measurements are noisy, the decoder performs better for larger code sizes when the error probability is low. We also give theoretical and numerical analysis to show how a convolutional neural network differs from the 1-nearest-neighbor algorithm, which is a baseline machine learning method.
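The scalability argument (the same local kernel works at any lattice size) can be sketched with a single convolutional layer mapping a syndrome lattice to per-site error likelihoods. The kernel here is a placeholder, not a trained model, and the function names are assumptions for illustration:

```python
import numpy as np

def conv2d_same(x, kernel):
    """Naive 'same'-padded single-channel 2D cross-correlation."""
    kh, kw = kernel.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * kernel)
    return out

def local_error_likelihoods(syndrome, kernel, bias=0.0):
    """Hypothetical one-layer convolutional 'decoder': maps a 2D syndrome
    lattice to a per-site error probability. Because one kernel slides
    over the whole lattice, the model size is independent of the code
    distance -- the scalability point made in the abstract."""
    logits = conv2d_same(syndrome.astype(float), kernel) + bias
    return 1.0 / (1.0 + np.exp(-logits))   # sigmoid -> likelihood per site
```

The real decoder stacks several such learned layers; the point of the sketch is that sites near a syndrome defect get elevated probabilities while distant sites stay at the prior.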
Decoding LDPC Codes with Probabilistic Local Maximum Likelihood Bit Flipping
Communication channels are inherently noisy, making error correction coding a major topic of research for modern communication systems. Error correction coding adds redundancy to information transmitted over a communication channel to enable detection and recovery of erroneous information. Low-density parity-check (LDPC) codes are a class of error-correcting codes that have been effective in maintaining the reliability of information transmitted over communication channels. Multiple algorithms have been developed that exploit the LDPC coding scheme to improve recovery of erroneous information. This work develops a matrix construction that stores the information error probability statistics for a communication channel. Combined with the error-correcting capability of LDPC codes, this enabled the development of the Probabilistic Local Maximum Likelihood Bit Flipping (PLMLBF) algorithm, which is the focus of this research work.
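The abstract does not give the PLMLBF update rule, but the general idea of weighting a bit-flipping decision by stored per-bit error probabilities can be sketched as follows; the function name, the scalar `p_err` weighting, and the greedy single-bit flip are all assumptions standing in for the paper's probability matrix:

```python
import numpy as np

def weighted_bit_flip(H, r, p_err, max_iters=50):
    """Hedged sketch of probability-weighted bit flipping.
    H: m x n parity-check matrix; r: received hard bits (0/1);
    p_err: stored per-bit error probabilities (assumed stand-in for
    the PLMLBF probability matrix)."""
    x = r.copy()
    for _ in range(max_iters):
        syn = H @ x % 2
        if not syn.any():
            break                         # all checks satisfied
        # unsatisfied-check count per bit, scaled by its prior error prob
        score = (H.T @ syn) * p_err
        x[np.argmax(score)] ^= 1          # flip the most suspicious bit
    return x
```

Relative to plain Gallager bit flipping, the channel statistics bias the flip toward bits that are a priori more likely to be wrong.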
Error-Floors of the 802.3an LDPC Code for Noise Assisted Decoding
In digital communication, information is sent as bits, which are corrupted by the noise present in the wired or wireless medium known as the channel. Low-Density Parity-Check (LDPC) codes are a family of error correction codes used in communication systems to detect and correct erroneous data at the receiver. Data is encoded with error correction coding at the transmitter and decoded at the receiver. The Noisy Gradient Descent Bit-Flip (NGDBF) decoding algorithm is a recent algorithm with excellent decoding performance and relatively low implementation requirements. This dissertation aims to characterize the performance of the NGDBF algorithm. A simple improvement over NGDBF, called Re-decoded NGDBF (R-NGDBF), is proposed to enhance the performance of the NGDBF decoding algorithm. A general method to estimate the decoding parameters of NGDBF is presented. The estimated parameters are then verified in a hardware implementation of the decoder to validate the accuracy of the estimation technique.
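Because NGDBF is randomized, a failed decoding attempt can simply be re-run with fresh noise; a generic wrapper capturing that re-decoding idea is sketched below. The interface `decode_once(y, seed) -> (codeword, success_flag)` and the attempt count are assumptions, not the dissertation's exact R-NGDBF procedure:

```python
def redecode(decode_once, y, attempts=5):
    """Hedged sketch of the re-decoding idea behind R-NGDBF: retry a
    randomized decoder with a fresh noise seed until it converges.
    decode_once(y, seed) -> (codeword, success_flag) is an assumed API."""
    for seed in range(attempts):
        x, ok = decode_once(y, seed)
        if ok:
            return x                  # converged to a valid codeword
    return x                          # best effort after the last attempt
```

The cost is a variable number of decoder runs; the benefit is that independent noise realizations give independent chances of escaping the error patterns that stall a single run.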
Exploiting Degeneracy in Belief Propagation Decoding of Quantum Codes
Quantum information needs to be protected by quantum error-correcting codes due to imperfect physical devices and operations. One would like an efficient, high-performance decoding procedure for the class of quantum stabilizer codes. A potential candidate is Pearl's belief propagation (BP), but its performance suffers from the many short cycles inherent in a quantum stabilizer code, especially highly degenerate codes. A general impression exists that BP is not effective for topological codes. In this paper, we propose a decoding algorithm for quantum codes based on quaternary BP with additional memory effects (called MBP). MBP is like a recursive neural network with inhibitions between neurons (edges with negative weights), which enhance the perception capability of a network. Moreover, MBP exploits the degeneracy of a quantum code so that the most probable error or one of its degenerate errors can be found with high probability. The decoding performance is significantly improved over conventional BP for various quantum codes, including quantum bicycle, hypergraph-product, surface, and toric codes. For MBP on the surface and toric codes over depolarizing errors, we observe error thresholds of 16% and 17.5%, respectively.
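MBP's quaternary update rules with inhibition are specific to the paper; as a loose, clearly substituted illustration of carrying memory across message-passing iterations, here is a binary min-sum decoder with message damping, where `gamma` blends the previous check-to-variable messages into the new ones (all names and values are assumptions):

```python
import numpy as np

def damped_minsum(H, llr, gamma=0.5, max_iters=30):
    """Illustrative only: binary min-sum with damped (memory-carrying)
    check-to-variable messages, standing in for MBP's memory effects.
    H: m x n parity-check matrix; llr: channel log-likelihood ratios."""
    m, n = H.shape
    c2v = np.zeros((m, n))                  # check-to-variable messages
    x = (llr < 0).astype(int)
    for _ in range(max_iters):
        # extrinsic variable-to-check messages on the edges of H
        v2c = H * (llr + c2v.sum(axis=0)) - H * c2v
        new = np.zeros_like(c2v)
        for i in range(m):
            idx = np.flatnonzero(H[i])
            for j in idx:
                others = v2c[i, idx[idx != j]]
                new[i, j] = np.prod(np.sign(others)) * np.min(np.abs(others))
        c2v = gamma * c2v + (1 - gamma) * new   # memory via damping
        x = ((llr + c2v.sum(axis=0)) < 0).astype(int)
        if not (H @ x % 2).any():
            break                            # syndrome cleared
    return x
```

The damping term is only an analogy: MBP's memory enters through inhibitory weights in a quaternary message schedule, but both mechanisms make each iteration depend on the previous messages rather than recomputing them from scratch.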
- …