Improved Decoding of Staircase Codes: The Soft-aided Bit-marking (SABM) Algorithm
Staircase codes (SCCs) are typically decoded using iterative bounded-distance
decoding (BDD) and hard decisions. In this paper, a novel decoding algorithm is
proposed, which partially uses soft information from the channel. The proposed
algorithm is based on marking a certain number of highly reliable and highly
unreliable bits. These marked bits are used to improve the
miscorrection-detection capability of the SCC decoder and the error-correcting
capability of BDD. For SCCs with t-error-correcting
Bose-Chaudhuri-Hocquenghem component codes, our algorithm improves upon
standard SCC decoding by up to ~dB at a bit-error rate (BER) of
. The proposed algorithm is shown to achieve almost half of the gain
achievable by an idealized decoder with this structure. A complexity analysis
based on the number of additional calls to the component BDD decoder shows that
the relative complexity increase is only around at a BER of .
This additional complexity is shown to decrease as the channel quality
improves. Our algorithm is also extended (with minor modifications) to product
codes. The simulation results show that in this case, the algorithm offers
gains of up to ~dB at a BER of .
Comment: 10 pages, 12 figures
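As a rough illustration of the bit-marking idea, the following Python sketch applies it to a single component codeword. The reliability threshold delta, the bdd_decode callback, and the flip-and-retry step are illustrative assumptions, not the exact algorithm from the paper.

```python
import numpy as np

def mark_bits(llrs, delta):
    """Mark bits by channel reliability: bits with |LLR| >= delta are
    treated as highly reliable (HRBs); the rest as highly unreliable.
    The threshold delta is an assumed tuning parameter."""
    return np.abs(llrs) >= delta

def sabm_step(hard_bits, llrs, bdd_decode, delta):
    """One soft-aided BDD attempt on a component codeword.

    hard_bits is a 0/1 integer array; bdd_decode(bits) is assumed to
    return (decoded_bits, success). A correction that flips a highly
    reliable bit is treated as a likely miscorrection and rejected;
    on decoding failure, the least reliable bit is flipped and BDD is
    retried, which can move the word back inside the decoding sphere."""
    hrb = mark_bits(llrs, delta)
    decoded, ok = bdd_decode(hard_bits)
    if ok:
        flipped = decoded != hard_bits
        if np.any(flipped & hrb):      # miscorrection detection
            return hard_bits, False    # reject the suspect correction
        return decoded, True
    # BDD failed: flip the single least reliable bit and retry.
    trial = hard_bits.copy()
    trial[np.argmin(np.abs(llrs))] ^= 1
    decoded, ok = bdd_decode(trial)
    return (decoded, True) if ok else (hard_bits, False)
```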
Deriving the Normalized Min-Sum Algorithm from Cooperative Optimization
The normalized min-sum algorithm can achieve near-optimal performance at
decoding LDPC codes. However, understanding the mathematical principle
underlying the algorithm remains a critical question. Traditionally, the
normalized min-sum algorithm has been regarded as a good approximation to the
sum-product algorithm, the best-known algorithm for decoding LDPC codes and
Turbo codes. This paper offers an alternative way to understand the
normalized min-sum algorithm: it is derived directly from
cooperative optimization, a newly discovered general method for
global/combinatorial optimization. This approach provides another
theoretical basis for the algorithm and offers new insight into its power and
limitations. It also gives us a general framework for designing new decoding
algorithms.
Comment: Accepted by the IEEE Information Theory Workshop, Chengdu, China, 2006
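For concreteness, the check-node update that distinguishes normalized min-sum from plain min-sum is sketched below; the normalization factor alpha = 0.8 is an assumed, commonly used value rather than one taken from this paper.

```python
import numpy as np

def check_node_update(incoming, alpha=0.8):
    """Normalized min-sum check-node update.

    incoming: array of variable-to-check messages (LLRs) on the edges
    of one check node (degree >= 2). Returns, for each edge, the
    check-to-variable message alpha * (product of the other edges'
    signs) * (minimum of the other edges' magnitudes)."""
    signs = np.sign(incoming)
    signs[signs == 0] = 1.0
    mags = np.abs(incoming)
    total_sign = np.prod(signs)
    # The min over the *other* edges equals the overall minimum,
    # except on the edge carrying that minimum, where it is the
    # second minimum.
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]
    out_mag = np.full_like(mags, min1)
    out_mag[order[0]] = min2
    # total_sign * signs[i] is the sign product excluding edge i.
    return alpha * (total_sign * signs) * out_mag
```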
Fast performance estimation of block codes
Importance sampling is used in this paper to address the classical yet important problem of performance estimation of block codes. Simulation distributions that comprise discrete- and continuous-mixture probability densities are motivated and used for this application. These mixtures are employed in concert with the so-called g-method, which is a conditional importance sampling technique that more effectively exploits knowledge of underlying input distributions. For performance estimation, the emphasis is on bit-by-bit maximum a posteriori probability decoding, but message-passing algorithms for certain codes have also been investigated. Considered here are single parity-check codes, multidimensional product codes, and briefly, low-density parity-check codes. Several error rate results are presented for these various codes, together with the performance of the simulation techniques.
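The following Python sketch illustrates the basic importance-sampling principle behind such estimators on a toy case: a mean-shifted Gaussian biasing density for a repetition code under AWGN. The shift value and the repetition-code setting are simplifying assumptions; the paper's discrete- and continuous-mixture densities and g-method are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

def is_block_error_rate(n=5, sigma=0.5, shift=1.0, trials=100_000):
    """Importance-sampling estimate of the block error rate of an
    (n,1) repetition code with BPSK and majority-logic decoding.
    The all-ones codeword (+1 per bit) is transmitted; a block error
    occurs when a majority of received values are negative.

    Biasing: noise is drawn from N(-shift, sigma^2) instead of
    N(0, sigma^2), pushing samples toward the error region; each
    sample is reweighted by the likelihood ratio f/g of the true
    density to the sampling density."""
    noise = rng.normal(-shift, sigma, size=(trials, n))
    y = 1.0 + noise
    errors = (y < 0).sum(axis=1) > n // 2       # majority decoder fails
    # log of f(x)/g(x) = exp((-x^2 + (x+shift)^2) / (2 sigma^2))
    log_w = (-(noise**2) + (noise + shift)**2) / (2 * sigma**2)
    w = np.exp(log_w.sum(axis=1))
    return (errors * w).mean()

print(f"IS estimate of block error rate: {is_block_error_rate():.3e}")
```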
Spatially Coupled Codes and Optical Fiber Communications: An Ideal Match?
In this paper, we highlight the class of spatially coupled codes and discuss
their applicability to long-haul and submarine optical communication systems.
We first demonstrate how to optimize irregular spatially coupled LDPC codes for
their use in optical communications with limited decoding hardware complexity
and then present simulation results with an FPGA-based decoder where we show
that very low error rates can be achieved and that conventional block-based
LDPC codes can be outperformed. In the second part of the paper, we focus on
the combination of spatially coupled LDPC codes with different demodulators and
detectors, which is important for future systems with adaptive modulation and
varying channel characteristics. We demonstrate that SC codes can be employed
as universal, channel-agnostic coding schemes.
Comment: Invited paper to be presented in the special session on "Signal Processing, Coding, and Information Theory for Optical Communications" at IEEE SPAWC 2015
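As a minimal sketch of the coupling idea (not the paper's optimized irregular designs), the Python snippet below assembles a terminated spatially coupled parity-check matrix by repeating two component submatrices H0 and H1 along a chain; the memory-1 split and the chain length are illustrative assumptions.

```python
import numpy as np

def couple(H0, H1, L):
    """Build a terminated spatially coupled parity-check matrix by
    placing the component matrices along a chain of length L:
    position i contributes H0 at block row i and H1 at block row
    i+1, giving the characteristic banded (staircase-like) structure.
    Splitting a block-code matrix H = H0 + H1 (memory 1) is an
    illustrative choice; practical designs use optimized protographs."""
    m, n = H0.shape
    assert H1.shape == (m, n)
    H = np.zeros(((L + 1) * m, L * n), dtype=int)
    for i in range(L):
        H[i * m:(i + 1) * m, i * n:(i + 1) * n] = H0
        H[(i + 1) * m:(i + 2) * m, i * n:(i + 1) * n] = H1
    return H

# Example: couple a small component matrix whose sum H0 + H1 has
# column weight 3, so every coupled variable node has degree 3.
H0 = np.array([[1, 1, 0, 1, 0, 0],
               [0, 1, 1, 0, 1, 0],
               [1, 0, 1, 0, 0, 1]])
H1 = np.array([[0, 0, 1, 0, 1, 1],
               [1, 0, 0, 1, 0, 1],
               [0, 1, 0, 1, 1, 0]])
print(couple(H0, H1, L=4).shape)   # (15, 24): terminated SC chain
```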
Effects of noise on quantum error correction algorithms
It has recently been shown that there are efficient algorithms for quantum
computers to solve certain problems, such as prime factorization, which are
intractable to date on classical computers. The chances for practical
implementation, however, are limited by decoherence, in which the effect of an
external environment causes random errors in the quantum calculation. To combat
this problem, quantum error correction schemes have been proposed, in which a
single quantum bit (qubit) is "encoded" as a state of some larger number of
qubits, chosen to resist particular types of errors. Most such schemes are
vulnerable, however, to errors in the encoding and decoding itself. We examine
two such schemes, in which a single qubit is encoded in a state of several qubits
while subject to dephasing or to arbitrary isotropic noise. Using both
analytical and numerical calculations, we argue that error correction remains
beneficial in the presence of weak noise, and that there is an optimal time
between error correction steps, determined by the strength of the interaction
with the environment and the parameters set by the encoding.
Comment: 26 pages, LaTeX, 4 PS figures embedded. Reprints available from the authors or http://eve.physics.ox.ac.uk/QChome.htm
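The qualitative claim can be checked in the simplest discrete setting: the three-qubit phase-flip code under independent per-qubit flips, where encoding helps only while the per-qubit error probability p stays below 1/2. This i.i.d. flip model is an assumption standing in for the continuous dephasing analysis in the paper.

```python
def bare_error(p):
    """Phase-flip probability of an unencoded qubit."""
    return p

def encoded_error(p):
    """Logical error probability of the three-qubit phase-flip code
    under independent per-qubit flip probability p: majority-vote
    recovery fails when two or three qubits flip, i.e.
    3 p^2 (1 - p) + p^3 = 3 p^2 - 2 p^3."""
    return 3 * p**2 * (1 - p) + p**3

# Encoding wins exactly when 3p^2 - 2p^3 < p, i.e. for p < 1/2.
for p in (0.01, 0.1, 0.3, 0.5):
    gain = bare_error(p) / encoded_error(p)
    print(f"p={p:.2f}: bare={bare_error(p):.4f}, "
          f"encoded={encoded_error(p):.4f}, gain={gain:.1f}x")
```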