Symbol Message Passing Decoding of Nonbinary Low-Density Parity-Check Codes
We present a novel decoding algorithm for q-ary low-density parity-check codes, termed symbol message passing. The proposed algorithm can be seen as a generalization of Gallager B and of the binary message passing algorithm by Lechner et al. to q-ary codes. We derive density evolution equations for the q-ary symmetric channel, compute thresholds for a number of regular low-density parity-check code ensembles, and verify them by Monte Carlo simulations of long channel codes. The proposed algorithm shows performance advantages over an algorithm of comparable complexity from the literature.
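To convey the flavor of such hard-decision message-passing decoders, here is a minimal sketch of the binary special case: a bit-flipping decoder in the spirit of Gallager's algorithms, using a toy (7,4) Hamming parity-check matrix in place of an LDPC code. This is an illustration only, not the paper's q-ary symbol message passing algorithm.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (a toy stand-in for an LDPC code).
H = np.array([
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
])

def bit_flip_decode(y, H, max_iters=20):
    """Hard-decision bit-flipping decoding (the binary, majority-vote idea
    underlying Gallager-style algorithms). Repeatedly flips the bit involved
    in the most unsatisfied checks until the syndrome is zero."""
    x = y.copy()
    for _ in range(max_iters):
        syndrome = H @ x % 2
        if not syndrome.any():
            return x  # all parity checks satisfied
        # count the unsatisfied checks touching each bit
        votes = syndrome @ H
        x[np.argmax(votes)] ^= 1
    return x

codeword = np.zeros(7, dtype=int)   # all-zero codeword
received = codeword.copy()
received[2] ^= 1                    # single bit error
decoded = bit_flip_decode(received, H)
print(decoded)  # -> [0 0 0 0 0 0 0]
```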
Decomposition Methods for Large Scale LP Decoding
When binary linear error-correcting codes are used over symmetric channels, a
relaxed version of the maximum likelihood decoding problem can be stated as a
linear program (LP). This LP decoder can be used to decode error-correcting
codes at bit-error-rates comparable to state-of-the-art belief propagation (BP)
decoders, but with significantly stronger theoretical guarantees. However, LP
decoding when implemented with standard LP solvers does not easily scale to the
block lengths of modern error-correcting codes. In this paper we draw on
decomposition methods from optimization theory, specifically the Alternating
Direction Method of Multipliers (ADMM), to develop efficient distributed
algorithms for LP decoding.
The key enabling technical result is a "two-slice" characterization of the
geometry of the parity polytope, which is the convex hull of all codewords of a
single parity check code. This new characterization simplifies the
representation of points in the polytope. Using this simplification, we develop
an efficient algorithm for Euclidean norm projection onto the parity polytope.
This projection is required by ADMM and allows us to use LP decoding, with all
its theoretical guarantees, to decode large-scale error-correcting codes
efficiently.
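The projection can be sketched in a few lines. The version below uses the common cut-search plus mirrored-simplex construction, which computes the same Euclidean projection onto the parity polytope that the paper's "two-slice" characterization enables; it is a hedged sketch assuming NumPy, not the authors' implementation.

```python
import numpy as np

def project_simplex(u):
    """Euclidean projection of u onto {x : x >= 0, sum(x) = 1} (sort-based)."""
    s = np.sort(u)[::-1]
    css = np.cumsum(s) - 1.0
    idx = np.arange(1, u.size + 1)
    rho = idx[s - css / idx > 0][-1]
    tau = css[rho - 1] / rho
    return np.maximum(u - tau, 0.0)

def project_parity_polytope(v):
    """Euclidean projection onto the parity polytope PP_d, the convex hull of
    all even-weight binary vectors of length d (a sketch, not the paper's own
    two-slice routine, which reaches the same point by a different route)."""
    v = np.asarray(v, dtype=float)
    x = np.clip(v, 0.0, 1.0)
    # The only parity inequality that can be violated is the one defined by
    # the odd-cardinality set S of coordinates closest to 1.
    theta = v >= 0.5
    if theta.sum() % 2 == 0:
        theta[np.argmin(np.abs(v - 0.5))] ^= True  # force |S| odd
    if x[theta].sum() - x[~theta].sum() <= theta.sum() - 1:
        return x  # clipping to the unit cube already lands inside PP_d
    # Mirror the coordinates in S; the violated facet becomes sum(u) = 1,
    # so project onto the simplex and mirror back.
    u = np.where(theta, 1.0 - v, v)
    u = project_simplex(u)
    return np.where(theta, 1.0 - u, u)

print(project_parity_polytope([1.0, 1.0, 1.0]))  # -> approximately [2/3, 2/3, 2/3]
```

The odd-weight all-ones point is pulled onto the nearest face of the polytope, while points that clip to an even-weight vertex are returned unchanged; this projection is exactly the subroutine that each ADMM iteration of the decoder invokes once per check node.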
We present numerical results for LDPC codes of block length greater than 1000.
The waterfall region of LP decoding is seen to initiate at a slightly higher
signal-to-noise ratio than for sum-product BP; however, no error floor is
observed for LP decoding, in contrast to BP. Our implementation of LP decoding
using ADMM executes as fast as our baseline sum-product BP decoder, is fully
parallelizable, and can be seen to implement a type of message passing with a
particularly simple schedule.
Comment: 35 pages, 11 figures. An early version of this work appeared at the
49th Annual Allerton Conference, September 2011. This version to appear in
IEEE Transactions on Information Theory.
Refined Reliability Combining for Binary Message Passing Decoding of Product Codes
We propose a novel soft-aided iterative decoding algorithm for product codes
(PCs). The proposed algorithm, named iterative bounded distance decoding with
combined reliability (iBDD-CR), enhances the conventional iterative bounded
distance decoding (iBDD) of PCs by exploiting some level of soft information.
In particular, iBDD-CR can be seen as a modification of iBDD where the hard
decisions of the row and column decoders are made based on a reliability
estimate of the BDD outputs. The reliability estimates are derived using
extrinsic message passing for generalized low-density parity-check (GLDPC)
ensembles, which encompass PCs. We perform a density evolution analysis of
iBDD-CR for transmission over the additive white Gaussian noise channel for the
GLDPC ensemble. We consider both binary transmission and bit-interleaved coded
modulation with quadrature amplitude modulation. We show that iBDD-CR achieves
performance gains up to dB compared to iBDD with the same internal
decoder data flow. This makes the algorithm an attractive solution for very
high-throughput applications such as fiber-optic communications.
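Plain iBDD, the baseline that iBDD-CR refines, can be sketched with a toy Hamming-by-Hamming product code: alternate bounded distance decoding of all rows and all columns of the received array. NumPy is assumed, and the reliability-combining step that distinguishes iBDD-CR is deliberately omitted.

```python
import numpy as np

# (7,4) Hamming parity-check matrix; its BDD corrects one error per row/column.
H = np.array([
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
])

def bdd(word):
    """Bounded distance decoding of the (7,4) Hamming code via syndrome lookup:
    every nonzero syndrome matches exactly one column of H."""
    syn = H @ word % 2
    if not syn.any():
        return word
    out = word.copy()
    out[np.all(H.T == syn, axis=1)] ^= 1
    return out

def ibdd(R, max_iters=5):
    """Hard-decision iBDD for a Hamming x Hamming product code: alternate BDD
    on all rows and all columns. (iBDD-CR additionally bases each hard decision
    on a reliability estimate of the BDD output; that refinement is omitted.)"""
    X = R.copy()
    for _ in range(max_iters):
        X = np.array([bdd(row) for row in X])      # row decoders
        X = np.array([bdd(col) for col in X.T]).T  # column decoders
    return X

C = np.zeros((7, 7), dtype=int)      # all-zero product codeword
R = C.copy()
R[0, 0] ^= 1; R[0, 1] ^= 1           # two errors in one row: the row decoder
                                     # alone miscorrects, the column pass cleans up
print((ibdd(R) == C).all())  # -> True
```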
Parallel Subspace Subcodes of Reed-Solomon Codes for Magnetic Recording Channels
Read channel architectures based on a single low-density parity-check (LDPC) code are being considered for the next generation of hard disk drives. However, LDPC-only solutions suffer from the error floor problem, which may compromise reliability if not handled properly. Concatenated architectures using an LDPC code plus a Reed-Solomon (RS) code lower the error floor at high signal-to-noise ratio (SNR) at the price of a reduced coding gain and a less sharp waterfall region at lower SNR. This architecture fails to deal with the error floor problem when the number of errors caused by multiple dominant trapping sets is beyond the error correction capability of the outer RS code. The ultimate goal of a sharper waterfall in the low-SNR region and a lower error floor at high SNR can be approached by introducing a parallel subspace subcode RS (SSRS) code (PSSRS) to replace the conventional RS code. In this new LDPC+PSSRS system, the PSSRS code can help localize and partially destroy the most dominant trapping sets. With the proposed iterative parallel local decoding algorithm, the LDPC decoder can correct the remaining errors by itself. The contributions of this work are: 1) we propose a PSSRS code with a parallel local SSRS structure and a three-level decoding architecture, which enables a trade-off between performance and complexity; 2) we propose a new LDPC+PSSRS system with a new iterative parallel local decoding algorithm that achieves a gain of more than 0.5 dB over the conventional two-level system, whose performance for 4K-byte sectors is close to that of multiple LDPC-only architectures for perpendicular magnetic recording channels; 3) we develop a new decoding concept that changes the major role of the RS code from error correction to that of a "partial" trapping set destroyer.
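The failure mode described above follows from the outer code's bounded correction radius: an (n, k) RS code corrects at most t = (n - k) // 2 symbol errors, so trapping-set error bursts beyond t symbols defeat it. A one-line sketch of that arithmetic (the parameters below are illustrative, not the thesis's actual outer code):

```python
def rs_correction_capability(n, k):
    """Symbol-error correction capability t = floor((n - k) / 2) of an (n, k)
    Reed-Solomon code; the outer decoder fails once residual LDPC errors
    exceed t symbols."""
    return (n - k) // 2

# Hypothetical outer-code parameters, for illustration only:
print(rs_correction_capability(255, 239))  # -> 8
```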
Device-Independent Quantum Key Distribution
Cryptographic key exchange protocols traditionally rely on computational
conjectures such as the hardness of prime factorisation to provide security
against eavesdropping attacks. Remarkably, quantum key distribution protocols
like the one proposed by Bennett and Brassard provide information-theoretic
security against such attacks, a much stronger form of security unreachable by
classical means. However, quantum protocols realised so far are subject to a
new class of attacks exploiting implementation defects in the physical devices
involved, as demonstrated in numerous ingenious experiments. Following the
pioneering work of Ekert proposing the use of entanglement to bound an
adversary's information from Bell's theorem, we present here the experimental
realisation of a complete quantum key distribution protocol immune to these
vulnerabilities. We achieve this by combining theoretical developments on
finite-statistics analysis, error correction, and privacy amplification, with
an event-ready scheme enabling the rapid generation of high-fidelity
entanglement between two trapped-ion qubits connected by an optical fibre link.
The secrecy of our key is guaranteed device-independently: it is based on the
validity of quantum theory, and certified by measurement statistics observed
during the experiment. Our result shows that provably secure cryptography with
real-world devices is possible, and paves the way for further quantum
information applications based on the device-independence principle.
Comment: 5+1 pages in main text and methods with 4 figures and 1 table; 37
pages of supplementary material.
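The paper's security argument rests on a finite-statistics analysis that is well beyond a code snippet, but the basic mechanism by which measurement statistics certify secrecy can be illustrated with the textbook asymptotic bound: compute the CHSH value S from the four measured correlators, and plug it into the asymptotic device-independent key-rate bound of Acín et al. (2007). This is a standard illustration, not the protocol's actual finite-size analysis.

```python
from math import log2, sqrt

def h(p):
    """Binary entropy in bits."""
    p = min(max(p, 0.0), 1.0)  # guard against tiny floating-point overshoot
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def chsh(E):
    """CHSH value S = E(0,0) + E(0,1) + E(1,0) - E(1,1) from the four
    measured correlators between Alice's and Bob's settings."""
    return E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)]

def asymptotic_di_rate(S, Q):
    """Asymptotic device-independent key-rate bound of Acín et al. (2007):
    r >= 1 - h(Q) - h((1 + sqrt((S/2)**2 - 1)) / 2), where Q is the QBER.
    Positive only when S > 2, i.e. when the statistics violate Bell's bound."""
    if S <= 2:
        return 0.0
    return max(0.0, 1 - h(Q) - h((1 + sqrt((S / 2) ** 2 - 1)) / 2))

# Ideal statistics: maximal CHSH violation S = 2*sqrt(2) and zero error rate.
v = sqrt(2) / 2
E = {(0, 0): v, (0, 1): v, (1, 0): v, (1, 1): -v}
S = chsh(E)  # ~2.828
print(round(asymptotic_di_rate(S, 0.0), 6))  # -> 1.0
```

The point mirrored in the abstract: the rate depends only on the observed statistics (S, Q), not on any model of the devices, which is what "device-independent" means here.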