Quantized Iterative Message Passing Decoders with Low Error Floor for LDPC Codes
The error floor phenomenon observed with LDPC codes and their graph-based,
iterative, message-passing (MP) decoders is commonly attributed to the
existence of error-prone substructures -- variously referred to as near
codewords, trapping sets, absorbing sets, or pseudocodewords -- in a Tanner
graph representation of the code. Many approaches have been proposed to lower
the error floor by designing new LDPC codes with fewer such substructures or by
modifying the decoding algorithm. Using a theoretical analysis of iterative MP
decoding in an idealized trapping set scenario, we show that a contributor to
the error floors observed in the literature may be the imprecise implementation
of decoding algorithms and, in particular, the message quantization rules used.
We then propose a new quantization method -- (q+1)-bit quasi-uniform
quantization -- that efficiently increases the dynamic range of messages,
thereby overcoming a limitation of conventional quantization schemes. Finally,
we use the quasi-uniform quantizer to decode several LDPC codes that suffer
from high error floors with traditional fixed-point decoder implementations.
The performance simulation results provide evidence that the proposed
quantization scheme can, for a wide variety of codes, significantly lower error
floors with minimal increase in decoder complexity.
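As a rough sketch of the idea (not the paper's exact quantizer design), one way to obtain quasi-uniform levels is to keep a uniform step inside a nominal range and let the levels grow geometrically outside it, so a small bit budget still covers the very large message magnitudes that build up over many iterations. The function name and level placement below are illustrative assumptions:

```python
import math

def quasi_uniform_quantize(msg, q=4, delta=0.5, growth=2.0):
    """Quantize a log-likelihood-ratio message with roughly (q+1) bits.

    Inner region: 2**q uniformly spaced magnitude levels with step `delta`.
    Outer region: levels grow geometrically by `growth`, extending the
    dynamic range far beyond uniform quantization with the same bit budget.
    Illustrative level placement only; the paper's exact rule may differ.
    """
    inner_max = (2 ** q) * delta
    mag = abs(msg)
    if mag < inner_max:                      # uniform part
        level = round(mag / delta) * delta
    else:                                    # quasi-uniform (geometric) part
        level = inner_max
        while level * growth <= mag:
            level *= growth
    return math.copysign(level, msg)
```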
The approximate maximum-likelihood certificate
A new property, the approximate maximum-likelihood certificate (AMLC), which
relies on the linear programming (LP) decoder, is introduced. When using
the belief propagation decoder, this property is a measure of how close the
decoded codeword is to the LP solution. Using upper bounding techniques, it is
demonstrated that the conditional frame error probability given that the AMLC
holds is, with some degree of confidence, below a threshold. In channels with
low noise, this threshold is several orders of magnitude lower than the
simulated frame error rate, and our bound holds with a very high degree of
confidence. In contrast, showing this error performance by simulation would
require very long Monte Carlo runs. When the AMLC holds, our approach thus
provides the decoder with extra error detection capability, which is especially
important in applications requiring high data integrity.
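As a generic illustration of how such a confidence statement can be made (the paper's own bounding technique may differ), suppose the AMLC held in N simulated frames and no frame error was observed among them; then with confidence 1 - α the conditional frame error probability satisfies

\[
\Pr\{\text{frame error}\mid \text{AMLC holds}\} \;\le\; 1-\alpha^{1/N}\;\approx\;\frac{\ln(1/\alpha)}{N},
\]

so even a moderate number of error-free AMLC frames certifies an error rate well below what direct error counting could confirm.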
Distributed Arithmetic Coding for the Asymmetric Slepian-Wolf problem
Distributed source coding schemes are typically based on the use of channel
codes as source codes. In this paper we propose a new paradigm, termed
"distributed arithmetic coding", which exploits the fact that arithmetic codes
are good source as well as channel codes. In particular, we propose a
distributed binary arithmetic coder for Slepian-Wolf coding with decoder side
information, along with a soft joint decoder. The proposed scheme provides
several advantages over existing Slepian-Wolf coders, especially its good
performance at small block lengths, and the ability to incorporate arbitrary
source models in the encoding process, e.g. context-based statistical models.
We have compared the performance of distributed arithmetic coding with turbo
codes and low-density parity-check codes, and found that the proposed approach
has very competitive performance.
Comment: Submitted to IEEE Transactions on Signal Processing, Nov. 2007.
Revised version accepted with minor revision.
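The central mechanism can be sketched as a toy binary encoder with enlarged, overlapping sub-intervals (the usual distributed-arithmetic-coding idea). The rate exponent `gamma` and the termination are illustrative simplifications; the paper's coder additionally handles interval renormalization and arbitrary source models:

```python
def dac_encode(bits, p0, gamma=0.5):
    """Toy distributed arithmetic encoder for a binary source.

    Sub-interval widths are enlarged to p0**gamma and (1-p0)**gamma, so
    their sum exceeds 1 and the intervals overlap.  The code rate then
    falls below the source entropy, and the decoder must use correlated
    side information to resolve the resulting ambiguity.
    """
    q0, q1 = p0 ** gamma, (1.0 - p0) ** gamma
    low, high = 0.0, 1.0
    for b in bits:
        width = high - low
        if b == 0:
            high = low + width * q0          # lower sub-interval
        else:
            low = high - width * q1          # upper sub-interval (overlaps)
    return (low + high) / 2.0                # any point inside tags the sequence
```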
Serial Concatenation of RS Codes with Kite Codes: Performance Analysis, Iterative Decoding and Design
In this paper, we propose a new ensemble of rateless forward error correction
(FEC) codes. The proposed codes are serially concatenated codes with
Reed-Solomon (RS) codes as outer codes and Kite codes as inner codes. The inner
Kite codes are a special class of prefix rateless low-density parity-check
(PRLDPC) codes, which can generate potentially infinite (or as many as
required) random-like parity-check bits. The employment of RS codes as outer
codes not only lowers error floors but also ensures (with high
probability) the correctness of successfully decoded codewords. In addition to
the conventional two-stage decoding, iterative decoding between the inner code
and the outer code is also implemented to further improve the performance. The
performance of the Kite codes under maximum likelihood (ML) decoding is
analyzed by applying a refined Divsalar bound to the ensemble weight
enumerating functions (WEFs). We propose a simulation-based optimization method
as well as density evolution (DE) using Gaussian approximations (GA) to design
the Kite codes. Numerical results along with semi-analytic bounds show that the
proposed codes can approach Shannon limits with extremely low error-floors. It
is also shown by simulation that the proposed codes perform well over a wide
range of signal-to-noise ratios (SNRs).
Comment: 34 pages, 15 figures.
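For background, the density-evolution recursion with Gaussian approximation for a $(d_v, d_c)$-regular LDPC ensemble on the binary-input AWGN channel tracks only the mean of the check-to-variable messages; the Kite codes here are irregular and rateless, so the recursion actually used in the paper generalizes this regular-ensemble form:

\[
m_u^{(\ell)} \;=\; \phi^{-1}\!\Bigl(1-\bigl[1-\phi\bigl(m_{\mathrm{ch}}+(d_v-1)\,m_u^{(\ell-1)}\bigr)\bigr]^{d_c-1}\Bigr),
\qquad
\phi(x)\;=\;1-\frac{1}{\sqrt{4\pi x}}\int_{-\infty}^{\infty}\tanh\!\frac{u}{2}\,
e^{-\frac{(u-x)^2}{4x}}\,du,
\]

with $m_{\mathrm{ch}} = 2/\sigma^2$ the mean of the channel log-likelihood ratios and $m_u^{(0)} = 0$.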
Controlling the Error Floor in LDPC Decoding
The error floor of LDPC codes is revisited as an effect of dynamic message behavior
in the so-called absorption sets of the code. It is shown that if the signal
growth in the absorption sets is properly balanced by the growth of
set-external messages, the error floor can be lowered to essentially
arbitrarily low levels. Importance sampling techniques are discussed and used
to verify the analysis, as well as to study the impact of iterations and
message quantization on the code performance in the ultra-low BER (error floor)
regime.
Comment: 11 pages, 7 figures. Submitted to IEEE Trans. Commun.
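As a generic illustration of the importance-sampling principle behind such ultra-low-BER verification (the paper biases the noise toward specific absorption sets, which is not reproduced here), the scalar toy example below shifts the sampling density so that rare events become frequent and re-weights each hit by its likelihood ratio:

```python
import math, random

def is_tail_estimate(threshold, sigma, shift, n_samples=100_000):
    """Estimate P(X > threshold) for X ~ N(0, sigma^2) by importance sampling.

    Samples are drawn from the shifted density N(shift, sigma^2); each hit is
    weighted by the likelihood ratio f(x)/g(x), keeping the estimate unbiased.
    The same mean-shifting idea underlies error-floor simulation, with the
    shift aimed at a dominant absorption set instead of a scalar threshold.
    """
    total = 0.0
    for _ in range(n_samples):
        x = random.gauss(shift, sigma)
        if x > threshold:
            # likelihood ratio of N(0, sigma^2) to N(shift, sigma^2)
            lr = math.exp((shift * shift - 2.0 * shift * x) / (2.0 * sigma * sigma))
            total += lr
    return total / n_samples
```

When the shift is placed near the rare-event region, far fewer samples are needed than with plain Monte Carlo to reach a given relative accuracy.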
Block Markov Superposition Transmission of BCH Codes with Iterative Erasures-and-Errors Decoders
In this paper, we present the block Markov superposition transmission of BCH
(BMST-BCH) codes, which can be constructed to obtain a very low error floor. To
reduce the implementation complexity, we design a low complexity iterative
sliding-window decoding algorithm, in which only binary and/or erasure messages
are processed and exchanged between processing units. The error floor can be
predicted by a genie-aided lower bound, while the waterfall performance can be
analyzed by the density evolution method. To evaluate the error floor of the
constructed BMST-BCH codes at a very low bit error rate (BER) region, we
propose a fast simulation approach. Numerical results show that, at a target
BER of , the hard-decision decoding of the BMST-BCH codes with
overhead can achieve a net coding gain (NCG) of dB. Furthermore,
the soft-decision decoding can yield an NCG of dB. The construction of
BMST-BCH codes is flexible, allowing latency to be traded off against
performance at all overheads of interest, and the codes may find applications
in optical transport networks as an attractive candidate.
Comment: Submitted to IEEE Transactions on Communications.
Rolex: Resilience-Oriented Language Extensions for Extreme-Scale Systems
Future exascale high-performance computing (HPC) systems will be constructed
from VLSI devices that will be less reliable than those used today, and faults
will become the norm, not the exception. This will pose significant problems
for system designers and programmers, who for half a century have enjoyed an
execution model that assumed correct behavior by the underlying computing
system. The mean time to failure (MTTF) of a system scales inversely with the
number of its components, so faults and the resulting system-level failures
will become more frequent as systems grow in processor cores and memory
modules. However, not every detected error need cause a catastrophic failure.
Many HPC applications are inherently fault
resilient. Yet it is the application programmers who have this knowledge but
lack mechanisms to convey it to the system.
In this paper, we present new Resilience Oriented Language Extensions (Rolex)
which facilitate the incorporation of fault resilience as an intrinsic property
of the application code. We describe the syntax and semantics of the language
extensions as well as the implementation of the supporting compiler
infrastructure and runtime system. Our experiments show that an approach that
leverages the programmer's insight to reason about the context and significance
of faults to the application outcome significantly improves the probability
that an application runs to a successful conclusion.
Linear code-based vector quantization for independent random variables
In this paper we analyze the rate-distortion function R(D) achievable using
linear codes over GF(q), where q is a prime number.
Comment: 16 pages, 3 figures.
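For context, the classical benchmark for such a construction, assuming a uniform source over GF(q) and Hamming distortion (the paper defines its own precise setting), is

\[
R(D) \;=\; \log_2 q \;-\; H_2(D) \;-\; D\log_2(q-1),
\qquad 0 \le D \le 1-\tfrac{1}{q},
\]

where $H_2(\cdot)$ is the binary entropy function and $R(D) = 0$ for larger distortions.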
Exhausting Error-Prone Patterns in LDPC Codes
It is proved in this work that exhaustively determining bad patterns in
arbitrary, finite low-density parity-check (LDPC) codes, including stopping
sets for binary erasure channels (BECs) and trapping sets (also known as
near-codewords) for general memoryless symmetric channels, is an NP-complete
problem, and efficient algorithms are provided for codes of practical short
lengths (n ≈ 500). By exploiting the sparse connectivity of LDPC codes, the
stopping sets of size <= 13 and the trapping sets of size <= 11 can be
efficiently exhaustively determined for the first time, and the resulting
exhaustive list is of great importance for code analysis and finite code
optimization. The featured tree-based narrowing search distinguishes this
algorithm from existing ones, which employ non-exhaustive methods. One
important byproduct is a pair of upper bounds on the bit-error rate (BER) and
frame-error rate (FER) iterative decoding performance of arbitrary codes over
BECs that can be evaluated for any value of the erasure probability, including
both the waterfall and the error floor regions. The tightness of these upper
bounds and the exhaustion capability of the proposed algorithm are proved when
combining an optimal leaf-finding module with the tree-based search. These
upper bounds also provide a worst-case-performance guarantee which is crucial
to optimizing LDPC codes for extremely low error rate applications, e.g.,
optical/satellite communications. Extensive numerical experiments are conducted
that include both randomly and algebraically constructed LDPC codes, the
results of which demonstrate the superior efficiency of the exhaustion
algorithm and its significant value for finite-length code optimization.
Comment: Submitted to IEEE Trans. Information Theory.
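For concreteness, the property being exhausted is easy to check for a single candidate set; the sketch below (a hypothetical helper, not the paper's algorithm) verifies the stopping-set condition, whereas the paper's contribution is the tree-based search that exhaustively enumerates all such sets up to a given size:

```python
def is_stopping_set(H, subset):
    """Return True if `subset` of variable-node indices is a stopping set.

    H is a parity-check matrix given as a list of rows with 0/1 entries.
    A stopping set is a set of variable nodes such that no check node is
    connected to it exactly once; on a BEC, erasing exactly these positions
    stalls iterative decoding.
    """
    for row in H:
        if sum(row[v] for v in subset) == 1:
            return False
    return True

# Tiny example: for H below, the set {1, 3} touches each check twice and is
# a stopping set, while {0} touches the first check exactly once and is not.
H = [[1, 1, 0, 1], [0, 1, 1, 1]]
assert is_stopping_set(H, {1, 3}) and not is_stopping_set(H, {0})
```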
Joint Decoding of LDPC Codes and Finite-State Channels via Linear-Programming
This paper considers the joint-decoding (JD) problem for finite-state
channels (FSCs) and low-density parity-check (LDPC) codes. In the first part,
the linear-programming (LP) decoder for binary linear codes is extended to JD
of binary-input FSCs. In particular, we provide a rigorous definition of LP
joint-decoding pseudo-codewords (JD-PCWs) that enables evaluation of the
pairwise error probability between codewords and JD-PCWs in AWGN. This leads
naturally to a provable upper bound on decoder failure probability. If the
channel is a finite-state intersymbol interference channel, then the joint LP
decoder also has the maximum-likelihood (ML) certificate property and all
integer-valued solutions are codewords. In this case, the performance loss
relative to ML decoding can be explained completely by fractional-valued
JD-PCWs. After deriving these results, we discovered some elements were
equivalent to earlier work by Flanagan on LP receivers.
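For reference, the baseline LP decoder of Feldman et al. that is extended here solves, for channel log-likelihood ratios $\gamma_i = \log\bigl(P(y_i\mid x_i=0)/P(y_i\mid x_i=1)\bigr)$,

\[
\hat{\mathbf{x}} \;=\; \arg\min_{\mathbf{x}\,\in\,\mathcal{P}(H)}\;\sum_i \gamma_i x_i,
\]

where $\mathcal{P}(H)$ is the fundamental polytope obtained by relaxing each parity check to its local codeword polytope; fractional vertices of $\mathcal{P}(H)$ are the pseudocodewords. The joint decoder described above augments this relaxation with constraints derived from the trellis of the finite-state channel (not reproduced here).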
In the second part, we develop an efficient iterative solver for the joint LP
decoder discussed in the first part. In particular, we extend the approach of
iterative approximate LP decoding, proposed by Vontobel and Koetter and
analyzed by Burshtein, to this problem. By taking advantage of the dual-domain
structure of the JD-LP, we obtain a convergent iterative algorithm for joint LP
decoding whose structure is similar to BCJR-based turbo equalization (TE). The
result is a joint iterative decoder whose per-iteration complexity is similar
to that of TE but whose performance is similar to that of joint LP decoding.
The main advantage of this decoder is that it appears to provide the
predictability of joint LP decoding and superior performance with the
computational complexity of TE. One expected application is coding for magnetic
storage where the required block-error rate is extremely low and system
performance is difficult to verify by simulation.
Comment: Accepted to IEEE Journal of Selected Topics in Signal Processing
(Special Issue on Soft Detection for Wireless Transmission).