Structural Design and Analysis of Low-Density Parity-Check Codes and Systematic Repeat-Accumulate Codes
The discovery of two fundamental error-correcting code families, known as turbo codes and low-density parity-check (LDPC) codes, has led to a revolution in coding theory and to a paradigm shift from traditional algebraic codes towards modern graph-based codes that can be decoded by iterative message passing algorithms.
Since then, the development of powerful LDPC and turbo-like codes has been a focal point of research.
Besides the classical domain of randomly constructed codes, an alternative and competitive line of research is concerned with highly structured LDPC and turbo-like codes based on combinatorial designs.
Such codes are typically characterized by high code rates even at small to moderate code lengths, together with good code properties such as the avoidance of harmful 4-cycles in the code's factor graph.
Furthermore, their structure can usually be exploited for an efficient implementation; in particular, they can be encoded with low complexity, in contrast to random-like codes. Hence, these codes are suitable for high-speed applications such as magnetic recording or optical communication.
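The 4-cycle property mentioned above is easy to check mechanically: a Tanner graph contains a 4-cycle exactly when two rows (equivalently, two columns) of the parity-check matrix share two or more positions. A minimal sketch in Python; the matrices are illustrative and not taken from the thesis:

```python
import numpy as np

def has_4cycle(H):
    """A Tanner graph has a length-4 cycle iff some pair of rows of H
    shares at least two columns (the same holds for pairs of columns)."""
    overlap = H @ H.T                  # overlap[i, j] = # common columns of rows i, j
    np.fill_diagonal(overlap, 0)
    return bool((overlap >= 2).any())

H_bad = np.array([[1, 1, 0, 0],
                  [1, 1, 1, 0],
                  [0, 0, 1, 1]])
H_good = np.array([[1, 1, 0, 0],
                   [1, 0, 1, 0],
                   [0, 1, 0, 1]])
print(has_4cycle(H_bad))    # True: rows 0 and 1 share columns 0 and 1
print(has_4cycle(H_good))   # False: every pair of rows overlaps in at most one column
```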
This thesis greatly contributes to the field of structured LDPC codes and systematic repeat-accumulate (sRA) codes as a subclass of turbo-like codes by presenting new combinatorial construction techniques and algebraic methods for an improved code design.
More specifically, novel and infinite families of high-rate structured LDPC codes and sRA codes are presented based on balanced incomplete block designs (BIBDs), which form a subclass of combinatorial designs. Besides showing excellent error-correcting capabilities under iterative decoding, these codes can be implemented efficiently, since their inner structure enables low-complexity encoding and accelerated decoding algorithms.
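As a small illustration of the BIBD idea (using the Fano plane, the classical 2-(7,3,1) design, rather than the larger designs treated in the thesis), the block-point incidence matrix can serve as a parity-check matrix, and the design parameter λ = 1 directly rules out 4-cycles:

```python
import numpy as np
from itertools import combinations

# The Fano plane, a 2-(7,3,1) BIBD: the base block {0,1,3} is a perfect
# difference set mod 7, so its cyclic shifts give all seven blocks.
base = (0, 1, 3)
blocks = [tuple(sorted((b + i) % 7 for b in base)) for i in range(7)]

# Point-block incidence matrix used as a parity-check matrix:
# rows = blocks (checks), columns = points (code bits).
H = np.zeros((7, 7), dtype=int)
for i, blk in enumerate(blocks):
    H[i, list(blk)] = 1

# lambda = 1: every pair of points lies in exactly one block, so any two
# columns of H overlap in exactly one row -- hence no 4-cycles.
for x, y in combinations(range(7), 2):
    assert sum(x in blk and y in blk for blk in blocks) == 1

print(H.sum(axis=1))   # [3 3 3 3 3 3 3]: every check involves k = 3 bits
print(H.sum(axis=0))   # [3 3 3 3 3 3 3]: every bit appears in r = 3 checks
```

The constant row and column weights are the design parameters k and r; the regular, cyclic structure is what makes low-complexity encoding possible for such codes.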
A further infinite series of structured LDPC codes is presented based on the notion of transversal designs, which form another subclass of combinatorial designs. Properly configured, these codes exhibit excellent decoding performance under iterative decoding, in particular with very low error-floors.
The approach for lowering these error-floors is threefold. First, a thorough analysis of the decoding failures is carried out, resulting in an extensive classification of so-called stopping sets and absorbing sets. These combinatorial entities are known to be the main cause of decoding failures in the error-floor region over the binary erasure channel (BEC) and additive white Gaussian noise (AWGN) channel, respectively. Second, the specific code structures are exploited in order to calculate conditions for the avoidance of the most harmful stopping and absorbing sets. Third, powerful design strategies are derived for the identification of those code instances with the best error-floor performances.
The resulting codes can additionally be encoded with low complexity and thus are ideally suited for practical high-speed applications.
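The role of stopping sets over the BEC can be made concrete: iterative erasure decoding repeatedly resolves any check with a single erased neighbour, and it stalls exactly when the remaining erasures contain a stopping set (a variable-node subset every neighbouring check touches at least twice). A minimal sketch with a deliberately tiny, illustrative matrix:

```python
import numpy as np

def peel(H, erased):
    """Iterative (peeling) BEC decoder: a check with exactly one erased
    neighbour resolves that bit.  Returns the set of still-erased bits;
    a non-empty result means the erasures contained a stopping set."""
    erased = set(erased)
    progress = True
    while progress and erased:
        progress = False
        for row in H:
            touched = [j for j in np.flatnonzero(row) if j in erased]
            if len(touched) == 1:      # degree-1 check: solve the lone erasure
                erased.remove(touched[0])
                progress = True
    return erased

H = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 1]])
print(peel(H, {2}))       # set(): a lone erasure on bit 2 is recovered
print(peel(H, {0, 1}))    # {0, 1}: bits 0 and 1 form a stopping set --
                          # every check sees either none or two of them
```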
Further investigations are carried out on the infinite family of structured LDPC codes based on finite geometries. It is known that these codes perform very well under iterative decoding and that their encoding can be achieved with low complexity. By combining the latest findings in the fields of finite geometries and combinatorial designs, we gain new theoretical insights into the decoding failures of such codes under iterative decoding. These examinations finally help to identify the geometric codes with the most beneficial error-correcting capabilities over the BEC.
Erasure Codes with a Banded Structure for Hybrid Iterative-ML Decoding
This paper presents new FEC codes for the erasure channel, LDPC-Band, that
have been designed so as to optimize a hybrid iterative-Maximum Likelihood (ML)
decoding. Indeed, these codes simultaneously feature a sparse parity-check
matrix, which allows efficient iterative LDPC decoding, and a
generator matrix with a band structure, which allows fast ML decoding on the
erasure channel. The combination of these two decoding algorithms leads to
erasure codes achieving a very good trade-off between complexity and erasure
correction capability.
Comment: 5 pages
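The hybrid scheme can be sketched generically (this toy uses a (7,4) Hamming parity-check matrix, not the paper's banded LDPC-Band construction): run cheap iterative peeling first, and fall back to Gaussian elimination over GF(2) on the residual erasures, which is exactly ML decoding on the erasure channel, only when peeling stalls.

```python
import numpy as np

def hybrid_erasure_decode(H, y):
    """Hybrid iterative-ML erasure decoding sketch.  y holds known bits
    (0/1) and None for erasures.  Returns the word, or None if even ML fails."""
    word, H = list(y), np.asarray(H)

    # Stage 1: peeling -- a check with exactly one erased neighbour solves it.
    progress = True
    while progress:
        progress = False
        for row in H:
            nz = np.flatnonzero(row)
            gaps = [j for j in nz if word[j] is None]
            if len(gaps) == 1:
                word[gaps[0]] = sum(word[k] for k in nz if k != gaps[0]) % 2
                progress = True

    erased = [j for j, v in enumerate(word) if v is None]
    if not erased:
        return word

    # Stage 2: ML on the BEC = solve H_E x_E = s over GF(2), where s is the
    # syndrome contribution of the known bits.
    A = H[:, erased].copy() % 2
    s = np.array([sum(word[k] for k in np.flatnonzero(row)
                      if word[k] is not None) % 2 for row in H], dtype=int)
    m, n = A.shape
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, m) if A[i, c]), None)
        if piv is None:
            return None             # rank deficiency: ML cannot resolve it either
        A[[r, piv]] = A[[piv, r]]
        s[[r, piv]] = s[[piv, r]]
        for i in range(m):
            if i != r and A[i, c]:
                A[i] = (A[i] + A[r]) % 2
                s[i] = (s[i] + s[r]) % 2
        r += 1
    for j, col in enumerate(erased):   # pivot for column j sits in row j
        word[col] = int(s[j])
    return word

# Erasing bits {0, 1, 2} of the codeword (1,1,0,0,0,0,1) is a stopping set
# (peeling stalls), yet ML recovers it: the erased columns are full rank.
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
y = [None, None, None, 0, 0, 0, 1]
print(hybrid_erasure_decode(H, y))    # [1, 1, 0, 0, 0, 0, 1]
```

The paper's banded generator matrix is what keeps stage 2 fast; the generic elimination above is only meant to show the division of labour between the two stages.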
Bilayer Protograph Codes for Half-Duplex Relay Channels
Despite encouraging advances in the design of relay codes, several important
challenges remain. Many of the existing LDPC relay codes are tightly optimized
for fixed channel conditions and not easily adapted without extensive
re-optimization of the code. Some have high encoding complexity and some need
long block lengths to approach capacity. This paper presents a high-performance
protograph-based LDPC coding scheme for the half-duplex relay channel that
addresses simultaneously several important issues: structured coding that
permits easy design, low encoding complexity, embedded structure for convenient
adaptation to various channel conditions, and performance close to capacity
with a reasonable block length. The application of the coding structure to
multi-relay networks is demonstrated. Finally, a simple new methodology for
evaluating the end-to-end error performance of relay coding systems is
developed and used to highlight the performance of the proposed codes.
Comment: Accepted in IEEE Trans. Wireless Commun.
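Protograph codes such as these are obtained from a small base matrix by the standard copy-and-permute (lifting) operation. A generic sketch with a toy base matrix; the bilayer protographs in the paper are larger and structured for the relay setting:

```python
import numpy as np

def lift(B, L, rng):
    """Copy-and-permute lifting: every 1 in the base (protograph) matrix B
    becomes a random L x L permutation matrix, every 0 an L x L zero block."""
    m, n = B.shape
    H = np.zeros((m * L, n * L), dtype=int)
    for i in range(m):
        for j in range(n):
            if B[i, j]:
                H[i*L:(i+1)*L, j*L:(j+1)*L] = np.eye(L, dtype=int)[rng.permutation(L)]
    return H

rng = np.random.default_rng(0)
B = np.array([[1, 1, 1, 0],        # toy base matrix, not the paper's protograph
              [0, 1, 1, 1]])
H = lift(B, L=4, rng=rng)
print(H.shape)                      # (8, 16)
print(H.sum(axis=1))                # all 3: lifted checks inherit protograph degrees
```

Because every lifted node inherits its protograph node's degrees, ensemble properties optimized on the small base matrix carry over to arbitrarily long lifted codes, which is what makes the design step tractable.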
On a Low-Rate TLDPC Code Ensemble and the Necessary Condition on the Linear Minimum Distance for Sparse-Graph Codes
This paper addresses the issue of design of low-rate sparse-graph codes with
linear minimum distance in the blocklength. First, we define a necessary
condition which needs to be satisfied when the linear minimum distance is to be
ensured. The condition is formulated in terms of degree-1 and degree-2 variable
nodes and of low-weight codewords of the underlying code, and it generalizes
results known for turbo codes [8] and LDPC codes. Then, we present a new
ensemble of low-rate codes, which itself is a subclass of TLDPC codes [4], [5],
and which is designed under this necessary condition. The asymptotic analysis
of the ensemble shows that its iterative threshold is situated close to the
Shannon limit. In addition to the linear minimum distance property, it has a
simple structure and enjoys a low decoding complexity and a fast convergence.
Comment: submitted to IEEE Trans. on Communications
A New Class of Multiple-rate Codes Based on Block Markov Superposition Transmission
The Hadamard transform~(HT) over the binary field provides a natural way to
implement multiple-rate codes~(referred to as {\em HT-coset codes}), where the
code length is fixed but the code dimension can be varied by adjusting the set
of frozen bits. The HT-coset codes, including Reed-Muller~(RM) codes and polar
codes as typical examples, can share a pair of encoder and decoder with low
implementation complexity.
However, to guarantee that all codes with the designated rates perform well,
HT-coset coding usually requires a sufficiently large code length, which in
turn makes it difficult to determine which bits should be frozen. In this
paper, we propose to transmit short HT-coset codes in the
so-called block Markov superposition transmission~(BMST) manner. At the
transmitter, signals are spatially coupled via superposition, resulting in long
codes. At the receiver, these coupled signals are recovered by a sliding-window
iterative soft successive cancellation decoding algorithm. Most importantly,
the performance in the low bit-error-rate~(BER) region can be predicted by a
simple genie-aided lower bound. Both these bounds and simulation results show
that the BMST of short HT-coset codes performs well~(within one dB of the
corresponding Shannon limits) over a wide range of code rates.
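The shared HT-coset encoder alluded to above (RM and polar codes being the standard instances) is the $n$-fold Kronecker power of $[[1,0],[1,1]]$ over GF(2), computable by an $O(N \log N)$ butterfly. A sketch; the frozen set below is hypothetical, chosen only to show how one encoder serves several rates, whereas real RM/polar designs pick frozen bits by reliability:

```python
def polar_encode(u):
    """O(N log N) butterfly computing x = u * F^{(x)n} over GF(2),
    where F = [[1, 0], [1, 1]] and N = len(u) is a power of two."""
    x = list(u)
    N = len(x)
    h = 1
    while h < N:
        for i in range(0, N, 2 * h):
            for j in range(i, i + h):
                x[j] ^= x[j + h]       # butterfly: (a, b) -> (a XOR b, b)
        h *= 2
    return x

# Multiple-rate use: the code length N = 8 stays fixed; the dimension is set
# by how many positions are frozen to zero (here a hypothetical rate-1/2 set).
N = 8
frozen = {0, 1, 2, 4}
info_bits = [1, 0, 1, 1]
u = [0] * N
for pos, bit in zip(sorted(set(range(N)) - frozen), info_bits):
    u[pos] = bit
x = polar_encode(u)
print(x)                               # [1, 0, 1, 0, 0, 1, 0, 1]
```

Shrinking or growing the frozen set changes the code dimension without touching the encoder, which is the multiple-rate property the abstract refers to.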
Minimum-Variance Importance-Sampling Bernoulli Estimator for Fast Simulation of Linear Block Codes over Binary Symmetric Channels
In this paper the choice of the Bernoulli distribution as biased distribution
for importance sampling (IS) Monte-Carlo (MC) simulation of linear block codes
over binary symmetric channels (BSCs) is studied. Based on the analytical
derivation of the optimal IS Bernoulli distribution, with explicit calculation
of the variance of the corresponding IS estimator, two novel algorithms for
fast-simulation of linear block codes are proposed. For sufficiently high
signal-to-noise ratios (SNRs) one of the proposed algorithms is SNR-invariant,
i.e. the IS estimator does not depend on the cross-over probability of the
channel. Also, the proposed algorithms are shown to be suitable for the
estimation of the error-correcting capability of the code and the decoder.
Finally, the effectiveness of the algorithms is confirmed through simulation
results, in comparison to the standard Monte Carlo method.
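The importance-sampling idea can be illustrated on a case with a known answer. The sketch below estimates the word-error rate of the length-5 repetition code on a BSC: error patterns are drawn from a biased BSC(q) and each sample is reweighted by its likelihood ratio. The choice q = 0.5 is purely illustrative; the paper derives the variance-minimizing Bernoulli bias, which this sketch does not implement.

```python
import numpy as np
from math import comb

def is_wer_repetition(n, p, q, trials, rng):
    """Importance-sampling WER estimate for the length-n repetition code with
    majority decoding on a BSC(p), sampling error patterns from a BSC(q)."""
    flips = rng.random((trials, n)) < q
    # Per-sample likelihood ratio P_p(pattern) / P_q(pattern):
    weights = np.where(flips, p / q, (1 - p) / (1 - q)).prod(axis=1)
    fails = flips.sum(axis=1) > n // 2          # majority decoding fails
    return float((weights * fails).mean())

rng = np.random.default_rng(42)
p = 0.01
est = is_wer_repetition(5, p=p, q=0.5, trials=200_000, rng=rng)
exact = sum(comb(5, k) * p**k * (1 - p)**(5 - k) for k in range(3, 6))
print(f"IS estimate: {est:.2e}, exact: {exact:.2e}")
# Plain MC with 200k samples would see only ~2 error events at WER ~ 1e-5;
# the biased sampler hits the error region in roughly half the samples.
```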