Performance Analysis of Coded M-ary Orthogonal Signaling Using Errors-and-Erasures Decoding Over Frequency-Selective Fading Channels
The performance of M-ary orthogonal signaling schemes employing Reed-Solomon (RS) codes and redundant residue number system (RRNS) codes is investigated over frequency-selective Rayleigh fading channels. "Errors-and-erasures" decoding is considered, where erasures are judged based on two low-complexity, low-delay erasure insertion schemes: Viterbi's ratio threshold test (RTT) and the proposed output threshold test (OTT). The probability density functions (PDFs) of the ratio associated with the RTT and of the demodulation output in the OTT, conditioned on both correct and erroneous detection of the M-ary signals, are derived, and the characteristics of the RTT and OTT are investigated. Furthermore, expressions are derived for computing the codeword decoding error probability of RS or RRNS codes based on the above PDFs. The OTT technique is compared to Viterbi's RTT, and both are compared to receivers using "error-correction only" decoding over frequency-selective Rayleigh fading channels. The numerical results show that with "errors-and-erasures" decoding, RS or RRNS codes of a given code rate achieve a higher coding gain than without erasure information, and that the OTT outperforms the RTT, provided that both schemes operate at their optimum decision thresholds. Index Terms: "Errors-and-erasures" decoding, M-ary orthogonal signaling, Rayleigh fading, redundant residue number system codes, Reed-Solomon codes
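For intuition, an RTT-style erasure insertion rule compares the two largest demodulator outputs and erases the symbol when the decision is ambiguous. A minimal sketch, not taken from the paper; the threshold value and the convention used here (erase when the second-largest to largest ratio exceeds a threshold, the reciprocal of Viterbi's largest-to-second-largest formulation) are assumptions:

```python
def rtt_demodulate(outputs, threshold):
    """Return (symbol index, erased flag): pick the largest demodulator
    output, but flag an erasure when the second-largest output is close
    to it (ratio above the threshold means low decision confidence)."""
    best = max(range(len(outputs)), key=lambda i: outputs[i])
    second = max(v for i, v in enumerate(outputs) if i != best)
    ratio = second / outputs[best]      # in (0, 1]; near 1 is ambiguous
    return best, ratio > threshold

# Example with M = 4: a confident decision and an ambiguous one
print(rtt_demodulate([0.1, 2.0, 0.3, 0.2], 0.6))   # (1, False)
print(rtt_demodulate([1.8, 2.0, 0.3, 0.2], 0.6))   # (1, True)
```

The erased symbols are then handed to the RS or RRNS errors-and-erasures decoder instead of being forced to a hard decision.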
Construction of Near-Optimum Burst Erasure Correcting Low-Density Parity-Check Codes
In this paper, a simple, general-purpose and effective tool for the design of
low-density parity-check (LDPC) codes for iterative correction of bursts of
erasures is presented. The design method consists of starting from the
parity-check matrix of an LDPC code and developing an optimized parity-check
matrix with the same performance on the memoryless erasure channel that is
also suitable for the iterative correction of single bursts of erasures. The
parity-check matrix optimization is performed by an algorithm called pivot
searching and swapping (PSS) algorithm, which executes permutations of
carefully chosen columns of the parity-check matrix, after a local analysis of
particular variable nodes called stopping set pivots. This algorithm can in
principle be applied to any LDPC code. If the input parity-check matrix is
designed to achieve good performance on the memoryless erasure channel,
then the code obtained after the application of the PSS algorithm provides good
joint correction of independent erasures and single erasure bursts. Numerical
results are provided in order to show the effectiveness of the PSS algorithm
when applied to different categories of LDPC codes.
Comment: 15 pages, 4 figures. IEEE Trans. on Communications, accepted (submitted in Feb. 2007).
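The iterative erasure correction referred to above is the standard peeling decoder: repeatedly find a check equation with exactly one erased bit and fill it in by parity. A minimal sketch with a toy (7,4) Hamming parity-check matrix (an illustrative assumption; the PSS column permutations of the paper are not reproduced here):

```python
def peel(H, word, erased):
    """Repeatedly find a check with exactly one erased position and solve
    it by parity; return the word and any still-unresolved erasures."""
    erased = set(erased)
    progress = True
    while progress and erased:
        progress = False
        for row in H:
            unknown = [j for j, h in enumerate(row) if h and j in erased]
            if len(unknown) == 1:                  # solvable check found
                j = unknown[0]
                word[j] = sum(word[k] for k, h in enumerate(row)
                              if h and k != j) % 2
                erased.discard(j)
                progress = True
    return word, erased

# Checks: x0+x1+x3+x4 = x0+x2+x3+x5 = x1+x2+x3+x6 = 0 (mod 2)
H = [[1,1,0,1,1,0,0],
     [1,0,1,1,0,1,0],
     [0,1,1,1,0,0,1]]
cw = [1,0,1,1,0,1,0]                           # a valid codeword
rx = [0 if i in {0, 5} else b for i, b in enumerate(cw)]  # erase bits 0, 5
decoded, unresolved = peel(H, rx, {0, 5})
print(decoded, unresolved)                     # [1, 0, 1, 1, 0, 1, 0] set()
```

Decoding stalls exactly when the remaining erasures form a stopping set, which is why the PSS algorithm analyzes stopping set pivots.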
Optimal Codes for the Burst Erasure Channel
Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem: they are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic has concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not achieved the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctable burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length performs as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels with impairments other than burst erasures (e.g., additive white Gaussian noise), making it best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length; the inefficiency is one minus the efficiency.
The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure protection. As can be seen, the simple interleaved RS codes have substantially lower inefficiency over a wide range of transmission lengths.
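The block-interleaving idea can be sketched in a few lines: MDS codewords are written as rows of an interleaver and the stream is transmitted column by column, so a burst of up to `depth` consecutive erasures hits each codeword at most once. The (4,3) SPC code and depth below are illustrative assumptions, not parameters from the paper:

```python
n, depth = 4, 3                        # (4,3) SPC code, interleaver depth 3
data = [[1,0,1], [0,1,1], [1,1,0]]
cws = [row + [sum(row) % 2] for row in data]          # append SPC parity
stream = [cws[r][c] for c in range(n) for r in range(depth)]  # send columns

erased = {4, 5, 6}                     # a burst of depth = 3 erasures
rx = [0 if i in erased else s for i, s in enumerate(stream)]

# De-interleave and repair: each codeword now holds at most one erasure,
# which SPC fills because the symbols of a codeword sum to 0 mod 2.
for r in range(depth):
    pos = [c * depth + r for c in range(n)]           # row r's stream slots
    miss = [p for p in pos if p in erased]
    if len(miss) == 1:
        rx[miss[0]] = sum(rx[p] for p in pos if p not in erased) % 2
print(rx == stream)                    # True: the burst is fully recovered
```

With an (n, k) MDS row code correcting n-k erasures per codeword, the same layout tolerates bursts of up to depth*(n-k) symbols.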
Protograph-based Quasi-Cyclic MDPC Codes for McEliece Cryptosystems
In this paper, ensembles of quasi-cyclic moderate-density parity-check (MDPC)
codes based on protographs are introduced and analyzed in the context of a
McEliece-like cryptosystem. The proposed ensembles significantly improve the
error correction capability of the regular MDPC code ensembles that are
currently considered for post-quantum cryptosystems without increasing the
public key size. The proposed ensembles are analyzed in the asymptotic setting
via density evolution, both under the sum-product algorithm and a
low-complexity (error-and-erasure) message passing algorithm. The asymptotic
analysis is complemented at finite block lengths by Monte Carlo simulations.
The enhanced error correction capability remarkably improves the scheme
robustness with respect to (known) decoding attacks.
Comment: 5 pages.
GLDPC-Staircase AL-FEC Codes: A Fundamental Study and New Results
This paper provides fundamentals in the design and analysis of Generalized Low Density Parity Check (GLDPC)-Staircase codes over the erasure channel. These codes are constructed by extending an LDPC-Staircase code (the base code) with Reed-Solomon (RS) codes (the outer codes) in order to benefit from more powerful decoders. The GLDPC-Staircase coding scheme adds, in addition to the LDPC-Staircase repair symbols, extra repair symbols that can be produced on demand and in large quantities, which provides small-rate capabilities. These codes are therefore extremely flexible: they can be tuned to behave like predefined-rate LDPC-Staircase codes at one extreme, like a single RS code at the other extreme, or like small-rate codes in between. Concerning the code design, we show that RS codes with a "quasi-Hankel" matrix-based construction fulfill the desired structural properties, and that a hybrid (IT/RS/ML) decoding is feasible that achieves Maximum Likelihood (ML) correction capabilities at lower complexity. Concerning performance analysis, we detail an asymptotic analysis method based on density evolution (DE), EXtrinsic Information Transfer (EXIT) charts, and the area theorem. Based on several asymptotic and finite-length results, after selecting the optimal internal parameters, we demonstrate that GLDPC-Staircase codes feature excellent erasure recovery capabilities, close to those of ideal codes, for both large and very small objects. From this point of view they outperform LDPC-Staircase and Raptor codes, and achieve correction capabilities close to those of RaptorQ codes. All these results make GLDPC-Staircase codes a universal Application-Layer FEC (AL-FEC) solution for many situations that require erasure protection, such as media streaming or file multicast transmission
A Decoding Algorithm for LDPC Codes Over Erasure Channels with Sporadic Errors
An efficient decoding algorithm for low-density parity-check (LDPC) codes on erasure channels with sporadic errors (i.e., binary error-and-erasure channels with error probability much smaller than the erasure probability) is proposed and its performance analyzed. A general single-error multiple-erasure (SEME) decoding algorithm is first described, which may in principle be used with any binary linear block code. The algorithm is optimum whenever the non-erased part of the received word is affected by at most one error, and is capable of performing error detection of multiple errors. An upper bound on the average block error probability under SEME decoding is derived for the linear random code ensemble. The bound is tight and easy to implement. The algorithm is then adapted to LDPC codes, resulting in a simple modification to a previously proposed efficient maximum likelihood LDPC erasure decoder which exploits the parity-check matrix sparseness. Numerical results reveal that LDPC codes under efficient SEME decoding can closely approach the average performance of random codes.
G. Liva; E. Paolini; B. Matuz; M. Chiani
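The SEME principle can be illustrated by brute force on a tiny code: accept the codeword that agrees with the non-erased part of the received word in all but at most one position (the single error). The (8,4) extended Hamming code below (minimum distance 4, so one error plus one erasure is correctable) is an assumption for illustration; the paper's decoder instead exploits parity-check sparseness rather than codebook enumeration:

```python
from itertools import product

G = [[1,0,0,0,1,1,0,1],        # generator of the (8,4) extended Hamming code
     [0,1,0,0,1,0,1,1],
     [0,0,1,0,0,1,1,1],
     [0,0,0,1,1,1,1,0]]
codebook = [[sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(8)]
            for m in product([0, 1], repeat=4)]

def seme_decode(rx, erased):
    """Return the codeword matching rx on the non-erased positions up to
    one flipped bit, or None (multiple errors detected)."""
    known = [i for i in range(len(rx)) if i not in erased]
    for cw in codebook:
        if sum(cw[i] != rx[i] for i in known) <= 1:   # at most one error
            return cw
    return None

cw = codebook[0b1011]                 # codeword for message (1,0,1,1)
rx = list(cw); rx[6] ^= 1             # one bit error at position 6
decoded = seme_decode(rx, erased={0}) # plus an erasure at position 0
print(decoded == cw)                  # True: error and erasure both fixed
```

Since the minimum distance is 4, the accepted codeword is unique here; with more errors the search fails and the event is detected, matching the error-detection property described above.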
Erasure Codes with a Banded Structure for Hybrid Iterative-ML Decoding
This paper presents new FEC codes for the erasure channel, LDPC-Band, that
have been designed so as to optimize a hybrid iterative-Maximum Likelihood (ML)
decoding. Indeed, these codes simultaneously feature a sparse parity-check
matrix, which allows an efficient use of iterative LDPC decoding, and a
generator matrix with a band structure, which allows fast ML decoding on the
erasure channel. The combination of these two decoding algorithms leads to
erasure codes achieving a very good trade-off between complexity and erasure
correction capability.
Comment: 5 pages.
Iterative Quantization Using Codes On Graphs
We study codes on graphs combined with an iterative message passing algorithm
for quantization. Specifically, we consider the binary erasure quantization
(BEQ) problem which is the dual of the binary erasure channel (BEC) coding
problem. We show that duals of capacity achieving codes for the BEC yield codes
which approach the minimum possible rate for the BEQ. In contrast, low-density
parity-check codes cannot achieve the minimum rate unless their density grows
at least logarithmically with block length. Furthermore, we show that duals of
efficient iterative decoding algorithms for the BEC yield efficient encoding
algorithms for the BEQ. Hence our results suggest that graphical models may
yield near optimal codes in source coding as well as in channel coding and that
duality plays a key role in such constructions.
Comment: 10 pages.