
    A Rate-Distortion Exponent Approach to Multiple Decoding Attempts for Reed-Solomon Codes

    Algorithms based on multiple decoding attempts of Reed-Solomon (RS) codes have recently attracted new attention. Choosing decoding candidates based on rate-distortion (R-D) theory, as proposed previously by the authors, currently provides the best performance-versus-complexity trade-off. In this paper, an analysis based on the rate-distortion exponent (RDE) is used to directly minimize the exponential decay rate of the error probability. This enables rigorous bounds on the error probability for finite-length RS codes and leads to modest performance gains. As a byproduct, a numerical method is derived that computes the rate-distortion exponent for independent non-identical sources. Analytical results are given for errors/erasures decoding.
    Comment: accepted for presentation at the 2010 IEEE International Symposium on Information Theory (ISIT 2010), Austin, TX, USA.
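
    For context on the quantity being optimized, here is a hedged sketch in the spirit of Marton's excess-distortion exponent for lossy source coding; the notation is assumed for illustration, not quoted from the paper. If the error pattern X^n is drawn i.i.d. from p and the candidate erasure patterns form a rate-R covering code, the probability that even the best candidate exceeds distortion D decays exponentially:

        \Pr\Bigl[\, \min_{m} d\bigl(X^n, \hat{x}^n(m)\bigr) > D \,\Bigr] \doteq 2^{-n\,F(R,\,D)},
        \qquad
        F(R, D) = \min_{q \,:\, R(q,\,D) \,\ge\, R} D(q \,\|\, p),

    where R(q, D) is the rate-distortion function of a source with distribution q and D(q||p) is relative entropy. Minimizing the residual decoding failure probability then amounts to maximizing this exponent over the design parameters.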

    On Multiple Decoding Attempts for Reed-Solomon Codes: A Rate-Distortion Approach

    One popular approach to soft-decision decoding of Reed-Solomon (RS) codes is based on using multiple trials of a simple RS decoding algorithm in combination with erasing or flipping a set of symbols or bits in each trial. This paper presents a framework based on rate-distortion (RD) theory to analyze these multiple-decoding algorithms. By defining an appropriate distortion measure between an error pattern and an erasure pattern, the successful decoding condition for a single errors-and-erasures decoding trial becomes equivalent to the distortion being less than a fixed threshold. Finding the best set of erasure patterns then turns into a covering problem which can be solved asymptotically by rate-distortion theory. Thus, the proposed approach can be used to understand the asymptotic performance-versus-complexity trade-off of multiple errors-and-erasures decoding of RS codes. This initial result is also extended in a few directions. The rate-distortion exponent (RDE) is computed to give more precise results for moderate blocklengths. Multiple trials of algebraic soft-decision (ASD) decoding are analyzed using this framework. Analytical and numerical computations of the RD and RDE functions are also presented. Finally, simulation results show that sets of erasure patterns designed using the proposed methods outperform other algorithms with the same number of decoding trials.
    Comment: to appear in the IEEE Transactions on Information Theory (Special Issue on Facets of Coding Theory: from Algorithms to Networks).
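
    As a concrete illustration, here is a minimal Python sketch of the framework's success test, assuming one natural per-symbol distortion measure that reproduces the classical errors-and-erasures condition 2t + e < d_min (the paper's exact measure may be scaled or defined differently):

        # Per-symbol distortion between an error indicator x and an erasure
        # indicator y, chosen so the total equals 2*(unerased errors) + erasures,
        # i.e. the classical errors-and-erasures budget against d_min.
        DELTA = {(0, 0): 0,  # correct symbol, not erased
                 (0, 1): 1,  # correct symbol, erased (a wasted erasure)
                 (1, 0): 2,  # error left unerased (costs two units of distance)
                 (1, 1): 1}  # error erased (downgraded to an erasure)

        def distortion(errors, erasures):
            return sum(DELTA[x, y] for x, y in zip(errors, erasures))

        def decoding_succeeds(errors, erasures, d_min):
            # A single errors-and-erasures trial succeeds iff distortion < d_min.
            return distortion(errors, erasures) < d_min

        # Toy example: two errors, d_min = 4. The errors-only trial fails
        # (distortion 4), but a second trial erasing one error succeeds (3).
        errors = [0, 1, 0, 0, 1, 0, 0]
        trials = [[0, 0, 0, 0, 0, 0, 0],
                  [0, 1, 0, 0, 0, 0, 0]]
        print(any(decoding_succeeds(errors, t, d_min=4) for t in trials))  # True

    The second trial rescues a pattern the first cannot handle, which is exactly the gain that choosing a good set of erasure patterns is meant to capture.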

    Optimal Threshold-Based Multi-Trial Error/Erasure Decoding with the Guruswami-Sudan Algorithm

    Traditionally, multi-trial error/erasure decoding of Reed-Solomon (RS) codes is based on Bounded Minimum Distance (BMD) decoders with an erasure option. Such decoders have error/erasure tradeoff factor L=2, which means that an error is twice as expensive as an erasure in terms of the code's minimum distance. The Guruswami-Sudan (GS) list decoder can be considered the state of the art in algebraic decoding of RS codes. Besides an erasure option, it allows L to be adjusted to values in the range 1<L<=2. Based on previous work, we provide formulae which allow the erasure option of decoders with arbitrary L to be exploited optimally (in terms of residual codeword error probability), if the decoder can be used z>=1 times. We show that BMD decoders with z_BMD decoding trials can result in lower residual codeword error probability than GS decoders with z_GS trials, if z_BMD is only slightly larger than z_GS. This is of practical interest since BMD decoders generally have lower computational complexity than GS decoders.
    Comment: accepted for the 2011 IEEE International Symposium on Information Theory, St. Petersburg, Russia, July 31 - August 5, 2011. 5 pages, 2 figures.
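
    For intuition, a small Python check under the simplifying assumption that a decoder with tradeoff factor L succeeds whenever L*t + e < d for t errors and e erasures; at L = 2 this is exactly the BMD condition, while the true GS radius additionally depends on the chosen decoding parameters:

        def decodable(t, e, d, L=2.0):
            # An error costs L units of minimum distance, an erasure costs one.
            return L * t + e < d

        d = 17  # e.g. an RS(31, 15) code, d = n - k + 1
        print(decodable(t=9, e=2, d=d, L=2.0))  # False: 2.0*9 + 2 = 20 >= 17
        print(decodable(t=9, e=2, d=d, L=1.5))  # True:  1.5*9 + 2 = 15.5 < 17

    Lowering L enlarges the set of correctable error/erasure pairs per trial, which is what the optimal erasing thresholds trade off against the number of trials z.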

    Advanced Coding Techniques with Applications to Storage Systems

    This dissertation considers several coding techniques based on Reed-Solomon (RS) and low-density parity-check (LDPC) codes. These two prominent families of error-correcting codes have attracted a great amount of interest from both theorists and practitioners and have been applied in many communication scenarios. In particular, data storage systems have greatly benefited from these codes in improving the reliability of the storage media.

    The first part of this dissertation presents a unified framework based on rate-distortion (RD) theory to analyze and optimize multiple decoding trials of RS codes. Finding the best set of candidate decoding patterns is shown to be equivalent to a covering problem which can be solved asymptotically by RD theory. The proposed approach helps understand the asymptotic performance-versus-complexity trade-off of these multiple-attempt decoding algorithms and can be applied to a wide range of decoders and error models.

    In the second part, we consider spatially-coupled (SC) codes, or terminated LDPC convolutional codes, over intersymbol-interference (ISI) channels under joint iterative decoding. We empirically observe the phenomenon of threshold saturation, whereby the belief-propagation (BP) threshold of the SC ensemble is improved to the maximum a posteriori (MAP) threshold of the underlying ensemble. More specifically, we derive a generalized extrinsic information transfer (GEXIT) curve for the joint decoder that naturally obeys the area theorem and estimate the MAP and BP thresholds. We also conjecture that, due to threshold saturation, SC codes can universally approach the symmetric information rate of ISI channels.

    In the third part, a similar analysis is used to analyze the MAP thresholds of LDPC codes for several multiuser systems, namely a noisy Slepian-Wolf problem and a multiple access channel with erasures. We provide rigorous analysis and derive upper bounds on the MAP thresholds, which are shown to be tight in some cases. This analysis is a first step towards proving threshold saturation for these systems, which would imply that SC codes with joint BP decoding can universally approach the entire capacity region of the corresponding systems.
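
    Since the analysis above turns on BP thresholds, here is a self-contained toy example of how such a threshold is computed, using density evolution for a (3,6)-regular LDPC ensemble on the binary erasure channel rather than the ISI setting studied in the dissertation:

        def bp_converges(eps, dv=3, dc=6, iters=2000, tol=1e-12):
            # Density evolution for a (dv, dc)-regular LDPC ensemble on the
            # binary erasure channel: x -> eps * (1 - (1 - x)**(dc-1))**(dv-1).
            x = eps
            for _ in range(iters):
                x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
                if x < tol:
                    return True
            return False

        def bp_threshold(dv=3, dc=6, steps=30):
            # Bisect for the largest erasure rate at which DE still converges.
            lo, hi = 0.0, 1.0
            for _ in range(steps):
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if bp_converges(mid, dv, dc) else (lo, mid)
            return lo

        print(bp_threshold())  # ~0.429 for the (3,6)-regular ensemble

    Threshold saturation says the SC version of this ensemble achieves the (higher) MAP threshold under plain BP decoding; the GEXIT machinery generalizes this comparison beyond the erasure channel.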

    Optimal Power Allocation for a Successive Refinable Source with Multiple Descriptions over a Fading Relay Channel Using Broadcast/Multicast Strategies

    In a wireless fading relay system with multicast/broadcast transmission, one of the most crucial challenges is optimizing the transmission rate under multiuser channel diversity. Previously reported solutions for mitigating the adverse effect of multiuser channel diversity have mainly been based on superposition coded multicast, where an optimal power allocation to each layer of modulated signals is determined. Many previous studies investigated the interplay between a successively refinable (SR) content source and a layered modulation via superposition coding (SPC) over multicast/broadcast channels. By jointly considering the successive refinement characteristic at the source and the dependency of the layered modulation at the channel, graceful flexibility can be achieved over a group of users with different channel realizations. Here, most receivers are expected to obtain the base quality layer information modulated at a lower rate, while receivers with better channel realizations obtain more information by refining the base quality layer using the enhancement quality layer. In particular, the optimal power allocation for an SR source over a fading relay channel using a broadcast/multicast strategy can be determined such that the total received information suffers minimum distortion. However, a quality layer of data in a successively refined source may not be decodable if any channel codewords are lost, even if the corresponding long-term channel realization is sufficient for decoding. To overcome this problem, one previous study introduced a framework of coded video multicast, where multiple description coding (MDC) is applied to an SR content source and further mapped into a layered modulation via SPC at the channel. Until now, no rigorous proof has been provided of the benefit of combining the two coding techniques (i.e., MDC and SPC), nor has any systematic optimization approach been developed for quantifying the parameter selection.

    Cooperative relaying in wireless networks has recently received much attention. Because the received signal can be severely degraded by fading in wireless communications, time, frequency, and spatial diversity techniques are introduced to overcome fading. Spatial diversity is typically envisioned as having multiple transmit and/or receive antennas. Cooperation can be used here to provide higher rates and results in a more robust system. Recently proposed cooperation schemes, which take into account the practical constraint that the relay cannot transmit and receive at the same time, include amplify-and-forward (AF), decode-and-forward (DF), and compress-and-forward (CF).

    In this study, for a fading relay scenario, a framework is proposed to tackle the task of layered power allocation, with an in-depth study of achieving an optimal power allocation in SPC such that the information distortion perceived at the users is minimized. This thesis provides a comprehensive formulation of the information distortion at the receivers and a suite of solution approaches for the resulting optimization problem, jointly considering MDC and SPC parameter selection over the fading relay channel.
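
    To make the optimization concrete, here is a minimal sketch with made-up numbers, using a plain two-user Gaussian broadcast model rather than the thesis's relay setting: the SPC power split is chosen to minimize the average distortion of a successively refinable unit-variance Gaussian source.

        import math

        P, N = 10.0, 1.0               # total power and noise power (assumed)
        g_weak, g_strong = 0.2, 1.0    # channel gains of the two user classes
        w_weak = 0.5                   # fraction of users limited to the base layer

        def avg_distortion(alpha):
            # alpha = share of power given to the base layer.
            snr_base = alpha * P * g_weak / ((1.0 - alpha) * P * g_weak + N)
            r_base = 0.5 * math.log2(1.0 + snr_base)      # decodable by all users
            snr_enh = (1.0 - alpha) * P * g_strong / N    # after SIC at strong user
            r_enh = 0.5 * math.log2(1.0 + snr_enh)
            d_base = 2.0 ** (-2.0 * r_base)               # base layer only
            d_full = 2.0 ** (-2.0 * (r_base + r_enh))     # base + enhancement
            return w_weak * d_base + (1.0 - w_weak) * d_full

        # One-dimensional grid search over the power split.
        best = min((avg_distortion(a / 1000.0), a / 1000.0) for a in range(1, 1000))
        print("min avg distortion %.4f at alpha = %.3f" % best)

    The thesis's problem layers MDC and the relay's cooperation on top of this basic trade-off, but the core tension is already visible here: power moved to the enhancement layer helps strong users while raising interference for weak ones.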

    Introduction to Forward-Error-Correcting Coding

    This reference publication introduces forward-error-correcting (FEC) coding and stresses definitions and basic calculations for use by engineers. The seven chapters include 41 example problems, worked in detail to illustrate points. A glossary of terms is included, as well as an appendix on the Q function. Block and convolutional codes are covered.
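
    As a flavor of the calculations such a reference supports, here is a short Python example using the Q function for the textbook uncoded-BPSK bit error rate (the formula is standard material, not quoted from this publication):

        import math

        def Q(x):
            # Gaussian tail probability: Q(x) = P(N(0, 1) > x).
            return 0.5 * math.erfc(x / math.sqrt(2.0))

        # Uncoded BPSK over AWGN: Pb = Q(sqrt(2 * Eb/N0)).
        for ebno_db in (0, 4, 8):
            ebno = 10.0 ** (ebno_db / 10.0)
            print("%d dB -> Pb = %.2e" % (ebno_db, Q(math.sqrt(2.0 * ebno))))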

    Hybrid ARQ with parallel and serial concatenated convolutional codes for next generation wireless communications

    This research focuses on evaluating currently used FEC encoding/decoding schemes and improving the performance of error control systems by incorporating these schemes in a hybrid FEC-ARQ environment. Beginning with an overview of wireless communications and the various ARQ protocols, the thesis provides an in-depth explanation of convolutional encoding and Viterbi decoding, and of turbo (PCCC) and serial concatenated convolutional (SCCC) encoding with their respective MAP decoding strategies.

    A type-II hybrid ARQ scheme with SCCCs is proposed for the first time and is a major contribution of this thesis. A vast improvement is seen in the BER performance of the successive individual FEC schemes discussed above. Very high throughputs can also be achieved when these schemes are incorporated in an adaptive type-II hybrid ARQ system.

    Finally, the thesis discusses the equivalence of PCCCs and SCCCs and proposes a technique to generate a hybrid code using both schemes.
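
    To illustrate why the adaptive scheme reaches high throughput, here is a toy model of type-II hybrid ARQ with incremental redundancy; the ideal-coding success rule below is a stand-in for the actual PCCC/SCCC decoders, and all numbers are illustrative:

        def harq_throughput(capacity, k=1000.0, increments=(1.0, 0.5, 0.5, 0.5)):
            # Type-II HARQ with incremental redundancy, idealized: decoding
            # succeeds once the accumulated code rate k / (bits sent so far)
            # drops below channel capacity -- an ideal-coding stand-in for a
            # real PCCC/SCCC decoder with puncturing.
            sent = 0.0
            for frac in increments:
                sent += frac * k             # next parity increment
                if k / sent <= capacity:     # accumulated rate now supported
                    return k / sent          # info bits per transmitted bit
            return 0.0                       # frame dropped after all attempts

        for c in (0.9, 0.6, 0.45):
            print("capacity %.2f -> throughput %.2f" % (c, harq_throughput(c)))

    Good channels stop after the first transmission at a high rate, while poor channels automatically accumulate just enough redundancy, which is the source of the throughput gains reported for the adaptive type-II system.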