6 research outputs found

    Advanced channel coding techniques using bit-level soft information

    In this dissertation, advanced channel decoding techniques based on bit-level soft information are studied. Two main approaches are proposed: bit-level probabilistic iterative decoding and bit-level algebraic soft-decision (list) decoding (ASD). In the first part of the dissertation, we study iterative decoding for high density parity check (HDPC) codes. An iterative decoding algorithm is proposed that uses the sum-product algorithm (SPA) in conjunction with a binary parity check matrix adapted in each decoding iteration according to the bit-level reliabilities. In contrast to the common belief that iterative decoding is not suitable for HDPC codes, this bit-level reliability based adaptation procedure is critical to the convergence behavior of iterative decoding for HDPC codes, and it significantly improves the iterative decoding performance of Reed-Solomon (RS) codes, whose parity check matrices are in general not sparse. We also present another iterative decoding scheme for cyclic codes that randomly shifts the bit-level reliability values in each iteration. The random shift based adaptation can also prevent iterative decoding from getting stuck, with a significant complexity reduction compared with the reliability based parity check matrix adaptation, and it still provides reasonably good performance for short-length cyclic codes. In the second part of the dissertation, we investigate ASD for RS codes using bit-level soft information. In particular, we show that by carefully incorporating bit-level soft information in the multiplicity assignment and the interpolation step, ASD can significantly outperform conventional hard decision decoding (HDD) for RS codes at a very small complexity cost, even though the kernel of ASD operates at the symbol level. More importantly, the performance of the proposed bit-level ASD can be tightly upper bounded for practical high-rate RS codes, which is in general not possible for other popular ASD schemes. Bit-level soft-decision decoding (SDD) serves as an efficient way to exploit the potential gain of many classical codes, and it also facilitates the corresponding performance analysis. The proposed bit-level SDD schemes are promising, feasible alternatives to conventional symbol-level HDD schemes in many communication systems.
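    The reliability-based matrix adaptation described in this abstract can be sketched concretely. The following Python fragment is a minimal illustration of the general idea only, not the dissertation's algorithm: it assumes LLRs from a BPSK/AWGN front end (positive LLR meaning bit 0), and the function names (adapt_H, spa_round, adp_decode) and the damping factor alpha are hypothetical choices.

```python
import numpy as np

def adapt_H(H, reliab):
    # Gaussian elimination over GF(2), visiting columns from the
    # least reliable bit position upwards, so the unreliable
    # positions end up (as far as possible) in degree-1 columns.
    H = H.copy().astype(np.uint8)
    m, _ = H.shape
    r = 0
    for col in np.argsort(reliab):          # least reliable first
        if r == m:
            break
        rows = np.flatnonzero(H[r:, col]) + r
        if rows.size == 0:
            continue                        # dependent column, skip
        if rows[0] != r:
            H[[r, rows[0]]] = H[[rows[0], r]]
        for i in np.flatnonzero(H[:, col]):
            if i != r:
                H[i] ^= H[r]
        r += 1
    return H

def spa_round(H, llr):
    # One flooding pass of check-node processing with the tanh rule.
    # Messages are computed from the current aggregate LLRs, a
    # common one-shot simplification of the full SPA schedule.
    ext = np.zeros_like(llr)
    t = np.tanh(np.clip(llr, -30.0, 30.0) / 2.0)
    for row in H:
        idx = np.flatnonzero(row)
        for j in idx:
            others = np.prod(t[idx[idx != j]])
            ext[j] += 2.0 * np.arctanh(np.clip(others, -0.999999, 0.999999))
    return ext

def adp_decode(H, llr_ch, iters=20, alpha=0.1):
    # Adapt H to the current bit reliabilities, run one SPA pass,
    # and damp the update with a (hypothetical) step size alpha.
    llr0 = np.asarray(llr_ch, dtype=float)
    llr = llr0.copy()
    for _ in range(iters):
        Ha = adapt_H(H, np.abs(llr))
        llr = llr0 + alpha * spa_round(Ha, llr)
        hard = (llr < 0).astype(np.uint8)   # LLR > 0 decodes to bit 0
        if not ((Ha.astype(int) @ hard) % 2).any():
            break                           # all parity checks satisfied
    return (llr < 0).astype(np.uint8)
```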

    The hybrid list decoding and Chase-like algorithm of Reed-Solomon codes.

    Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, 2005.
    Reed-Solomon (RS) codes are powerful error-correcting codes that can be found in a wide variety of digital communications and digital data-storage systems. The classical hard-decision decoder of an RS code can correct t = ⌊(dmin − 1)/2⌋ errors, where dmin = n − k + 1 is the minimum distance of the code, n is the codeword length and k is the code dimension. Maximum likelihood decoding (MLD) performs better than classical decoding, and therefore how to approach the performance of MLD with less complexity is a subject that has been researched extensively. Applying the bit reliabilities obtained from the channel to a conventional decoding algorithm is an efficient technique for approaching the performance of MLD, although an exponential increase in complexity is always concomitant. More performance can certainly be gained if the bit reliabilities are applied to an enhanced algebraic decoding algorithm that is more powerful than a conventional decoding algorithm. In 1997 Madhu Sudan, building on previous work of Welch, Berlekamp and others, discovered a polynomial-time algorithm for decoding low-rate Reed-Solomon codes beyond the classical error-correcting bound t = ⌊(dmin − 1)/2⌋. Two years later Guruswami and Sudan published a significantly improved version of Sudan's algorithm (GS), but these papers did not focus on devising practical implementations. Other authors, Koetter, Roth and Ruckenstein, were able to find realizations for the key steps in the GS algorithm, thus making the GS algorithm a practical instrument in transmission systems. The Gross list algorithm, a simplified variant with lower decoding complexity realized by a re-encoding scheme, is also taken into account in this dissertation. The fundamental idea of the GS algorithm is to take advantage of an interpolation step to obtain an interpolation polynomial produced from the support symbols, the received symbols and their corresponding multiplicities. The GS algorithm then implements a factorization step to find the roots of the interpolation polynomial. After comparing the reliabilities of the codewords produced by the factorization, the GS algorithm outputs the most likely one. The support set, received set and multiplicity set are created by the Koetter-Vardy (KV) front-end algorithm. In the GS list decoding algorithm, the number of errors that can be corrected increases to tGS = n − 1 − ⌊√((k − 1)n)⌋. It is easy to show that the GS list decoding algorithm is capable of correcting more errors than a conventional decoding algorithm. In this dissertation, we present two hybrid list decoding and Chase-like algorithms. We apply the Chase algorithms to the KV soft-decision front end; consequently, we are able to provide a more reliable input to the KV list algorithm. In the application of the Chase-like algorithm, we take two conditions into consideration, so that an error floor cannot occur and more coding gain is possible. As the number of bits chosen by the Chase algorithm increases, the complexity of the hybrid algorithm increases exponentially. To solve this problem an adaptive algorithm is applied to the hybrid algorithm, based on the fact that as the signal-to-noise ratio (SNR) increases the received bits are more reliable, and not every received sequence needs to create the fixed number of test error patterns used by the Chase algorithm.
    We set a threshold according to the given SNR and use it to decide which unreliable bits are picked by the Chase algorithm. However, the performance of the adaptive hybrid algorithm at high SNRs decreases as the complexity decreases, which means the adaptive algorithm is not a sufficient mechanism for eliminating the redundant test error patterns. The performance of the adaptive hybrid algorithm at high SNRs motivates us to find another way to reduce the complexity without loss of performance. We consider the following two problems before dealing with the problem at hand. One problem is: can we find a termination condition that decides which generated candidate codeword is the most likely codeword for the received sequence before all candidates of the received set are tested? The other is: can we eliminate the test error patterns that cannot create more likely codewords than those already generated? In our final algorithm, an optimality lemma from the Kaneko algorithm is applied to solve the first problem, and the second problem is solved by a ruling-out scheme for the reduced list decoding algorithm. The Gross list algorithm is also applied in our final hybrid algorithm. With the two problems solved, the final hybrid algorithm has performance comparable with the hybrid algorithm that combines the KV list decoding algorithm and the Chase algorithm, but with much less complexity at high SNRs.
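    The decoding radii quoted above, and the Chase-style test-pattern generation the hybrid builds on, are easy to make concrete. The sketch below is illustrative only; the helper names and the Chase-2-style enumeration are assumptions, not the dissertation's implementation.

```python
import itertools
import math
import numpy as np

def classical_t(n, k):
    # Unique-decoding radius t = floor((dmin - 1)/2) with dmin = n - k + 1.
    return (n - k) // 2

def gs_radius(n, k):
    # Guruswami-Sudan list-decoding radius: n - 1 - floor(sqrt((k - 1)*n)).
    return n - 1 - math.isqrt((k - 1) * n)

# For a low-rate RS(31, 7) code: classical_t = 12 but gs_radius = 17,
# so GS corrects five extra errors.  For a high-rate RS(255, 239) code
# both radii equal 8, which is one reason the hybrid turns to
# bit-level soft information instead.

def chase_patterns(llr, eta):
    # Chase-2-style enumeration: all 2**eta flip patterns over the
    # eta least reliable bit positions of the received word.
    weak = np.argsort(np.abs(llr))[:eta]
    for flips in itertools.product((0, 1), repeat=eta):
        pattern = np.zeros(len(llr), dtype=np.uint8)
        pattern[weak] = flips
        yield pattern
```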

    Coded cooperative diversity with low complexity encoding and decoding algorithms.

    One of the main concerns in designing wireless communication systems is to provide sufficiently large data rates while considering the different aspects of implementation complexity, which is often constrained by the limited battery power and signal processing capability of the devices. Thus, in this thesis, low-complexity encoding and decoding algorithms are investigated for systems with transmission diversity, particularly receiver diversity and cooperative diversity. Design guidelines for such systems are given that achieve a good trade-off between implementation complexity and performance. Order statistics based list decoding techniques for linear binary block codes of small to medium block length are investigated to reduce the complexity of coded systems. The original order statistics decoding (OSD) is generalized by assuming segmentation of the most reliable independent positions of the received bits. The segmentation is shown to overcome several drawbacks of the original order statistics decoding. The complexity of the OSD is further reduced by assuming a partial ordering of the received bits in order to avoid the highly complex Gaussian elimination. The bit error rate performance and the decoding complexity trade-off of the proposed decoding algorithms are studied by computer simulations. Numerical examples show that, in some cases, the proposed decoding schemes are superior to the original order statistics decoding in terms of both the bit error rate performance and the decoding complexity. The complexity of the order statistics based list decoding algorithms for linear block codes and binary block turbo codes (BTC) is further reduced by employing highly reliable cyclic redundancy check (CRC) bits. The results show that sending CRC bits for many segments is the most effective technique for reducing the complexity. The coded cooperative diversity is compared with the conventional receiver coded diversity in terms of the pairwise error probability and the overall bit error rate (BER). The expressions for the pairwise error probabilities are obtained analytically and verified by computer simulations. The performance of the cooperative diversity is found to depend strongly on the relay location. Using the analytical as well as extensive numerical results, the geographical areas of relay locations for which the cooperative coded diversity outperforms the receiver coded diversity are obtained for small to medium signal-to-noise ratio values. However, for sufficiently large signal-to-noise ratio (SNR) values, or if the path-loss attenuations are not considered, the receiver coded diversity always outperforms the cooperative coded diversity. The obtained results have important implications for the deployment of next generation cellular systems supporting cooperative as well as receiver diversity.
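    As a rough sketch of the order statistics decoding being generalized here (plain order-1 OSD, without the thesis's segmentation or partial-ordering refinements), the fragment below re-encodes from the most reliable independent positions and scores candidates by correlation discrepancy. The function name and details are illustrative assumptions; a full-rank generator matrix G is assumed.

```python
import numpy as np

def osd_order1(G, llr):
    # Plain order-1 OSD: permute columns by reliability, reduce G so
    # its pivots sit on the most reliable independent positions,
    # re-encode the hard decisions there, then try every single flip.
    k, n = G.shape                          # assumes G has full rank k
    perm = np.argsort(-np.abs(llr))         # most reliable first
    Gp = (G[:, perm] % 2).astype(np.uint8)
    pivots, r = [], 0
    for c in range(n):                      # GF(2) Gaussian elimination
        if r == k:
            break
        rows = np.flatnonzero(Gp[r:, c]) + r
        if rows.size == 0:
            continue
        if rows[0] != r:
            Gp[[r, rows[0]]] = Gp[[rows[0], r]]
        for i in np.flatnonzero(Gp[:, c]):
            if i != r:
                Gp[i] ^= Gp[r]
        pivots.append(c)
        r += 1
    hard = (llr[perm] < 0).astype(np.uint8)
    w = np.abs(llr[perm])
    best, best_metric = None, np.inf
    for flip in range(-1, len(pivots)):     # -1 means "no flip" (order 0)
        info = hard[pivots].copy()
        if flip >= 0:
            info[flip] ^= 1
        cand = (info.astype(int) @ Gp) % 2  # re-encode the candidate
        metric = w[cand != hard].sum()      # correlation discrepancy
        if metric < best_metric:
            best, best_metric = cand.astype(np.uint8), metric
    out = np.empty(n, dtype=np.uint8)
    out[perm] = best                        # undo the permutation
    return out
```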

    CONVERGENCE IMPROVEMENT OF ITERATIVE DECODERS

    Iterative decoding techniques shook up the field of error correction and communications in general. Their remarkable compromise between complexity and performance offered much more freedom in code design and made highly complex codes, which until recently had been considered undecodable, part of almost any communication system. Nevertheless, iterative decoding is a sub-optimum decoding method and, as such, it has attracted huge research interest. But the iterative decoder still hides many of its secrets: it has not yet been possible to fully describe its behaviour and its cost function. This work presents the convergence problem of iterative decoding from various angles and explores methods for reducing any sub-optimalities in its operation. The decoding algorithms for both LDPC and turbo codes were investigated and the aspects that contribute to convergence problems were identified. A new algorithm was proposed, capable of providing considerable coding gain in any iterative scheme. Moreover, it was shown that for some codes the proposed algorithm is sufficient to eliminate any sub-optimality and perform maximum likelihood decoding. Its performance and efficiency were compared to those of other convergence improvement schemes. Various conditions that can be considered critical to the outcome of the iterative decoder were also investigated, and the decoding algorithm of LDPC codes was followed analytically to verify the experimental results.
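    The abstract does not spell out the proposed algorithm, but one standard, widely used convergence aid in iterative decoding is message damping, sketched below. This is a generic textbook device, not the algorithm proposed in this work, and gamma = 0.5 is an assumed value.

```python
import numpy as np

def damped_update(old_msgs, new_msgs, gamma=0.5):
    # Average the previous and freshly computed messages; damping of
    # this kind is a standard way to suppress the oscillations that
    # keep an iterative decoder from converging.
    return gamma * np.asarray(old_msgs) + (1.0 - gamma) * np.asarray(new_msgs)
```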

    Compound codes based on irregular graphs and their iterative decoding.

    Thesis (Ph.D.)-University of KwaZulu-Natal, Durban, 2004.
    Low-density parity-check (LDPC) codes form a Shannon limit approaching class of linear block codes. With iterative decoding based on their Tanner graphs, they can achieve outstanding performance. Since their rediscovery in the late 1990s, the design, construction, and decoding of LDPC codes, as well as their generalization, have become one of the focal research points. This thesis takes a few more steps in these directions. The first significant contribution of this thesis is the introduction of a new class of codes called Generalized Irregular Low-Density (GILD) parity-check codes, which are adapted from the previously known class of Generalized Low-Density (GLD) codes. GILD codes are a generalization of irregular LDPC codes, and are shown to outperform GLD codes. In addition, GILD codes have a significant advantage over GLD codes in terms of encoding and decoding complexity. They are also able to match and even beat LDPC codes for small block lengths. The second significant contribution of this thesis is the proposition of several decoding algorithms. Two new decoding algorithms for LDPC codes are introduced; in principle and complexity these algorithms can be grouped with bit flipping algorithms. Two soft-input soft-output (SISO) decoding algorithms for linear block codes are also proposed. The first algorithm is based on Maximum a Posteriori Probability (MAP) decoding of a low-weight subtrellis centered around a generated candidate codeword. The second algorithm modifies and utilizes the improved Kaneko decoding algorithm for soft-input hard-output decoding; these hard outputs are converted to soft decisions using reliability calculations. Simulation results indicate that the proposed algorithms provide a significant improvement in error performance over Chase-based algorithms and achieve practically optimal performance with a significant reduction in decoding complexity. An analytical expression for the union bound on the bit error probability of linear codes on the Gilbert-Elliott (GE) channel model is also derived. This analytical result is shown to be accurate in establishing the decoder performance in the range where obtaining sufficient data from simulation is impractical.
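    The thesis groups its two new LDPC decoders with bit flipping algorithms. As background, a minimal classic (Gallager-style) bit-flipping decoder is sketched below; this is a generic illustration, not either of the thesis's algorithms, and the function name and iteration limit are assumptions.

```python
import numpy as np

def bit_flip_decode(H, hard, max_iters=50):
    # Classic bit flipping: on each pass, count the unsatisfied
    # checks each bit participates in and flip the worst offenders.
    x = hard.astype(np.uint8).copy()
    for _ in range(max_iters):
        syndrome = (H.astype(int) @ x) % 2
        if not syndrome.any():
            return x, True                  # every check satisfied
        votes = syndrome @ H.astype(int)    # unsatisfied-check counts per bit
        x[votes == votes.max()] ^= 1        # flip the worst bits
    return x, False                         # failed to converge
```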