55 research outputs found

    On performance analysis and implementation issues of iterative decoding for graph based codes

    There is no doubt that long random-like codes have the potential to achieve good performance because of their excellent distance spectra. However, such codes have remained useless in practical applications for lack of decoders that deliver good performance at acceptable complexity. The invention of the turbo code marks a milestone in channel coding theory in that it achieves near-Shannon-limit performance by using an elegant iterative decoding algorithm. This great success stimulated intensive research on long compound codes sharing the same decoding mechanism. Among these long codes are low-density parity-check (LDPC) codes and product codes, which deliver brilliant performance. In this work, iterative decoding algorithms for LDPC codes and product codes are studied in the context of belief propagation. A large part of this work concerns LDPC codes. First, the concept of iterative decoding capacity is established in the context of density evolution. Two simulation-based methods approximating decoding capacity are applied to LDPC codes and their effectiveness is evaluated. A suboptimal iterative decoder, the Max-Log-MAP algorithm, is also investigated; it has been intensively studied for turbo codes but seems to have been neglected for LDPC codes. The specific density evolution procedure for Max-Log-MAP decoding is developed, and the performance of LDPC codes with infinite block length is well predicted by it. Two implementation issues in iterative decoding of LDPC codes are studied: one is the design of a quantized decoder, the other is the influence of a mismatched signal-to-noise ratio (SNR) level on decoding performance. The theoretical capacities of the quantized LDPC decoder, under the Log-MAP and Max-Log-MAP algorithms, are derived through discretized density evolution. It is shown that the key point in designing a quantized decoder is to pick a proper dynamic range: quantization loss in terms of bit error rate (BER) performance can be kept remarkably low, provided that the dynamic range is chosen wisely. The decoding capacity under a fixed SNR offset is obtained, and the robustness of LDPC codes of practical length is evaluated through simulations. It is found that the amount of SNR offset that can be tolerated depends on the code length. The remaining part of this dissertation deals with iterative decoding of product codes. Two issues are investigated: one is improving BER performance by mitigating cycle effects; the other is a parallel decoding structure, which is conceptually better than serial decoding and yields lower decoding latency.
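
    As a concrete illustration of the decoding mechanism discussed above, the following is a minimal sketch of min-sum decoding, the Max-Log-MAP variant of belief propagation investigated in this work. The code, parameters and the toy parity-check matrix are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

def min_sum_decode(H, llr, max_iters=50):
    """Min-sum (Max-Log-MAP) iterative decoding of a binary LDPC code.

    H   : (m, n) parity-check matrix with 0/1 entries (check degrees >= 2)
    llr : length-n channel LLRs, positive values favouring bit 0
    """
    m, n = H.shape
    V = H * llr                            # variable-to-check messages
    hard = (llr < 0).astype(int)
    for _ in range(max_iters):
        # check-node update: product of signs, minimum of magnitudes
        C = np.zeros_like(V)
        for c in range(m):
            idx = np.flatnonzero(H[c])
            signs, mags = np.sign(V[c, idx]), np.abs(V[c, idx])
            for k, v in enumerate(idx):
                others = np.delete(np.arange(len(idx)), k)
                C[c, v] = np.prod(signs[others]) * mags[others].min()
        # variable-node update and tentative hard decision
        total = llr + C.sum(axis=0)
        hard = (total < 0).astype(int)
        if not ((H @ hard) % 2).any():     # all parity checks satisfied
            return hard
        V = H * (total - C)                # extrinsic messages to checks
    return hard

# Toy run on the (7,4) Hamming code
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
llr = np.array([2.1, -0.8, 1.3, 0.9, 1.7, 0.4, 1.1])
print(min_sum_decode(H, llr))
```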

    On the performance bound of turbo code.

    by Ng Siu Wah. Thesis submitted in: August 1998. Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. Includes bibliographical references (leaves 49-[52]). Abstract also in Chinese.
    Chapter 1 --- Introduction and motivations --- p.1
    Chapter 1.1 --- Overview of Coding Technology --- p.2
    Chapter 1.2 --- Recent Breakthrough - Turbo Code --- p.3
    Chapter 1.3 --- Organization of the Thesis --- p.4
    Chapter 2 --- Basics of Turbo Codes --- p.5
    Chapter 2.1 --- A Brief Introduction of Turbo Codes --- p.6
    Chapter 2.1.1 --- Constituent Encoders with interleaver --- p.6
    Chapter 2.1.2 --- Iterative Decoder --- p.8
    Chapter 2.2 --- Additional remarks on Turbo Codes --- p.12
    Chapter 2.2.1 --- RSC encoders --- p.12
    Chapter 2.2.2 --- Interleaver --- p.14
    Chapter 2.3 --- Performance Evaluation of Turbo Codes --- p.15
    Chapter 2.3.1 --- Union Bound --- p.15
    Chapter 2.3.2 --- Weight Enumerating Function --- p.16
    Chapter 2.3.3 --- Uniform Interleaver --- p.17
    Chapter 3 --- An Improved Performance Bound for Turbo Codes --- p.20
    Chapter 3.1 --- Motivations --- p.21
    Chapter 3.2 --- Duman-Salehi's bound for Turbo Code --- p.22
    Chapter 3.2.1 --- Notations and definitions --- p.22
    Chapter 3.2.2 --- Word Error Probability --- p.23
    Chapter 3.3 --- Improved bounds for Turbo Code --- p.26
    Chapter 3.3.1 --- Preliminaries --- p.26
    Chapter 3.3.2 --- Generalization of Duman-Salehi's Bounds --- p.28
    Chapter 3.3.3 --- An Improved Bound on Word Error Probability --- p.30
    Chapter 3.3.4 --- An Improved Bound on Bit Error Probability --- p.34
    Chapter 3.4 --- Results and Discussions --- p.37
    Chapter 3.4.1 --- Assumptions --- p.37
    Chapter 3.4.2 --- Numerical results --- p.37
    Chapter 3.4.3 --- Distance spectra --- p.40
    Chapter 4 --- Concluding Remarks --- p.48
    Bibliography --- p.4
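
    For context, the union bound of Chapter 2.3 combines the code's weight enumerating function with the Gaussian tail function. The sketch below evaluates the standard bound on bit error probability for BPSK over AWGN; the spectrum values are made up for illustration, and the thesis's improved Duman-Salehi-type bounds, which tighten this expression, are not reproduced here.

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def union_bound_ber(spectrum, k, rate, ebno_db):
    """Union bound on bit error probability, BPSK over AWGN.

    spectrum : {codeword weight d: total information weight W_d of all
                codewords of Hamming weight d}
    k        : information bits per block;  rate : code rate R
    """
    ebno = 10.0 ** (ebno_db / 10.0)
    return sum((w / k) * q_func(math.sqrt(2.0 * d * rate * ebno))
               for d, w in spectrum.items())

# Toy spectrum, for illustration only
print(union_bound_ber({5: 3.0, 6: 12.0}, k=4, rate=0.5, ebno_db=3.0))
```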

    A STUDY OF LINEAR ERROR CORRECTING CODES

    Since Shannon's ground-breaking work in 1948, there have been two main development streams of channel coding in approaching the limit of communication channels, namely classical coding theory, which aims at designing codes with large minimum Hamming distance, and probabilistic coding, which places the emphasis on low-complexity probabilistic decoding using long codes built from simple constituent codes. This work presents some further investigations in these two channel coding development streams. Low-density parity-check (LDPC) codes form a class of capacity-approaching codes with a sparse parity-check matrix and a low-complexity decoder. Two novel methods of constructing algebraic binary LDPC codes are presented. These methods are based on the theory of cyclotomic cosets, idempotents and Mattson-Solomon polynomials, and are complementary to each other. In addition to some new cyclic iteratively decodable codes, the two methods generate the well-known Euclidean and projective geometry codes, and their extension to non-binary fields is shown to be straightforward. These algebraic cyclic LDPC codes converge considerably well under iterative decoding for short block lengths. It is also shown that for some of these codes, maximum likelihood performance may be achieved by a modified belief propagation decoder which uses a different subset of codewords of the dual code for each iteration. Following a property of the revolving-door combination generator, multi-threaded minimum Hamming distance computation algorithms are developed. Using these algorithms, the previously unknown minimum Hamming distance of the quadratic residue code for prime 199 has been evaluated. In addition, the highest minimum Hamming distance attainable by all binary cyclic codes of odd lengths from 129 to 189 has been determined, and as many as 901 new binary linear codes which have higher minimum Hamming distance than the previously best known linear codes have been found. It is shown that by exploiting the structure of circulant matrices, the number of codewords required to compute the minimum Hamming distance, and the number of codewords of a given Hamming weight, of binary double-circulant codes based on primes may be reduced. A means of independently verifying the exhaustively computed number of codewords of a given Hamming weight of these double-circulant codes is developed and, in conjunction with this, it is proved that some published results are incorrect and the correct weight spectra are presented. Moreover, it is shown that it is possible to estimate the minimum Hamming distance of this family of prime-based double-circulant codes. It is shown that linear codes may be efficiently decoded using the incremental correlation Dorsch algorithm. By extending this algorithm, a list decoder is derived, and a novel CRC-less error detection mechanism that offers much better throughput and performance than the conventional CRC scheme is described. Using the same method, it is shown that the performance of the conventional CRC scheme may be considerably enhanced. Error detection is an integral part of an incremental redundancy communications system, and it is shown that sequences of good error correction codes suitable for use in incremental redundancy communications systems may be obtained using Constructions X and XX. Examples are given and their performances presented in comparison to conventional CRC schemes.
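
    The minimum-distance results above rest on multi-threaded revolving-door enumeration; as a point of reference, here is a naive single-threaded sketch of the underlying computation over a generator matrix G (illustrative only, and feasible only for small dimension k).

```python
import numpy as np
from itertools import combinations

def min_distance(G):
    """Exhaustive minimum Hamming distance of a binary linear code.

    G : (k, n) generator matrix over GF(2). Enumerates the codewords
    spanned by every non-empty subset of rows; feasible only for small k.
    """
    k, n = G.shape
    best = n
    for w in range(1, k + 1):
        for rows in combinations(range(k), w):
            weight = int(np.bitwise_xor.reduce(G[list(rows)], axis=0).sum())
            if 0 < weight < best:
                best = weight
    return best

# Generator matrix of the (7,4) Hamming code
G = np.array([[1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]])
print(min_distance(G))   # -> 3
```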

    CONVERGENCE IMPROVEMENT OF ITERATIVE DECODERS

    Iterative decoding techniques shook the waters of the error correction and communications field in general. Their remarkable compromise between complexity and performance offered much more freedom in code design and made highly complex codes, considered undecodable until recently, part of almost any communication system. Nevertheless, iterative decoding is a sub-optimum decoding method and as such has attracted huge research interest. But the iterative decoder still hides many of its secrets, as it has not yet been possible to fully describe its behaviour and its cost function. This work presents the convergence problem of iterative decoding from various angles and explores methods for reducing any sub-optimalities in its operation. The decoding algorithms for both LDPC and turbo codes were investigated, and aspects that contribute to convergence problems were identified. A new algorithm was proposed, capable of providing considerable coding gain in any iterative scheme. Moreover, it was shown that for some codes the proposed algorithm is sufficient to eliminate any sub-optimality and perform maximum likelihood decoding. Its performance and efficiency were compared to those of other convergence improvement schemes. Various conditions that can be considered critical to the outcome of the iterative decoder were also investigated, and the decoding algorithm of LDPC codes was followed analytically to verify the experimental results.

    Communications in the observation limited regime

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 141-145). We consider the design of communications systems when the principal cost is observing the channel, as opposed to transmit energy per bit or spectral efficiency. This is motivated by energy-constrained communications devices where sampling the signal, rather than transmitting or processing it, dominates energy consumption. We show that sequentially observing the samples with the maximum a posteriori entropy can reduce observation costs by close to an order of magnitude using a (24,12) Golay code. This is the highest performance reported over the binary-input AWGN channel, with or without feedback, for this blocklength. Sampling signal energy, rather than amplitude, lowers circuit complexity and power dissipation significantly, but makes synchronization harder. We show that while the distance function of this non-linear coding problem is intractable in general, it is Euclidean at vanishing SNRs and root-Euclidean at large SNRs. We present sequences that maximize the error exponent at low SNRs under a peak power constraint, and at all SNRs under an average power constraint. Some of our new sequences are an order of magnitude shorter than those used by the 802.15.4a standard. In joint work with P. Mercier and D. Daly, we demonstrate the first energy-sampling wireless modem capable of synchronizing to within a ns, while sampling energy at only 32 Msamples per second and using no high-speed clocks. We show that traditional minimum distance classifiers may be highly sensitive to parameter estimation errors, and propose robust, computationally efficient alternatives. We challenge the prevailing notion that energy samplers must accurately shift phase to synchronize with high precision. by Manish Bhardwaj. Ph.D.
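
    The core idea of observation-limited decoding, querying next the sample about which the decoder is most uncertain, can be sketched for any small codebook as follows. This toy posterior-entropy loop is an assumption-laden illustration, not the thesis's scheme or its Golay-code implementation.

```python
import numpy as np

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def adaptive_decode(codebook, tx_index, noise_std, max_obs, seed=0):
    """Greedily observe the code bit whose posterior marginal is most
    uncertain; return the MAP codeword index after max_obs samples.

    codebook : (M, n) 0/1 array listing all candidate codewords
    """
    rng = np.random.default_rng(seed)
    M, n = codebook.shape
    signal = 1.0 - 2.0 * codebook[tx_index]       # BPSK: 0 -> +1, 1 -> -1
    log_post = np.zeros(M)                        # uniform prior over codewords
    unobserved = set(range(n))
    for _ in range(max_obs):
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        p1 = post @ codebook                      # P(bit j = 1 | observations)
        j = max(unobserved, key=lambda t: binary_entropy(p1[t]))
        unobserved.remove(j)
        y = signal[j] + noise_std * rng.normal()  # sample only position j
        s = 1.0 - 2.0 * codebook[:, j]
        log_post += -((y - s) ** 2) / (2.0 * noise_std ** 2)
    return int(np.argmax(log_post))

# Toy codebook: the four codewords of a small (6,2) code
codebook = np.array([[0, 0, 0, 0, 0, 0],
                     [1, 1, 1, 0, 0, 1],
                     [0, 0, 1, 1, 1, 1],
                     [1, 1, 0, 1, 1, 0]])
print(adaptive_decode(codebook, tx_index=2, noise_std=0.6, max_obs=4))
```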

    Spread spectrum-based video watermarking algorithms for copyright protection

    Digital technologies have seen unprecedented expansion in recent years. The consumer can now benefit from hardware and software that were considered state-of-the-art only a few years ago. The advantages offered by digital technologies are major, but the same digital technology opens the door to unlimited piracy. Copying an analogue VCR tape was certainly possible and relatively easy, in spite of various forms of protection, but due to the analogue environment the subsequent copies had an inherent loss in quality. This was a natural way of limiting the multiple copying of video material. With digital technology this barrier disappears, making it possible to produce as many copies as desired without any loss in quality whatsoever. Digital watermarking is one of the best available tools for fighting this threat. The aim of the present work was to develop a digital watermarking system compliant with the recommendations drawn up by the EBU for video broadcast monitoring. Since the watermark can be inserted in either the spatial domain or a transform domain, this aspect was investigated and led to the conclusion that the wavelet transform is one of the best solutions available. Since watermarking is not an easy task, especially considering robustness under various attacks, several techniques were employed to increase the capacity and robustness of the system: spread-spectrum and modulation techniques to cast the watermark, powerful error correction to protect the mark, and human visual models to insert a robust mark and ensure its invisibility. The combination of these methods led to a major improvement, yet the system was still not robust to several important geometrical attacks. In order to achieve this last milestone, the system uses two distinct watermarks: a spatial-domain reference watermark and the main watermark embedded in the wavelet domain. Using this reference watermark and techniques specific to image registration, the system is able to determine the parameters of an attack and revert it; once the attack is reverted, the main watermark is recovered. The final result is a high-capacity, blind DWT-based video watermarking system, robust to a wide range of attacks. BBC Research & Development
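
    As a concrete illustration of the spread-spectrum casting step: each payload bit modulates a key-seeded pseudo-random chip sequence that is added at low amplitude to host transform coefficients, and blind detection correlates against regenerated chips. The sketch below works on a flat coefficient vector standing in for a wavelet subband; all names and parameters are illustrative, not the thesis's system.

```python
import numpy as np

def embed_watermark(coeffs, bits, key, alpha):
    """Add one key-seeded +/-1 chip sequence per payload bit."""
    rng = np.random.default_rng(key)
    n = coeffs.size // len(bits)
    marked = coeffs.astype(float).copy()
    for i, b in enumerate(bits):
        chips = rng.choice([-1.0, 1.0], size=n)
        marked[i * n:(i + 1) * n] += alpha * (1.0 if b else -1.0) * chips
    return marked

def detect_watermark(marked, num_bits, key):
    """Blind detection: correlate with regenerated chips, take the sign."""
    rng = np.random.default_rng(key)
    n = marked.size // num_bits
    bits = []
    for i in range(num_bits):
        chips = rng.choice([-1.0, 1.0], size=n)
        bits.append(1 if marked[i * n:(i + 1) * n] @ chips > 0 else 0)
    return bits

# Stand-in for a wavelet subband; alpha exaggerated for the demo
host = np.random.default_rng(7).normal(0.0, 5.0, size=4000)
marked = embed_watermark(host, [1, 0, 1, 1], key=42, alpha=1.0)
print(detect_watermark(marked, num_bits=4, key=42))   # expected [1, 0, 1, 1]
```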

    A STUDY OF ERASURE CORRECTING CODES

    This work focuses on erasure codes, particularly high-performance ones, and the related decoding algorithms, especially those with low computational complexity. The work is composed of different pieces, but the main components are developed within the following two main themes. Ideas of message passing are applied to solve the erasures after transmission. An efficient matrix representation of the belief propagation (BP) decoding algorithm on the BEC is introduced as the recovery algorithm. Gallager's bit-flipping algorithm is further developed into the guess and multi-guess algorithms, especially for recovering the erasures left unsolved by the recovery algorithm. A novel maximum-likelihood decoding algorithm, the In-place algorithm, is proposed with a reduced computational complexity. A further study of the marginal number of erasures correctable by the In-place algorithm determines a lower bound on the average number of correctable erasures. Following the spirit of searching for the most likely codeword based on the received vector, we propose a new branch-evaluation-search-on-the-code-tree (BESOT) algorithm, which is powerful enough to approach ML performance for all linear block codes. To maximise the recovery capability of the In-place algorithm in network transmissions, we propose the product packetisation structure to reconcile the computational complexity of the In-place algorithm. Combined with the proposed product packetisation structure, the computational complexity is less than the quadratic complexity bound. We then extend this to the Rayleigh fading channel to solve both errors and erasures. By concatenating an outer code, such as a BCH code, the product-packetised RS codes under the hard-decision In-place algorithm perform significantly better than soft-decision iterative algorithms on optimally designed LDPC codes.
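
    The recovery algorithm referred to above, BP decoding on the binary erasure channel, reduces to repeatedly solving parity checks that contain exactly one erased position. A minimal sketch follows; the matrix representation, the guess and multi-guess extensions and the In-place ML stage are not shown.

```python
import numpy as np

def peel_erasures(H, word, erased):
    """BP erasure recovery: repeatedly solve parity checks that contain
    exactly one erased position.

    H      : (m, n) parity-check matrix;  word : length-n received bits;
    erased : boolean mask marking the erased positions
    """
    word, erased = word.copy(), erased.copy()
    progress = True
    while progress and erased.any():
        progress = False
        for row in H.astype(bool):
            unknown = np.flatnonzero(row & erased)
            if len(unknown) == 1:                  # degree-one check found
                j = unknown[0]
                word[j] = word[row & ~erased].sum() % 2
                erased[j] = False
                progress = True
    return word, erased        # remaining True flags are unsolved erasures

# (7,4) Hamming codeword [1,0,1,0,0,1,0] with two erasures (dummy zeros)
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
rx = np.array([1, 0, 1, 0, 0, 0, 0])
erased = np.array([False, True, False, False, False, True, False])
word, left = peel_erasures(H, rx, erased)
print(word, left.any())        # -> [1 0 1 0 0 1 0] False
```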

    A study of major coding techniques for digital communication: Final report

    Coding techniques for digital communication channels

    Compound codes based on irregular graphs and their iterative decoding.

    Thesis (Ph.D.)-University of KwaZulu-Natal, Durban, 2004. Low-density parity-check (LDPC) codes form a Shannon-limit-approaching class of linear block codes. With iterative decoding based on their Tanner graphs, they can achieve outstanding performance. Since their rediscovery in the late 1990s, the design, construction, and decoding of LDPC codes, as well as their generalization, have become one of the focal research points. This thesis takes a few more steps in these directions. The first significant contribution of this thesis is the introduction of a new class of codes called Generalized Irregular Low-Density (GILD) parity-check codes, which are adapted from the previously known class of Generalized Low-Density (GLD) codes. GILD codes are a generalization of irregular LDPC codes, and are shown to outperform GLD codes. In addition, GILD codes have a significant advantage over GLD codes in terms of encoding and decoding complexity. They are also able to match and even beat LDPC codes for small block lengths. The second significant contribution of this thesis is the proposition of several decoding algorithms. Two new decoding algorithms for LDPC codes are introduced; in principle and complexity these algorithms can be grouped with bit-flipping algorithms. Two soft-input soft-output (SISO) decoding algorithms for linear block codes are also proposed. The first algorithm is based on Maximum a Posteriori Probability (MAP) decoding of a low-weight subtrellis centered around a generated candidate codeword. The second algorithm modifies and utilizes the improved Kaneko decoding algorithm for soft-input hard-output decoding; these hard outputs are converted to soft decisions using reliability calculations. Simulation results indicate that the proposed algorithms provide a significant improvement in error performance over Chase-based algorithms and achieve practically optimal performance with a significant reduction in decoding complexity. An analytical expression for the union bound on the bit error probability of linear codes on the Gilbert-Elliott (GE) channel model is also derived. This analytical result is shown to be accurate in establishing the decoder performance in the range where obtaining sufficient data from simulation is impractical.
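
    Since the two new LDPC decoding algorithms are grouped with bit-flipping decoders, a baseline Gallager-style bit-flipping pass is sketched below for context. This is the classical textbook algorithm, not either of the thesis's variants.

```python
import numpy as np

def bit_flip_decode(H, hard_bits, max_iters=30):
    """Gallager-style bit flipping: flip every bit involved in the
    largest number of unsatisfied parity checks, repeat until the
    syndrome is zero or the iterations run out."""
    x = hard_bits.copy()
    for _ in range(max_iters):
        syndrome = (H @ x) % 2
        if not syndrome.any():
            return x, True                     # valid codeword reached
        counts = syndrome @ H                  # unsatisfied checks per bit
        x = np.where(counts == counts.max(), x ^ 1, x)
    return x, False

# Codeword [1,0,1,0,0,1,0] of the (7,4) Hamming code, first bit flipped
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
rx = np.array([0, 0, 1, 0, 0, 1, 0])
print(bit_flip_decode(H, rx))    # -> (array([1, 0, 1, 0, 0, 1, 0]), True)
```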

    Design of serially-concatenated LDGM codes

    Since Shannon demonstrated in 1948 the feasibility of achieving an arbitrarily low error probability in a communications system provided that the transmission rate is kept below a certain limit, one of the greatest challenges in the realm of digital communications, and more specifically in the channel coding field, has been finding codes that are able to approach this limit as closely as possible with a reasonable encoding and decoding complexity. However, it was not until 1993, when Berrou et al. presented the turbo codes, that a coding scheme was found capable of performing at less than 1 dB from Shannon's limit with an extremely low error probability. The idea on which these codes are based is the iterative decoding of concatenated components that exchange information about the transmitted bits, which is known as the "turbo principle". The generalization of this idea led in 1995 to the rediscovery of LDPC (Low-Density Parity-Check) codes, proposed for the first time by Gallager in the 1960s. LDPC codes are linear block codes with a sparse parity-check matrix that are able to surpass the performance of turbo codes with a smaller decoding complexity. However, because the generator matrix of a general LDPC code is not sparse, the encoding complexity can be excessively high. LDGM (Low-Density Generator Matrix) codes, a particular case of LDPC codes, have a sparse generator matrix, thanks to which they present a lower encoding complexity. However, except for the case of very high-rate codes, LDGM codes are "bad", i.e., they have a non-zero error probability that is independent of the code block length. More recently, IRA (Irregular Repeat-Accumulate) codes, consisting of the serial concatenation of an LDGM code and an accumulator, have been proposed. IRA codes are able to get close to the performance of LDPC codes with an encoding complexity similar to that of LDGM codes. In this thesis we explore an alternative to IRA codes consisting of the serial concatenation of two LDGM codes, a scheme that we denote SCLDGM (Serially-Concatenated Low-Density Generator Matrix). The basic premise of SCLDGM codes is that an inner code of rate close to the desired transmission rate fixes most of the errors, and an outer code of rate close to one corrects the few errors that remain after decoding the inner code. For any of these schemes to perform as close as possible to the capacity limit, it is necessary to determine the code parameters that best fit the channel over which the transmission will take place. The two techniques most commonly used in the literature to optimize LDPC codes are Density Evolution (DE) and EXtrinsic Information Transfer (EXIT) charts, which have been employed to obtain optimized codes that perform within a few tenths of a decibel of the AWGN channel capacity. However, no optimization techniques have been presented for SCLDGM codes, which so far have been designed heuristically, and therefore their performance falls short of that achieved by IRA and LDPC codes.
    Another of the most important advances of recent years is the use of multiple antennas at the transmitter and the receiver, known as MIMO (Multiple-Input Multiple-Output) systems. Telatar showed that the channel capacity of such systems scales linearly with the minimum of the numbers of transmit and receive antennas, which enables spectral efficiencies far greater than those of systems with a single transmit and a single receive antenna (Single-Input Single-Output, or SISO, systems). This important advantage has attracted a lot of attention from the research community, and many recent standards, such as WiMAX 802.16e and WiFi 802.11n, as well as future 4G systems, are based on MIMO. The main problem of MIMO systems is the high complexity of optimum detection, which grows exponentially with the number of transmit antennas and the number of modulation levels. Several suboptimum algorithms have been proposed to reduce this complexity, most notably the SIC-MMSE (Soft-Interference-Cancellation Minimum Mean Square Error) and sphere detectors. Another major issue is the high complexity of channel estimation, due to the large number of coefficients that determine the channel. Techniques such as Maximum-Likelihood Expectation-Maximization (ML-EM) have been successfully applied to estimate MIMO channels but, as in the case of detection, they suffer from very high complexity when the number of transmit antennas or the size of the constellation increases.
    The main objective of this work is the study and optimization of SCLDGM codes in SISO and MIMO channels. To this end, we propose an optimization method for SCLDGM codes based on EXIT charts that allows these codes to exceed the performance of the IRA codes in the literature and to approach the performance of LDPC codes, with the advantage over the latter of a lower encoding complexity. We also propose SCLDGM codes optimized for both sphere and SIC-MMSE suboptimum MIMO detectors, constituting a system that is capable of approaching the capacity limits of MIMO channels with low-complexity encoding, detection and decoding. We analyze the BICM (Bit-Interleaved Coded Modulation) scheme and the concatenation of SCLDGM codes with Space-Time Codes (STC) in ergodic and quasi-static MIMO channels. Furthermore, we explore the combination of these codes with different channel estimation algorithms that exploit the low complexity of the suboptimum detectors to reduce the complexity of the estimation process while keeping a small distance to the capacity limit. Finally, we propose coding schemes for low rates involving the serial concatenation of several LDGM codes, reducing the complexity of recently proposed schemes based on Hadamard codes.
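
    To fix ideas, a systematic LDGM encoder and the serial concatenation just described can be sketched as follows. The rates, row weights and random matrix construction are illustrative placeholders; the actual degree profiles are obtained in the thesis by EXIT chart optimization.

```python
import numpy as np

rng = np.random.default_rng(1)

def sparse_generator_part(k, m, row_weight):
    """Random sparse k x m binary matrix with `row_weight` ones per row."""
    P = np.zeros((k, m), dtype=int)
    for r in range(k):
        P[r, rng.choice(m, size=row_weight, replace=False)] = 1
    return P

def ldgm_encode(u, P):
    """Systematic LDGM encoding: codeword = [u | u P] over GF(2)."""
    return np.concatenate([u, (u @ P) % 2])

# Outer code of rate close to one, inner code near the target rate
k = 100
P_outer = sparse_generator_part(k, 10, row_weight=3)        # rate 100/110
P_inner = sparse_generator_part(k + 10, 110, row_weight=3)  # rate 110/220
u = rng.integers(0, 2, size=k)
codeword = ldgm_encode(ldgm_encode(u, P_outer), P_inner)
print(len(codeword))   # 220 coded bits for 100 information bits, rate ~0.45
```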