
    Efficient data compression from statistical physics of codes over finite fields

    In this paper we discuss a novel data compression technique for binary symmetric sources based on the cavity method over a Galois field of order q (GF(q)). We present a scheme of low complexity and near-optimal empirical performance. The compression step is based on a reduction of sparse low-density parity-check codes over GF(q) and is carried out through the so-called reinforced belief-propagation equations. These reduced codes exhibit a non-trivial geometrical modification of the space of codewords, which makes such compression computationally feasible. The computational complexity is O(d·n·q·log(q)) per iteration, where d is the average degree of the check nodes and n is the number of bits. For our code ensemble, decompression can be done in time linear in the code's length by a simple leaf-removal algorithm. (10 pages, 4 figures)
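    As a rough illustration of the leaf-removal step mentioned above, the sketch below peels a sparse GF(2) parity-check system: it repeatedly finds a check with exactly one unknown variable and solves that variable. The function name, the toy graph, and the restriction to the binary field are assumptions made for illustration; the paper itself works over GF(q) and pairs this decompression step with reinforced belief propagation for compression.

```python
# Minimal GF(2) sketch of leaf-removal (peeling): repeatedly find a check
# equation with exactly one still-unknown variable, solve it, and continue.
def leaf_removal_solve(H, syndrome, known):
    """H: list of checks, each a list of variable indices (sparse rows of the PCM).
    syndrome: list of 0/1 target parities, one per check.
    known: dict {var_index: 0/1} of already-fixed variables.
    Returns the dict of solved variables (may be partial if peeling gets stuck)."""
    x = dict(known)
    progress = True
    while progress:
        progress = False
        for row, s in zip(H, syndrome):
            unknown = [v for v in row if v not in x]
            if len(unknown) == 1:              # this check is a "leaf"
                parity = s
                for v in row:
                    if v in x:
                        parity ^= x[v]         # XOR of the known neighbours
                x[unknown[0]] = parity         # the leaf variable is forced
                progress = True
    return x

# Toy example: 3 checks over 4 variables, with variable 0 already known.
H = [[0, 1], [1, 2], [2, 3]]
syndrome = [1, 0, 1]
print(leaf_removal_solve(H, syndrome, known={0: 0}))   # {0: 0, 1: 1, 2: 1, 3: 0}
```

    A queue of degree-one checks (rather than the repeated rescans used here for brevity) is what makes the pass genuinely linear in the code length.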

    Linear-time encoding and decoding of low-density parity-check codes

    Low-density parity-check (LDPC) codes had a renaissance when they were rediscovered in the 1990s. Since then LDPC codes have been an important part of the field of error-correcting codes, and have been shown to approach the Shannon capacity, the limit at which we can reliably transmit information over noisy channels. Following this, many modern communications standards have adopted LDPC codes. Error correction is equally important in protecting data from corruption on a hard drive as it is in deep-space communications; it is most commonly used, for example, for reliable wireless transmission of data to mobile devices. For practical purposes, both encoding and decoding need to be of low complexity to achieve high throughput and low power consumption. This thesis provides a literature review of the current state of the art in encoding and decoding of LDPC codes. Message-passing decoders are still capable of achieving the best error-correcting performance, while more recently considered bit-flipping decoders provide a low-complexity alternative, albeit with some loss in error-correcting performance. An implementation of a low-complexity stochastic bit-flipping decoder is also presented. It is implemented for Graphics Processing Units (GPUs) in a parallel fashion, providing a peak throughput of 1.2 Gb/s, which is significantly higher than previous decoder implementations on GPUs. The error-correcting performance of a range of decoders has also been tested, showing that the stochastic bit-flipping decoder provides relatively good error-correcting performance at low complexity. Finally, a brief comparison of encoding complexities for two code ensembles is also presented.
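    As a hedged illustration of the bit-flipping idea discussed in this thesis, the sketch below implements the basic deterministic flipping rule: count the unsatisfied checks each bit participates in and flip the worst offenders. The thesis's decoder is a stochastic GPU variant; the matrix, function name, and parameters here are illustrative assumptions only.

```python
# Minimal deterministic bit-flipping decoder sketch, assuming a dense numpy
# parity-check matrix H and a hard-decision received word y.
import numpy as np

def bit_flip_decode(H, y, max_iters=50):
    x = y.copy()
    for _ in range(max_iters):
        syndrome = H.dot(x) % 2                  # 1 = unsatisfied check
        if not syndrome.any():
            return x, True                       # valid codeword found
        # count, for each bit, how many unsatisfied checks it participates in
        votes = H.T.dot(syndrome)
        # flip the bit(s) involved in the largest number of failed checks
        x[votes == votes.max()] ^= 1
    return x, False

# Toy Hamming-style (7,4) example with a single bit error.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
codeword = np.zeros(7, dtype=int)
received = codeword.copy()
received[2] ^= 1
print(bit_flip_decode(H, received))              # recovers the all-zero codeword
```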

    Use of the LDPC codes Over the Binary Erasure Multiple Access Channel

    Wireless communications use different orthogonal multiple access techniques to share the radio spectrum. The need for bandwidth efficiency and higher data rates increases with the tremendous growth in the number of mobile users. One promising way to increase the data rate without increasing the bandwidth is the non-orthogonal multiple access channel. For a noiseless channel, such as a data network, the non-orthogonal multiple access channel is called the Binary Erasure Multiple Access Channel (BEMAC). To achieve the two corner points on the boundary of the BEMAC capacity region, a rate-1/2 code is needed. One practical code with good performance over the BEMAC is the Low-Density Parity-Check (LDPC) code. LDPC codes have received a lot of attention due to their good performance and low decoding complexity. However, there is a trade-off between the performance and the decoding complexity of LDPC codes. In addition, LDPC encoding complexity is a problem, because an LDPC code is defined by its parity-check matrix, which is sparse, random, and lacks structure. This thesis consists of two main parts. In the first part, we propose a new practical method to construct an irregular rate-1/2 LDPC code with low encoding complexity. The constructed code is intended to have good performance and low encoding complexity. To obtain low encoding complexity, the parity-check matrix of the code must have a lower-triangular shape. By implementing the encoder and the decoder, the performance of the code can also be evaluated. Due to short cycles in the code and its finite length, the actual rate of the code is degraded. To improve the actual rate, a guessing algorithm is applied when belief propagation gets stuck; the actual rate of the code increases from 0.418 to 0.44. The decoding complexity is not considered when this code is constructed. In the second part, a regular LDPC code with low decoding complexity is constructed. The code is generated based on the Gallager method. We present a new method to improve the performance of an existing regular LDPC code without adding high complexity to the decoder. The method uses a combination of three algorithms: 1) standard belief propagation, 2) generalized tree-expected propagation, and 3) a guessing algorithm. The guessing algorithm becomes impractical as the number of guesses increases, because the number of possibilities grows exponentially with the number of guesses. A new guessing algorithm is therefore proposed in this thesis; it reduces the number of possibilities by guessing the variable nodes that are connected to a set of check nodes. The actual rate of the code increases from 0.41 to 0.43 after applying the proposed method with the number of possibilities set to two in the new guessing algorithm.
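    The low encoding complexity targeted above hinges on the lower-triangular shape of the parity-check matrix, which lets the parity bits be obtained by forward substitution. The sketch below illustrates this under the assumption H = [A | T] with T lower triangular and a unit diagonal; the matrices and function name are made up for illustration and are not taken from the thesis.

```python
# Minimal sketch of low-complexity LDPC encoding with H = [A | T], T lower
# triangular over GF(2): the parity bits p solve A s + T p = 0 and are found
# by forward substitution.
import numpy as np

def encode_lower_triangular(A, T, s):
    m = T.shape[0]
    b = A.dot(s) % 2                 # right-hand side A s
    p = np.zeros(m, dtype=int)
    for i in range(m):               # forward substitution over GF(2)
        p[i] = (b[i] + T[i, :i].dot(p[:i])) % 2
    return np.concatenate([s, p])    # codeword = [systematic bits | parity bits]

# Toy example: 2 information bits, 2 parity bits.
A = np.array([[1, 1],
              [0, 1]])
T = np.array([[1, 0],
              [1, 1]])               # lower triangular with unit diagonal
s = np.array([1, 0])
c = encode_lower_triangular(A, T, s)
H = np.hstack([A, T])
print(c, H.dot(c) % 2)               # the syndrome should be all zeros
```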

    Fountain Codes under Maximum Likelihood Decoding

    This dissertation focuses on fountain codes under maximum likelihood (ML) decoding. First, LT codes are considered under a practical and widely used ML decoding algorithm known as inactivation decoding, and different analysis techniques are presented to characterize the decoding complexity. Next, an upper bound on the probability of decoding failure of Raptor codes under ML decoding is provided. Then, the distance properties of an ensemble of fixed-rate Raptor codes with linear random outer codes are analyzed. Finally, a novel class of fountain codes is presented, which consists of a parallel concatenation of a block code with a linear random fountain code. (PhD thesis)
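    For context, the sketch below shows the basic LT (fountain) encoding operation that inactivation decoding later has to undo: sample a degree, pick that many input symbols at random, and XOR them. The toy degree distribution and function name are assumptions for illustration; practical LT codes use the robust soliton distribution.

```python
# Small LT-encoding sketch over bits, with a made-up degree distribution.
import random

def lt_encode(source_bits, degree_dist, n_output, seed=0):
    """source_bits: list of 0/1 input symbols.
    degree_dist: list of (degree, probability) pairs.
    Returns a list of (neighbour_indices, xor_value) output symbols."""
    rng = random.Random(seed)
    degrees, probs = zip(*degree_dist)
    outputs = []
    for _ in range(n_output):
        d = rng.choices(degrees, weights=probs, k=1)[0]
        neighbours = rng.sample(range(len(source_bits)), d)
        value = 0
        for i in neighbours:
            value ^= source_bits[i]           # XOR of the chosen input symbols
        outputs.append((sorted(neighbours), value))
    return outputs

# Toy usage: 8 source bits, 12 coded symbols.
src = [1, 0, 1, 1, 0, 0, 1, 0]
dist = [(1, 0.1), (2, 0.5), (3, 0.3), (4, 0.1)]
for sym in lt_encode(src, dist, n_output=12):
    print(sym)
```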

    Near-capacity fixed-rate and rateless channel code constructions

    Fixed-rate and rateless channel code constructions are designed to satisfy conflicting design trade-offs, leading to codes that lend themselves to practical implementation whilst offering good bit error ratio (BER) and block error ratio (BLER) performance. More explicitly, two novel low-density parity-check (LDPC) code constructions are proposed; the first constitutes a family of quasi-cyclic protograph LDPC codes, which has a Vandermonde-like parity-check matrix (PCM). The second constitutes a specific class of protograph LDPC codes, termed multilevel structured (MLS) LDPC codes. These codes possess a PCM construction that allows the coexistence of pseudo-randomness and a structure requiring reduced memory. More importantly, it is also demonstrated that these benefits accrue without any compromise in the attainable BER/BLER performance. We also present the novel concept of separating multiple users by means of user-specific channel codes, referred to as channel code division multiple access (CCDMA), and provide an example based on MLS LDPC codes. In particular, we circumvent the difficulty of potentially high memory requirements, while ensuring that each user's bits in the CCDMA system are equally protected. With regard to rateless channel coding, we propose a novel family of codes, referred to as reconfigurable rateless codes, that are capable not only of varying their code rate but also of adaptively modifying their encoding/decoding strategy according to the near-instantaneous channel conditions. We demonstrate that the proposed reconfigurable rateless codes are capable of shaping their own degree distribution according to the near-instantaneous requirements imposed by the channel, without any explicit channel knowledge at the transmitter. Additionally, a generalised transmit-preprocessing-aided closed-loop downlink multiple-input multiple-output (MIMO) system is presented, in which both the channel coding components and the linear transmit precoder exploit knowledge of the channel state information (CSI). More explicitly, we embed a rateless code in a MIMO transmit preprocessing scheme in order to attain near-capacity performance across a wide range of channel signal-to-noise ratios (SNRs), rather than only at a specific SNR. The performance of our scheme is further enhanced with the aid of a technique referred to as pilot symbol assisted rateless (PSAR) coding, whereby a predetermined fraction of pilot bits is appropriately interspersed with the original information bits at the channel coding stage, instead of multiplexing pilots at the modulation stage as in classic pilot symbol assisted modulation (PSAM). We subsequently demonstrate that the PSAR code-aided transmit preprocessing scheme succeeds in gleaning more information from the inserted pilots than the classic PSAM technique, because the pilot bits are not only useful for sounding the channel at the receiver but also beneficial for significantly reducing the computational complexity of the rateless channel decoder.
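    A possible reading of the PSAR step described above, sketched under stated assumptions: a predetermined fraction of known pilot bits is interspersed with the information bits before channel encoding, rather than multiplexing pilot symbols at the modulation stage. The spacing rule, pilot value, and function name below are illustrative choices, not the thesis's actual scheme.

```python
# Hedged sketch of pilot interspersing prior to channel encoding.
def intersperse_pilots(info_bits, pilot_fraction=0.1, pilot_value=0):
    """Insert one pilot bit after every 1/pilot_fraction information bits."""
    spacing = max(1, round(1 / pilot_fraction))
    out, positions = [], []
    for i, b in enumerate(info_bits):
        out.append(b)
        if (i + 1) % spacing == 0:
            positions.append(len(out))   # remember where the pilot sits
            out.append(pilot_value)      # known bit the receiver can exploit
    return out, positions                # feed `out` to the rateless encoder

bits = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
mixed, pilot_positions = intersperse_pilots(bits, pilot_fraction=0.25)
print(mixed)
print(pilot_positions)
```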

    Modern Coding Theory: The Statistical Mechanics and Computer Science Point of View

    These are the notes for a set of lectures delivered by the two authors at the Les Houches Summer School on `Complex Systems' in July 2006. They provide an introduction to the basic concepts in modern (probabilistic) coding theory, highlighting connections with statistical mechanics. We also stress concepts shared with other disciplines dealing with similar problems, which can be generically referred to as `large graphical models'. While most of the lectures are devoted to the classical channel coding problem over simple memoryless channels, we present a discussion of more complex channel models. We conclude with an overview of the main open challenges in the field. (44 pages, 25 figures)
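    In the graphical-model spirit of these lectures, the sketch below runs a loop-based sum-product (belief-propagation) decoder with log-likelihood-ratio messages on a tiny Tanner graph. The code, channel LLRs, and function name are illustrative assumptions rather than an example from the notes.

```python
# Loop-based sum-product decoder sketch over LLR messages on a Tanner graph.
import math

def sum_product_decode(checks, channel_llr, max_iters=30):
    """checks: list of checks, each a list of variable indices.
    channel_llr: per-bit LLR, log P(y|x=0) / P(y|x=1) (positive favours 0)."""
    edges = [(c, v) for c, row in enumerate(checks) for v in row]
    v2c = {e: channel_llr[e[1]] for e in edges}   # variable -> check messages
    c2v = {e: 0.0 for e in edges}                 # check -> variable messages
    x_hat = [0 if l >= 0 else 1 for l in channel_llr]
    for _ in range(max_iters):
        # check-to-variable update: the "tanh rule" over the other neighbours
        for c, row in enumerate(checks):
            for v in row:
                prod = 1.0
                for u in row:
                    if u != v:
                        prod *= math.tanh(v2c[(c, u)] / 2)
                prod = min(max(prod, -0.999999), 0.999999)
                c2v[(c, v)] = 2 * math.atanh(prod)
        # variable-to-check update and tentative hard decision
        totals = list(channel_llr)
        for (c, v), msg in c2v.items():
            totals[v] += msg
        for c, v in edges:
            v2c[(c, v)] = totals[v] - c2v[(c, v)]
        x_hat = [0 if t >= 0 else 1 for t in totals]
        if all(sum(x_hat[v] for v in row) % 2 == 0 for row in checks):
            return x_hat, True                    # all parity checks satisfied
    return x_hat, False

# Toy run: 3 checks over 6 bits; the all-zero codeword with one unreliable bit.
checks = [[0, 1, 2], [2, 3, 4], [4, 5, 0]]
llr = [2.1, 1.8, -0.3, 2.5, 1.9, 2.2]
print(sum_product_decode(checks, llr))            # ([0, 0, 0, 0, 0, 0], True)
```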

    Deriving Good LDPC Convolutional Codes from LDPC Block Codes

    Low-density parity-check (LDPC) convolutional codes are capable of achieving excellent performance with low encoding and decoding complexity. In this paper we discuss several graph-cover-based methods for deriving families of time-invariant and time-varying LDPC convolutional codes from LDPC block codes, and show how earlier proposed LDPC convolutional code constructions can be presented within this framework. Some of the constructed convolutional codes significantly outperform the underlying LDPC block codes. We investigate some possible reasons for this "convolutional gain", and we also discuss the (mostly moderate) decoder cost increase that is incurred by going from LDPC block to LDPC convolutional codes. (Submitted to IEEE Transactions on Information Theory, April 2010; revised August 2010 and November 2010, essentially the final version. Besides many small changes, the revised versions contain corrected entries in Tables I and II.)
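    One of the simplest graph covers underlying such constructions is a circulant (cyclic) lifting of a small base matrix: every nonzero entry is replaced by a cyclically shifted identity block, every zero by an all-zero block. The sketch below illustrates this; the base matrix, shift values, and lifting size are assumptions for illustration and not taken from the paper, whose convolutional codes are then obtained by unwrapping such covers.

```python
# Sketch of a circulant (graph-cover) lifting of a small base parity-check matrix.
import numpy as np

def circulant(M, shift):
    """M x M identity matrix with its columns cyclically shifted by `shift`."""
    return np.roll(np.eye(M, dtype=int), shift, axis=1)

def lift(base, shifts, M):
    """base: binary base matrix; shifts: same-shape array of shift values."""
    rows = []
    for i in range(base.shape[0]):
        blocks = [circulant(M, shifts[i, j]) if base[i, j] else
                  np.zeros((M, M), dtype=int) for j in range(base.shape[1])]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

base = np.array([[1, 1, 1, 0],
                 [0, 1, 1, 1]])
shifts = np.array([[0, 1, 2, 0],
                   [0, 2, 4, 1]])
H = lift(base, shifts, M=5)          # 10 x 20 quasi-cyclic parity-check matrix
print(H.shape, H.sum(axis=0))        # every column keeps the base column weight
```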