77 research outputs found

    Lemma for Linear Feedback Shift Registers and DFTs Applied to Affine Variety Codes

    In this paper, we establish a lemma in algebraic coding theory that frequently appears in the encoding and decoding of, e.g., Reed-Solomon codes, algebraic geometry codes, and affine variety codes. Our lemma corresponds to the non-systematic encoding of affine variety codes, and can be stated by giving a canonical linear map as the composition of an extension through linear feedback shift registers from a Gröbner basis and a generalized inverse discrete Fourier transform. We clarify that our lemma yields the error-value estimation in the fast erasure-and-error decoding of a class of dual affine variety codes. Moreover, we show that systematic encoding corresponds to a special case of erasure-only decoding. The lemma enables us to reduce the computational complexity of error-evaluation from O(n^3) using Gaussian elimination to O(qn^2) with some mild conditions on n and q, where n is the code length and q is the finite-field size. Comment: 37 pages, 1 column, 10 figures, 2 tables, resubmitted to IEEE Transactions on Information Theory on Jan. 8, 201
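    As a point of reference, the sketch below illustrates the evaluation-map view of non-systematic encoding in the simplest special case the abstract builds on: a full-length Reed-Solomon code over a prime field, where the encoding map is exactly a finite-field discrete Fourier transform. The field GF(17) and the primitive element 3 are illustrative choices, not parameters from the paper, and the sketch does not cover the multivariate affine-variety generalization.

```python
# Minimal sketch: non-systematic Reed-Solomon encoding as a DFT over GF(17),
# with the inverse DFT recovering the information symbols.  Illustrative only.

P = 17          # field size q (prime, so arithmetic is plain integers mod P)
ALPHA = 3       # a primitive element of GF(17); its powers run through 1..16
N = P - 1       # length of the full-length Reed-Solomon code

def dft_encode(message):
    """Evaluate the message polynomial at ALPHA^0, ..., ALPHA^(N-1)."""
    assert len(message) <= N
    codeword = []
    for i in range(N):
        x = pow(ALPHA, i, P)
        value = 0
        for coeff in reversed(message):    # Horner evaluation of the polynomial
            value = (value * x + coeff) % P
        codeword.append(value)
    return codeword

def idft(codeword):
    """Inverse DFT over GF(17): recovers the polynomial coefficients."""
    n_inv = pow(N, P - 2, P)               # N^(-1) mod P via Fermat's little theorem
    coeffs = []
    for j in range(N):
        acc = 0
        for i, c in enumerate(codeword):
            acc = (acc + c * pow(ALPHA, (-i * j) % N, P)) % P
        coeffs.append(acc * n_inv % P)
    return coeffs

if __name__ == "__main__":
    msg = [5, 1, 0, 7]                     # k = 4 information symbols (low degree first)
    cw = dft_encode(msg)                   # n = 16 codeword symbols
    rec = idft(cw)
    assert rec[:len(msg)] == msg and set(rec[len(msg):]) == {0}
```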

    Fast Erasure-and-Error Decoding and Systematic Encoding of a Class of Affine Variety Codes

    In this paper, a lemma in algebraic coding theory is established which frequently appears in the encoding and decoding of algebraic codes such as Reed-Solomon codes and algebraic geometry codes. This lemma states that two vector spaces, one corresponding to information symbols and the other indexed by the support of a Gröbner basis, are canonically isomorphic, and moreover, that the isomorphism is given by the extension through linear feedback shift registers from the Gröbner basis and by discrete Fourier transforms. The lemma is then applied to a fast unified system for encoding and erasure-and-error decoding of a certain class of affine variety codes. Comment: 6 pages, 2 columns, presented at The 34th Symposium on Information Theory and Its Applications (SITA2011)
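    The sketch below shows the univariate version of the LFSR-extension step mentioned above, where the Gröbner basis degenerates to a single feedback polynomial. The prime field and the example taps are hypothetical choices for illustration, not taken from the paper.

```python
# Minimal sketch: extending a sequence with a linear feedback shift register
# over a prime field, given the taps of its feedback polynomial.

P = 17  # illustrative prime field size

def lfsr_extend(initial, taps, length):
    """Extend `initial` to `length` symbols using the recurrence
    taps[0]*s[t] + taps[1]*s[t-1] + ... + taps[d]*s[t-d] = 0 (mod P),
    i.e. the sequence annihilated by the feedback polynomial
    taps[0]*x^d + taps[1]*x^(d-1) + ... + taps[d]."""
    d = len(taps) - 1
    assert len(initial) >= d and taps[0] % P != 0
    s = list(initial)
    inv_lead = pow(taps[0], P - 2, P)        # (taps[0])^(-1) mod P
    while len(s) < length:
        acc = sum(taps[j] * s[-j] for j in range(1, d + 1)) % P
        s.append((-acc * inv_lead) % P)
    return s

# Example: the Fibonacci-like recurrence s[t] = s[t-1] + s[t-2] over GF(17)
# corresponds to the feedback polynomial x^2 - x - 1.
print(lfsr_extend([1, 1], [1, -1, -1], 10))
```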

    Applications of Derandomization Theory in Coding

    Randomized techniques play a fundamental role in theoretical computer science and discrete mathematics, in particular for the design of efficient algorithms and the construction of combinatorial objects. The basic goal in derandomization theory is to eliminate or reduce the need for randomness in such randomized constructions. In this thesis, we explore some applications of the fundamental notions in derandomization theory to problems outside the core of theoretical computer science, in particular to certain problems related to coding theory. First, we consider the wiretap channel problem, which involves a communication system in which an intruder can eavesdrop on a limited portion of the transmissions, and construct efficient and information-theoretically optimal communication protocols for this model. Then we consider the combinatorial group testing problem. In this classical problem, one aims to determine a set of defective items within a large population by asking a number of queries, where each query reveals whether a defective item is present within a specified group of items. We use randomness condensers to explicitly construct optimal, or nearly optimal, group testing schemes for a setting where the query outcomes can be highly unreliable, as well as for the threshold model where a query returns positive only if the number of defectives passes a certain threshold. Finally, we design ensembles of error-correcting codes that achieve the information-theoretic capacity of a large class of communication channels, and then use the obtained ensembles for the construction of explicit capacity-achieving codes. [This is a shortened version of the actual abstract in the thesis.] Comment: EPFL PhD Thesis
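    For context, the sketch below works through the classical noiseless version of the group testing problem with the simple decoder that clears every item occurring in a negative test (often called COMP). The test design and defective set are made-up toy data; the thesis's constructions for unreliable and threshold outcomes are far stronger and are not reproduced here.

```python
# Minimal sketch of non-adaptive group testing in the noiseless setting.

def run_tests(test_matrix, defectives):
    """A test is positive iff its group contains at least one defective item."""
    return [any(item in defectives for item in group) for group in test_matrix]

def comp_decode(test_matrix, outcomes, num_items):
    """Declare an item defective unless it appears in some negative test."""
    cleared = set()
    for group, positive in zip(test_matrix, outcomes):
        if not positive:
            cleared.update(group)
    return sorted(set(range(num_items)) - cleared)

if __name__ == "__main__":
    tests = [[0, 1, 2], [2, 3, 4], [0, 3, 5], [1, 4, 5], [1, 3], [2, 5]]
    truth = {2, 4}
    outcomes = run_tests(tests, truth)
    print(comp_decode(tests, outcomes, num_items=6))   # -> [2, 4]
```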

    Structural Design and Analysis of Low-Density Parity-Check Codes and Systematic Repeat-Accumulate Codes

    The discovery of two fundamental error-correcting code families, known as turbo codes and low-density parity-check (LDPC) codes, has led to a revolution in coding theory and to a paradigm shift from traditional algebraic codes towards modern graph-based codes that can be decoded by iterative message-passing algorithms. Since then, the development of powerful LDPC and turbo-like codes has become a focal point of research. Besides the classical domain of randomly constructed codes, an alternative and competitive line of research is concerned with highly structured LDPC and turbo-like codes based on combinatorial designs. Such codes are typically characterized by high code rates already at small to moderate code lengths and by good code properties such as the avoidance of harmful 4-cycles in the code's factor graph. Furthermore, their structure can usually be exploited for an efficient implementation; in particular, they can be encoded with low complexity, as opposed to random-like codes. Hence, these codes are suitable for high-speed applications such as magnetic recording or optical communication.

    This thesis greatly contributes to the field of structured LDPC codes and systematic repeat-accumulate (sRA) codes, a subclass of turbo-like codes, by presenting new combinatorial construction techniques and algebraic methods for improved code design. More specifically, novel and infinite families of high-rate structured LDPC codes and sRA codes are presented based on balanced incomplete block designs (BIBDs), which form a subclass of combinatorial designs. Besides showing excellent error-correcting capabilities under iterative decoding, these codes can be implemented efficiently, since their inner structure enables low-complexity encoding and accelerated decoding algorithms.

    A further infinite series of structured LDPC codes is presented based on the notion of transversal designs, which form another subclass of combinatorial designs. With a proper configuration, these codes exhibit excellent decoding performance under iterative decoding, in particular with very low error floors. The approach for lowering these error floors is threefold. First, a thorough analysis of the decoding failures is carried out, resulting in an extensive classification of so-called stopping sets and absorbing sets. These combinatorial entities are known to be the main cause of decoding failures in the error-floor region over the binary erasure channel (BEC) and the additive white Gaussian noise (AWGN) channel, respectively. Second, the specific code structures are exploited in order to derive conditions for the avoidance of the most harmful stopping and absorbing sets. Third, powerful design strategies are derived for the identification of the code instances with the best error-floor performance. The resulting codes can additionally be encoded with low complexity and are thus ideally suited for practical high-speed applications.

    Further investigations are carried out on the infinite family of structured LDPC codes based on finite geometries. It is known that these codes perform very well under iterative decoding and that their encoding can be achieved with low complexity. By combining the latest findings in the fields of finite geometries and combinatorial designs, we derive new theoretical insights into the decoding failures of such codes under iterative decoding. These examinations finally help to identify the geometric codes with the most beneficial error-correcting capabilities over the BEC.
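    To make the role of stopping sets concrete, the following sketch runs iterative (peeling) erasure decoding on a toy parity-check matrix: peeling succeeds exactly when the erased coordinates do not contain a non-empty stopping set. The matrix and erasure pattern are illustrative toy data, not taken from the thesis.

```python
# Minimal sketch: peeling decoder over the binary erasure channel (BEC).

def peel_bec(H, received):
    """received[i] is 0, 1, or None (erased); H is a list of parity-check rows."""
    word = list(received)
    progress = True
    while progress:
        progress = False
        for row in H:
            erased = [j for j, h in enumerate(row) if h and word[j] is None]
            if len(erased) == 1:                  # check with a single erased bit
                j = erased[0]
                known = sum(word[k] for k, h in enumerate(row) if h and k != j) % 2
                word[j] = known                   # solve the parity equation for it
                progress = True
    return word                                   # any remaining None lies in a stopping set

if __name__ == "__main__":
    # Parity checks of the (7,4) Hamming code
    H = [[1, 1, 0, 1, 1, 0, 0],
         [1, 0, 1, 1, 0, 1, 0],
         [0, 1, 1, 1, 0, 0, 1]]
    codeword = [1, 1, 1, 0, 0, 0, 0]
    received = [1, None, 1, None, 0, 0, 0]        # two erasures; peeling recovers both
    print(peel_bec(H, received))
    # Erasing positions {0, 1, 2} instead would hit a stopping set (every check
    # touching the set touches it at least twice), and peeling would stall.
```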

    Error-Correction Coding and Decoding: Bounds, Codes, Decoders, Analysis and Applications

    Coding; Communications; Engineering; Networks; Information Theory; Algorithm