
    Coding Theory and Algebraic Combinatorics

    This chapter introduces and elaborates on the fruitful interplay of coding theory and algebraic combinatorics, with most of the focus on the interaction of codes with combinatorial designs, finite geometries, simple groups, sphere packings, kissing numbers, lattices, and association schemes. Particular attention is devoted to the relationship between codes and combinatorial designs. We describe and recapitulate important results in the development of the state of the art, give illustrative examples and constructions, and highlight recent advances. Finally, we provide a collection of significant open problems and challenges for future research. Comment: 33 pages; handbook chapter, to appear in: "Selected Topics in Information and Coding Theory", ed. by I. Woungang et al., World Scientific, Singapore, 201
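
    As a small illustration of the codes-designs connection surveyed in this chapter (an assumed example, not taken from the chapter itself), the following Python sketch enumerates the weight-4 codewords of the extended [8,4,4] Hamming code and checks that their supports form the Steiner system S(3,4,8), i.e. every 3-subset of the 8 coordinates lies in exactly one support.

    from itertools import combinations, product

    # A generator matrix for the extended [8,4,4] Hamming code (one standard choice).
    G = [
        [1, 0, 0, 0, 0, 1, 1, 1],
        [0, 1, 0, 0, 1, 0, 1, 1],
        [0, 0, 1, 0, 1, 1, 0, 1],
        [0, 0, 0, 1, 1, 1, 1, 0],
    ]

    # Enumerate all 16 codewords as GF(2) linear combinations of the rows.
    codewords = []
    for coeffs in product([0, 1], repeat=4):
        word = tuple(sum(c * g for c, g in zip(coeffs, col)) % 2 for col in zip(*G))
        codewords.append(word)

    # Supports of the weight-4 codewords: the 14 blocks of a 3-(8, 4, 1) design, S(3,4,8).
    blocks = [frozenset(i for i, bit in enumerate(w) if bit) for w in codewords if sum(w) == 4]
    print(len(blocks), "blocks of size 4")          # expected: 14

    # Steiner property: every 3-subset of the 8 coordinates lies in exactly one block.
    for triple in combinations(range(8), 3):
        assert sum(1 for b in blocks if set(triple) <= b) == 1
    print("every 3-subset of coordinates is covered exactly once")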

    Self-Dual Codes

    Self-dual codes are important because many of the best codes known are of this type and they have a rich mathematical theory. Topics covered in this survey include codes over F_2, F_3, F_4, F_q, Z_4, Z_m, shadow codes, weight enumerators, the Gleason-Pierce theorem, invariant theory, Gleason theorems, bounds, mass formulae, enumeration, extremal codes, and open problems. There is a comprehensive bibliography. Comment: 136 pages
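
    As a minimal sketch of the notions surveyed here (an assumed illustration, not from the survey itself), the following Python fragment verifies self-duality of the extended [8,4] Hamming code by checking that its generator matrix is self-orthogonal over F_2 with dimension n/2, and computes its weight distribution.

    from itertools import product

    # Generator matrix of the extended [8,4] Hamming code, a classical binary self-dual code.
    G = [
        [1, 0, 0, 0, 0, 1, 1, 1],
        [0, 1, 0, 0, 1, 0, 1, 1],
        [0, 0, 1, 0, 1, 1, 0, 1],
        [0, 0, 0, 1, 1, 1, 1, 0],
    ]
    n, k = len(G[0]), len(G)

    # Self-duality check: G is self-orthogonal over F_2 (every pair of rows, including a row
    # with itself, has even overlap) and the dimension is n/2.
    self_orthogonal = all(sum(a * b for a, b in zip(G[i], G[j])) % 2 == 0
                          for i in range(k) for j in range(k))
    print("self-dual:", self_orthogonal and 2 * k == n)

    # Weight distribution A_w = number of codewords of Hamming weight w.
    weights = {}
    for coeffs in product([0, 1], repeat=k):
        w = sum(sum(c * g for c, g in zip(coeffs, col)) % 2 for col in zip(*G))
        weights[w] = weights.get(w, 0) + 1
    print("weight distribution:", dict(sorted(weights.items())))   # {0: 1, 4: 14, 8: 1}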

    A STUDY OF LINEAR ERROR CORRECTING CODES

    Since Shannon's ground-breaking work in 1948, there have been two main development streams of channel coding in approaching the limit of communication channels, namely classical coding theory, which aims at designing codes with large minimum Hamming distance, and probabilistic coding, which places the emphasis on low-complexity probabilistic decoding using long codes built from simple constituent codes. This work presents some further investigations in these two channel coding development streams. Low-density parity-check (LDPC) codes form a class of capacity-approaching codes with sparse parity-check matrices and low-complexity decoders. Two novel methods of constructing algebraic binary LDPC codes are presented. These methods are based on the theory of cyclotomic cosets, idempotents and Mattson-Solomon polynomials, and are complementary to each other. In addition to some new cyclic iteratively decodable codes, the two methods generate the well-known Euclidean and projective geometry codes. Their extension to non-binary fields is shown to be straightforward. For short block lengths, these algebraic cyclic LDPC codes converge considerably well under iterative decoding. It is also shown that for some of these codes, maximum likelihood performance may be achieved by a modified belief propagation decoder which uses a different subset of the codewords of the dual code for each iteration. Following a property of the revolving-door combination generator, multi-threaded minimum Hamming distance computation algorithms are developed. Using these algorithms, the previously unknown minimum Hamming distance of the quadratic residue code for prime 199 has been evaluated. In addition, the highest minimum Hamming distance attainable by all binary cyclic codes of odd lengths from 129 to 189 has been determined, and as many as 901 new binary linear codes which have higher minimum Hamming distance than the previously best known linear codes have been found. It is shown that, by exploiting the structure of circulant matrices, the number of codewords that must be enumerated to compute the minimum Hamming distance, or the number of codewords of a given Hamming weight, of binary double-circulant codes based on primes may be reduced. A means of independently verifying the exhaustively computed number of codewords of a given Hamming weight of these double-circulant codes is developed, and in conjunction with this, it is proved that some published results are incorrect and the correct weight spectra are presented. Moreover, it is shown that it is possible to estimate the minimum Hamming distance of this family of prime-based double-circulant codes. It is shown that linear codes may be efficiently decoded using the incremental correlation Dorsch algorithm. By extending this algorithm, a list decoder is derived, and a novel, CRC-less error detection mechanism that offers much better throughput and performance than the conventional CRC scheme is described. Using the same method it is shown that the performance of the conventional CRC scheme may be considerably enhanced. Error detection is an integral part of an incremental redundancy communications system, and it is shown that sequences of good error correction codes, suitable for use in incremental redundancy communications systems, may be obtained using Constructions X and XX. Examples are given and their performances presented in comparison to conventional CRC schemes.
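
    The thesis's own distance-computation algorithms are far more sophisticated, but the following brute-force Python sketch (an assumed illustration) shows the basic task: computing the minimum Hamming distance of a binary cyclic code from its generator polynomial, here the [15, 7] BCH code with g(x) = x^8 + x^7 + x^6 + x^4 + 1.

    from itertools import product

    n = 15
    g = [1, 0, 0, 0, 1, 0, 1, 1, 1]      # coefficients of g(x), lowest degree first
    k = n - (len(g) - 1)                  # code dimension, here 7

    def encode(msg):
        """Multiply the message polynomial by g(x) over GF(2)."""
        word = [0] * n
        for i, m in enumerate(msg):
            if m:
                for j, c in enumerate(g):
                    word[i + j] ^= c
        return word

    # For a linear code the minimum distance equals the smallest nonzero codeword weight.
    d_min = min(sum(encode(msg)) for msg in product([0, 1], repeat=k) if any(msg))
    print("minimum Hamming distance:", d_min)    # expected: 5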

    On Decoding of Quadratic Residue Codes

    A binary quadratic residue (QR) code of length n is an (n, (n+1)/2) cyclic code over GF(2) whose generator polynomial g(x) has its roots in an extension field GF(2^m). The length of this code is a prime number of the form n = 8l + 1 or n = 8l - 1, where l is some integer. The generator polynomial is defined by g(x) = ∏_{i ∈ Q_n} (x - β^i), where β is a primitive n-th root of unity in GF(2^m), m is the smallest positive integer such that n | 2^m - 1, and Q_n is the set of all nonzero quadratic residues modulo n, given by Q_n = {i | i ≡ j^2 mod n for 1 ≤ j ≤ n-1}. Algebraic approaches to the decoding of QR codes were studied in [2], [3], [4], [5], [6] and [13]. In this thesis, some new, more general properties are found for the syndromes of the subclass of binary QR codes of length n = 8l + 1 or n = 8l - 1. A new algebraic decoding algorithm for the (41, 21, 9) binary QR code is presented, which makes use of the unknown syndrome S_3; determining S_3 is a necessary condition for decoding the (41, 21, 9) QR code.
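
    The following Python sketch (an assumed illustration, not the decoder from the thesis) computes the defining data of a binary QR code for n = 41, the length used above: the quadratic residue set Q_n, the extension degree m, and the code parameters (n, (n+1)/2).

    n = 41

    # Nonzero quadratic residues modulo n: Q_n = { j^2 mod n : 1 <= j <= n-1 }.
    Q = sorted({(j * j) % n for j in range(1, n)})
    print("|Q_n| =", len(Q), "so deg g(x) =", len(Q))      # (n-1)/2 = 20

    # Binary QR codes require 2 to be a quadratic residue mod n, i.e. n = 8l +/- 1.
    print("2 is a QR mod n:", 2 in Q, "| n mod 8 =", n % 8)

    # m is the smallest positive integer with n | 2^m - 1; the roots beta^i live in GF(2^m).
    m = 1
    while (pow(2, m) - 1) % n != 0:
        m += 1
    print("m =", m)                                         # 20, so beta lies in GF(2^20)
    print("code parameters (n, (n+1)/2) =", (n, (n + 1) // 2))   # (41, 21)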

    Error-Correction Coding and Decoding: Bounds, Codes, Decoders, Analysis and Applications

    Coding; Communications; Engineering; Networks; Information Theory; Algorithm

    Design of tch-type sequences for communications

    This thesis deals with the design of a class of cyclic codes inspired by TCH codewords. Since TCH codes are linked to finite fields, the fundamental concepts and facts about abstract algebra, namely group theory and number theory, constitute the first part of the thesis. By exploring group geometric properties and identifying an equivalence between some operations on codes and the symmetries of the dihedral group, we were able to simplify the generation of codewords, thus saving on the necessary number of computations. Moreover, we also present an algebraic method to obtain binary generalized TCH codewords of length N = 2^k, k = 1, 2, ..., 16. By exploiting properties of the Zech logarithm as well as a group-theoretic isomorphism, we developed a method that is both faster and less complex than what was proposed before. In addition, it is valid for all relevant cases relating to the codeword length N and not only those resulting from N = p
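
    As a small illustration of the Zech logarithm machinery mentioned above (an assumed example, not the thesis method), the following Python sketch builds a Zech logarithm table for GF(2^4) from the primitive polynomial x^4 + x + 1, where Z(i) is defined by alpha^Z(i) = 1 + alpha^i.

    m, poly = 4, 0b10011          # primitive polynomial x^4 + x + 1 over GF(2)
    q = 1 << m

    # Exponential table: alpha^i represented as an integer bit mask.
    exp = [0] * (q - 1)
    x = 1
    for i in range(q - 1):
        exp[i] = x
        x <<= 1
        if x & q:                 # reduce modulo the primitive polynomial
            x ^= poly
    log = {v: i for i, v in enumerate(exp)}

    # Zech logarithms let addition be done with exponents only:
    # alpha^a + alpha^b = alpha^a * (1 + alpha^(b - a)) = alpha^(a + Z(b - a)).
    zech = {i: log.get(1 ^ exp[i]) for i in range(q - 1)}   # None where 1 + alpha^i = 0
    for i, z in sorted(zech.items()):
        print(f"Z({i:2d}) = {z}")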

    Prediction based task scheduling in distributed computing


    A STUDY OF ERASURE CORRECTING CODES

    This work focuses on erasure codes, particularly those of high performance, and the related decoding algorithms, especially those with low computational complexity. The work is composed of different pieces, but the main components are developed within the following two main themes. Ideas of message passing are applied to solve the erasures after transmission, and an efficient matrix representation of the belief propagation (BP) decoding algorithm on the BEC is introduced as the recovery algorithm. Gallager's bit-flipping algorithm is further developed into the guess and multi-guess algorithms, especially for recovering the erasures left unsolved by the recovery algorithm. A novel maximum-likelihood decoding algorithm, the In-place algorithm, is proposed with a reduced computational complexity. A further study of the marginal number of erasures correctable by the In-place algorithm determines a lower bound on the average number of correctable erasures. Following the spirit of searching for the most likely codeword given the received vector, we propose a new branch-evaluation-search-on-the-code-tree (BESOT) algorithm, which is powerful enough to approach ML performance for all linear block codes. To maximise the recovery capability of the In-place algorithm in network transmissions, we propose a product packetisation structure to contain its computational complexity; combined with this structure, the computational complexity is less than the quadratic complexity bound. We then extend this to the Rayleigh fading channel, where both errors and erasures must be solved. By concatenating an outer code, such as a BCH code, the product-packetised RS codes under the hard-decision In-place algorithm perform significantly better than soft-decision iterative algorithms on optimally designed LDPC codes.
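
    As a minimal illustration of erasure recovery on the BEC (an assumed sketch, not the In-place algorithm from this work), the following Python fragment recovers erased positions of a [7, 4] Hamming codeword by Gaussian elimination over GF(2) on the parity-check equations restricted to the erased coordinates.

    H = [   # parity-check matrix of the [7, 4] Hamming code
        [1, 0, 1, 0, 1, 0, 1],
        [0, 1, 1, 0, 0, 1, 1],
        [0, 0, 0, 1, 1, 1, 1],
    ]

    def recover(received):
        """received[i] is 0, 1 or None (erased); returns the codeword, or None if ambiguous."""
        word = list(received)
        erased = [i for i, v in enumerate(word) if v is None]
        # One linear equation per parity check, in the erased symbols only.
        rows = [([row[i] for i in erased],
                 sum(row[i] * word[i] for i in range(len(word)) if word[i] is not None) % 2)
                for row in H]
        # Gaussian elimination over GF(2).
        solved = 0
        for col in range(len(erased)):
            pivot = next((r for r in range(solved, len(rows)) if rows[r][0][col]), None)
            if pivot is None:
                return None            # this erasure pattern is not uniquely solvable
            rows[solved], rows[pivot] = rows[pivot], rows[solved]
            for r in range(len(rows)):
                if r != solved and rows[r][0][col]:
                    rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[solved][0])],
                               rows[r][1] ^ rows[solved][1])
            solved += 1
        for idx, (_, rhs) in zip(erased, rows):
            word[idx] = rhs
        return word

    print(recover([1, 0, None, 1, None, 1, 0]))   # two erasures (d = 3): [1, 0, 1, 1, 0, 1, 0]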