
    A Method to determine Partial Weight Enumerator for Linear Block Codes

    In this paper we present a fast and efficient method to find the partial weight enumerator (PWE) of binary linear block codes using the error impulse technique and the Monte Carlo method. This PWE can be used to compute an upper bound on the error probability of the soft-decision maximum likelihood decoder (MLD). As an application of this method we give the partial weight enumerators and analytical performance of the shortened codes BCH(130,66), BCH(103,47) and BCH(111,55); the first code is obtained by shortening the binary primitive BCH(255,191,17) code and the two others by shortening the binary primitive BCH(127,71,19) code. The weight distributions of these three codes are, to the best of our knowledge, unknown.
    Comment: Computer Engineering and Intelligent Systems Vol 3, No.11, 201
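
    A note on how a PWE is typically used (a standard truncated union-bound sketch, not necessarily the exact bound of the paper): with A_w the number of codewords of weight w, known up to some w_max, R = k/n, and Q the Gaussian tail function, the MLD word error probability over BPSK/AWGN satisfies, approximately,

        % Truncated union bound from a partial weight enumerator (sketch);
        % the tail terms for w > w_max are dropped.
        P_e \;\lesssim\; \sum_{w=d_{\min}}^{w_{\max}}
            A_w \, Q\!\left(\sqrt{2\,w\,R\,\frac{E_b}{N_0}}\right),
        \qquad R = \frac{k}{n}.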

    Subquadratic time encodable codes beating the Gilbert-Varshamov bound

    We construct explicit algebraic geometry codes built from the Garcia-Stichtenoth function field tower that beat the Gilbert-Varshamov bound for alphabet sizes at least 192. Messages are identified with functions in certain Riemann-Roch spaces associated with divisors supported on multiple places. Encoding amounts to evaluating these functions at degree one places. By exploiting algebraic structures particular to the Garcia-Stichtenoth tower, we devise an intricate deterministic encoding algorithm with runtime exponent \omega/2 < 1.19 and randomized (unique and list) decoding algorithms with expected runtime exponent 1+\omega/2 < 2.19. Here \omega < 2.373 is the matrix multiplication exponent. If \omega = 2, as widely believed, the encoding and decoding runtimes are respectively nearly linear and nearly quadratic. Prior to this work, the encoding (resp. decoding) times of code families beating the Gilbert-Varshamov bound were quadratic (resp. cubic) or worse.
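
    As a rough illustration of "encoding by evaluation at degree one places", here is a toy Python analogue in which the Riemann-Roch space degenerates to polynomials of degree < k over a prime field and the places to field elements, i.e. a Reed-Solomon code; the actual Garcia-Stichtenoth construction is far more involved, and all names and parameters below are illustrative only.

        p = 193            # prime alphabet size (the paper needs alphabets >= 192)
        k, n = 50, 150     # message length and block length, n <= p

        def encode(msg):
            """Evaluate the message polynomial sum(msg[i] * x^i) at n points."""
            return [sum(c * pow(x, i, p) for i, c in enumerate(msg)) % p
                    for x in range(n)]

        codeword = encode(list(range(k)))   # any length-k message over GF(p)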

    On the Computing of the Minimum Distance of Linear Block Codes by Heuristic Methods

    The evaluation of the minimum distance of linear block codes remains an open problem in coding theory, and it is not easy to determine its true value by classical methods; for this reason the problem has been attacked in the literature with heuristic techniques such as genetic algorithms and local search algorithms. In this paper we propose two approaches to this hard problem. The first approach is based on genetic algorithms and yields good results compared to another work also based on genetic algorithms. The second approach is based on a new randomized algorithm which we call the Multiple Impulse Method (MIM), whose principle is to search for codewords locally around the all-zero codeword perturbed by a minimum level of noise, anticipating that the nearest nonzero codewords found will most likely contain a minimum-Hamming-weight codeword, whose weight equals the minimum distance of the linear code.
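
    A minimal Python sketch of the MIM principle as described; the soft-decision decoder is assumed given, and the trial count, impulse count and impulse magnitude are illustrative.

        import random

        def mim_estimate(decode, n, trials=1000, impulses=3, magnitude=2.5):
            """Smallest Hamming weight among nonzero codewords returned by a
            soft-decision decoder when the BPSK image of the all-zero word is
            hit by a few strong noise impulses. `decode` maps a length-n real
            vector to a codeword given as a list of bits."""
            best = n + 1
            for _ in range(trials):
                r = [1.0] * n                 # all-zero codeword, BPSK-modulated
                for pos in random.sample(range(n), impulses):
                    r[pos] -= magnitude       # impulse pushing this bit toward 1
                w = sum(decode(r))
                if 0 < w < best:
                    best = w                  # nearest nonzero codeword so far
            return best                       # estimate of the minimum distance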

    A STUDY OF LINEAR ERROR CORRECTING CODES

    Since Shannon's ground-breaking work in 1948, there have been two main development streams of channel coding in approaching the limit of communication channels, namely classical coding theory, which aims at designing codes with large minimum Hamming distance, and probabilistic coding, which places the emphasis on low-complexity probabilistic decoding using long codes built from simple constituent codes. This work presents some further investigations in these two channel coding development streams.

    Low-density parity-check (LDPC) codes form a class of capacity-approaching codes with sparse parity-check matrices and low-complexity decoders. Two novel methods of constructing algebraic binary LDPC codes are presented. These methods are based on the theory of cyclotomic cosets, idempotents and Mattson-Solomon polynomials, and are complementary to each other. In addition to some new cyclic iteratively decodable codes, the two methods generate the well-known Euclidean and projective geometry codes. Their extension to non-binary fields is shown to be straightforward. These algebraic cyclic LDPC codes converge considerably well under iterative decoding for short block lengths. It is also shown that for some of these codes, maximum likelihood performance may be achieved by a modified belief propagation decoder which uses a different subset of codewords of the dual code for each iteration.

    Following a property of the revolving-door combination generator, multi-threaded minimum Hamming distance computation algorithms are developed. Using these algorithms, the previously unknown minimum Hamming distance of the quadratic residue code for prime 199 has been evaluated. In addition, the highest minimum Hamming distance attainable by all binary cyclic codes of odd lengths from 129 to 189 has been determined, and as many as 901 new binary linear codes which have higher minimum Hamming distance than the previously best known linear codes have been found. It is shown that by exploiting the structure of circulant matrices, the number of codewords required to compute the minimum Hamming distance, and the number of codewords of a given Hamming weight, of binary double-circulant codes based on primes may be reduced. A means of independently verifying the exhaustively computed number of codewords of a given Hamming weight of these double-circulant codes is developed, and in conjunction with this it is proved that some published results are incorrect and the correct weight spectra are presented. Moreover, it is shown that it is possible to estimate the minimum Hamming distance of this family of prime-based double-circulant codes.

    It is shown that linear codes may be efficiently decoded using the incremental correlation Dorsch algorithm. By extending this algorithm, a list decoder is derived, and a novel CRC-less error detection mechanism that offers much better throughput and performance than the conventional CRC scheme is described. Using the same method it is shown that the performance of the conventional CRC scheme may be considerably enhanced. Error detection is an integral part of an incremental redundancy communications system, and it is shown that sequences of good error correction codes, suitable for use in incremental redundancy communications systems, may be obtained using Constructions X and XX. Examples are given and their performances presented in comparison to conventional CRC schemes.
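
    For the minimum-distance computations mentioned above, a single-threaded Python sketch of the underlying enumeration; the thesis replaces the plain combination generator with a revolving-door generator and distributes the work across threads, and the systematic-form assumption below is ours.

        from itertools import combinations

        def min_distance(G):
            """Exhaustive minimum distance from a systematic generator matrix
            G = [I | A] (rows as bit lists). Because G is systematic, a message
            of weight w yields a codeword of weight >= w, which justifies the
            early exit once w reaches the best weight found so far."""
            k, n = len(G), len(G[0])
            best = n
            for w in range(1, k + 1):
                if w >= best:                     # no lighter codeword can remain
                    return best
                for rows in combinations(range(k), w):
                    cw = [0] * n
                    for i in rows:                # codeword = XOR of chosen rows
                        cw = [a ^ b for a, b in zip(cw, G[i])]
                    best = min(best, sum(cw))
            return best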

    Developing Efficient Algorithms of Decoding the Systematic Quadratic Residue Code with Lookup Tables

    The lookup table methods for decoding binary systematic Quadratic Residue (QR) codes are presented in this paper. The key idea behind this decoding technique is a one-to-one mapping between the syndromes and the correctable error patterns. Such algorithms determine the error locations directly by lookup tables, without addition and multiplication over a finite field. Moreover, methods to dramatically reduce the memory requirement by shift-search decoding are utilized. Two new algorithms have been verified through a software simulation in the C language. The new approach is modular, regular and naturally suitable for System on Chip (SoC) software implementation.
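
    The syndrome-to-error-pattern mapping is easiest to see on a tiny code; a Python sketch on the Hamming (7,4) code (the paper's QR codes are longer and correct more errors, but the lookup principle is identical).

        # Parity-check matrix of the Hamming (7,4) code; column i is the
        # binary representation of i+1.
        H = [[1, 0, 1, 0, 1, 0, 1],
             [0, 1, 1, 0, 0, 1, 1],
             [0, 0, 0, 1, 1, 1, 1]]

        def syndrome(v):
            return tuple(sum(h[i] & v[i] for i in range(7)) % 2 for h in H)

        table = {}                    # one entry per correctable pattern (t = 1)
        for i in range(7):
            e = [0] * 7
            e[i] = 1
            table[syndrome(e)] = tuple(e)

        def decode(r):
            s = syndrome(r)
            if s == (0, 0, 0):
                return r              # zero syndrome: accept the word as-is
            e = table[s]              # direct lookup, no finite-field arithmetic
            return [ri ^ ei for ri, ei in zip(r, e)]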

    A STUDY OF ERASURE CORRECTING CODES

    This work focuses on erasure codes, particularly those of high performance, and the related decoding algorithms, especially those with low computational complexity. The work is composed of different pieces, but the main components are developed within the following two main themes.

    Ideas of message passing are applied to solve the erasures after transmission. An efficient matrix representation of the belief propagation (BP) decoding algorithm on the BEC is introduced as the recovery algorithm. Gallager's bit-flipping algorithm is further developed into the guess and multi-guess algorithms, especially for recovering the erasures left unsolved by the recovery algorithm. A novel maximum-likelihood decoding algorithm, the In-place algorithm, is proposed with reduced computational complexity. A further study on the marginal number of erasures correctable by the In-place algorithm determines a lower bound on the average number of correctable erasures.

    Following the spirit of searching for the most likely codeword based on the received vector, we propose a new branch-evaluation-search-on-the-code-tree (BESOT) algorithm, which is powerful enough to approach the ML performance for all linear block codes. To maximise the recovery capability of the In-place algorithm in network transmissions, we propose the product packetisation structure to reconcile the computational complexity of the In-place algorithm. Combined with the proposed product packetisation structure, the computational complexity is less than the quadratic complexity bound. We then extend this to the Rayleigh fading channel to solve both errors and erasures. By concatenating an outer code, such as a BCH code, the product-packetised RS codes achieve hard-decision In-place algorithm performance significantly better than that of soft-decision iterative algorithms on optimally designed LDPC codes.
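
    A Python sketch of the message-passing (peeling) recovery of erasures on the BEC referred to above; the thesis works with an efficient matrix representation of this BP decoder and follows it with guess/multi-guess and ML (In-place) stages, which are omitted here.

        def peel(H, r):
            """Repeatedly find a parity check with exactly one erased position
            and solve it. H is a list of parity-check rows (bit lists), r the
            received word with None marking erasures."""
            r = list(r)
            progress = True
            while progress:
                progress = False
                for row in H:
                    unknown = [i for i, h in enumerate(row) if h and r[i] is None]
                    if len(unknown) == 1:
                        # the check sums to 0 mod 2, so the erased bit equals
                        # the XOR of the known bits participating in the check
                        r[unknown[0]] = sum(r[i] for i, h in enumerate(row)
                                            if h and r[i] is not None) % 2
                        progress = True
            return r    # any remaining None means BP stalled on a stopping set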

    Error-Correction Coding and Decoding: Bounds, Codes, Decoders, Analysis and Applications

    Coding; Communications; Engineering; Networks; Information Theory; Algorithm

    An efficient combination between Berlekamp-Massey and Hartmann Rudolph algorithms to decode BCH codes

    In digital communication and storage systems, data are exchanged over a communication channel which is not completely reliable. Therefore, detection and correction of possible errors are required, by adding redundant bits to the information data. Several algebraic and heuristic decoders have been designed to detect and correct errors. The Hartmann Rudolph (HR) algorithm decodes a sequence symbol by symbol. Since the HR algorithm has a high complexity, we suggest using it partially, together with the algebraic hard-decision Berlekamp-Massey (BM) decoder. In this work, we propose a concatenation of the Partial Hartmann Rudolph (PHR) algorithm and the Berlekamp-Massey decoder to decode BCH (Bose-Chaudhuri-Hocquenghem) codes. Very satisfying results are obtained: for example, we used only 0.54% of the dual space size for the BCH(63,39,9) code while maintaining very good decoding quality. To judge our results, we compare them with other decoders.
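
    A hedged Python sketch of the PHR-then-BM pipeline: the Hartmann-Rudolph symbol rule, as usually stated, decides bit m = 0 when a sum over (a subset of) dual codewords of products of phi_l = tanh(LLR_l/2) is positive; this is our reading of the rule, not necessarily the paper's exact formulation, and the BM decoder is assumed given.

        import math

        def phr_metric(phi, duals, m):
            """Hartmann-Rudolph symbol metric for position m, using only a
            subset `duals` of dual-code codewords (the "partial" in PHR)."""
            total = 0.0
            for cw in duals:
                term = 1.0
                for l, bit in enumerate(cw):
                    if bit ^ (1 if l == m else 0):
                        term *= phi[l]
                total += term
            return total

        def phr_then_bm(llrs, duals, bm_decode):
            """PHR produces a cleaned hard-decision word, then an algebraic
            Berlekamp-Massey decoder (assumed given) finishes the job."""
            phi = [math.tanh(l / 2.0) for l in llrs]
            hard = [0 if phr_metric(phi, duals, m) > 0 else 1
                    for m in range(len(llrs))]
            return bm_decode(hard)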

    Self-Dual Codes

    Self-dual codes are important because many of the best codes known are of this type and they have a rich mathematical theory. Topics covered in this survey include codes over F_2, F_3, F_4, F_q, Z_4, Z_m, shadow codes, weight enumerators, the Gleason-Pierce theorem, invariant theory, Gleason theorems, bounds, mass formulae, enumeration, extremal codes, and open problems. There is a comprehensive bibliography.
    Comment: 136 pages
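
    A quick Python sanity check of the defining property on the smallest interesting example, the extended Hamming [8,4,4] code e8: a binary code is self-dual when k = n/2 and its generator rows are pairwise orthogonal mod 2 (including with themselves).

        G = [[1, 0, 0, 0, 0, 1, 1, 1],
             [0, 1, 0, 0, 1, 0, 1, 1],
             [0, 0, 1, 0, 1, 1, 0, 1],
             [0, 0, 0, 1, 1, 1, 1, 0]]

        def is_self_dual(G):
            n, k = len(G[0]), len(G)
            orthogonal = all(sum(a & b for a, b in zip(u, v)) % 2 == 0
                             for u in G for v in G)
            return orthogonal and 2 * k == n

        print(is_self_dual(G))   # True: e8, the smallest Type II code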