Quadratic Residue Code
The algebraic decoding of binary quadratic residue codes can be performed using the Peterson or the Berlekamp-Massey algorithm once certain unknown syndromes are determined or eliminated. The technique of determining unknown syndromes is applied to the nonbinary case to decode the expurgated ternary quadratic residue code of length 23.
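As a concrete illustration of the Berlekamp-Massey step mentioned above, here is a minimal sketch over GF(2). A real quadratic residue decoder runs this over an extension field on the code's syndromes; the function name and the test sequence below are illustrative assumptions, not the paper's ternary decoder.

```python
# Minimal Berlekamp-Massey over GF(2): given a bit sequence (e.g. syndromes),
# find the shortest LFSR that generates it. The connection polynomial plays
# the role of the error-locator polynomial in algebraic decoding.

def berlekamp_massey_gf2(s):
    """Return (C, L): connection polynomial C (bit list, C[0] = 1) and LFSR length L."""
    n = len(s)
    C = [1] + [0] * n   # current connection polynomial
    B = [1] + [0] * n   # copy from before the last length change
    L, m = 0, 1         # current LFSR length, steps since last length change
    for i in range(n):
        # discrepancy: s[i] + sum_{j=1}^{L} C[j] * s[i-j]  (mod 2)
        d = s[i]
        for j in range(1, L + 1):
            d ^= C[j] & s[i - j]
        if d == 0:
            m += 1
        elif 2 * L <= i:
            # length change: C <- C + x^m * B, then remember old C
            T = C[:]
            for j in range(n - m + 1):
                C[j + m] ^= B[j]
            L, B, m = i + 1 - L, T, 1
        else:
            # same length: C <- C + x^m * B
            for j in range(n - m + 1):
                C[j + m] ^= B[j]
            m += 1
    return C[:L + 1], L
```

For the period-3 sequence 1,1,0,1,1,0 (satisfying s[i] = s[i-1] XOR s[i-2]) the routine recovers the length-2 recurrence with connection polynomial 1 + x + x^2.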
On formulas for decoding binary cyclic codes
We address the problem of the algebraic decoding of any cyclic code up to
the true minimum distance. For this, we use the classical formulation of the
problem, which is to find the error-locator polynomial in terms of the
syndromes of the received word. This is usually done with the
Berlekamp-Massey algorithm in the case of BCH codes and related codes, but
in the general case there is no generic algorithm for decoding cyclic codes.
Even in the case of quadratic residue codes, which are good codes with a
very strong algebraic structure, no general decoding algorithm is available.
For this particular case of quadratic residue codes, several authors have
worked out, by hand, formulas for the coefficients of the locator polynomial
in terms of the syndromes, using the Newton identities. This work has to be
redone for each particular quadratic residue code, and becomes more and more
difficult as the length grows. Furthermore, it is error-prone. We propose to
automate these computations using elimination theory and Gröbner bases. We
prove that, by computing appropriate Gröbner bases, one automatically
recovers formulas for the coefficients of the locator polynomial in terms of
the syndromes.
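The Newton identities the authors automate can be made concrete in a small numeric sketch: over a field they determine the elementary symmetric functions e_k (the locator coefficients, up to sign convention) from the power sums p_k (the syndromes). The pure-Python illustration below assumes a toy prime field and solves the identities numerically, whereas the paper derives closed-form formulas symbolically with Gröbner bases.

```python
# Newton's identities over GF(q), q prime (an illustrative assumption):
#   p_k - e_1 p_{k-1} + e_2 p_{k-2} - ... + (-1)^k k e_k = 0
# solved successively for e_1, ..., e_t given p_1, ..., p_t.

def locator_from_power_sums(p_sums, t, q):
    """Given p_sums[k-1] = p_k mod q for k = 1..t, return [e_1, ..., e_t] mod q."""
    e = [0] * (t + 1)
    e[0] = 1                      # e_0 = 1 by convention
    for k in range(1, t + 1):
        s, sign = 0, 1
        for i in range(k):        # sum_{i=0}^{k-1} (-1)^i e_i p_{k-i}
            s = (s + sign * e[i] * p_sums[k - i - 1]) % q
            sign = -sign
        # Newton's identity: (sum above) + (-1)^k k e_k = 0
        # => e_k = (-1)^(k-1) * s / k, dividing mod q via Fermat's little theorem
        e[k] = (-sign * s * pow(k, q - 2, q)) % q
    return e[1:]
```

For the roots {2, 3} in GF(7) the power sums are p_1 = 5 and p_2 = 2^2 + 3^2 = 6, and the routine recovers e_1 = 2 + 3 = 5 and e_2 = 2 * 3 = 6.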
On the Decoding Complexity of Cyclic Codes Up to the BCH Bound
The standard algebraic decoding algorithm for cyclic codes up to the
BCH bound is very efficient and practical for relatively small code lengths,
while it becomes impractical for large lengths because of its computational
complexity. The aim of this paper is to show how to make this algebraic
decoding computationally more efficient: in the case of binary codes, for
example, the complexity of both the syndrome computation and the error
location is substantially reduced.
Comment: accepted for publication in Proceedings ISIT 2011. IEEE copyright.
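For reference, the baseline syndrome computation that such improvements target evaluates the received polynomial at successive powers of a primitive element by Horner's rule, at O(n) field operations per syndrome. The sketch below is a hedged illustration over a prime field with toy parameters (p = 929, alpha = 3), not the paper's binary-code setting.

```python
# Syndromes of a received word r (coefficient list, lowest degree first)
# over the prime field GF(p): S_i = r(alpha^i) for i = 1..num, each
# evaluated by Horner's rule. Parameters below are illustrative only.

def syndromes(r, alpha, num, p):
    """Return [r(alpha^1), ..., r(alpha^num)] mod p via Horner's rule."""
    out = []
    for i in range(1, num + 1):
        x = pow(alpha, i, p)
        acc = 0
        for c in reversed(r):          # Horner: acc = acc * x + c
            acc = (acc * x + c) % p
        out.append(acc)
    return out
```

For a codeword of the corresponding code all syndromes are zero; any nonzero syndrome certifies that errors occurred, which is what the locator computation then exploits.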
An efficient combination between Berlekamp-Massey and Hartmann Rudolph algorithms to decode BCH codes
In digital communication and storage systems, data are exchanged over a communication channel that is not completely reliable. Therefore, the detection and correction of possible errors is required, which is achieved by adding redundant bits to the information data. Several algebraic and heuristic decoders have been designed to detect and correct errors. The Hartmann-Rudolph (HR) algorithm decodes a sequence symbol by symbol. Since the HR algorithm has a high complexity, we suggest using it partially, together with the algebraic hard-decision Berlekamp-Massey (BM) decoder.
In this work, we propose a concatenation of the Partial Hartmann-Rudolph (PHR) algorithm and the Berlekamp-Massey decoder to decode BCH (Bose-Chaudhuri-Hocquenghem) codes. Very satisfying results are obtained. For example, we used only 0.54% of the dual space size for the BCH code (63,39,9) while maintaining very good decoding quality. To assess our results, we compare them with those of other decoders.
Decoding Generalized Reed-Solomon Codes and Its Application to RLCE Encryption Schemes
This paper compares the efficiency of various algorithms for implementing
the quantum-resistant public key encryption scheme RLCE on 64-bit CPUs. By
optimizing various algorithms for polynomial and matrix operations over
finite fields, we obtained several interesting (or even surprising) results.
For example, it is well known (e.g., Moenck 1976 \cite{moenck1976practical})
that Karatsuba's algorithm outperforms the classical polynomial
multiplication algorithm from degree 15 upward (in practice, Karatsuba's
algorithm only outperforms the classical algorithm from degree 35 upward).
Our experiments show that the 64-bit optimized Karatsuba algorithm only
outperforms the 64-bit optimized classical polynomial multiplication
algorithm for polynomials of degree 115 and above over the finite field in
question. The second interesting (surprising) result shows that the 64-bit
optimized Chien search algorithm outperforms all other 64-bit optimized
polynomial root-finding algorithms, such as BTA and FFT, for polynomials of
all degrees over the finite field in question. The third interesting
(surprising) result shows that the 64-bit optimized Strassen matrix
multiplication algorithm only outperforms the 64-bit optimized classical
matrix multiplication algorithm for matrices of dimension 750 and above over
the finite field in question. It should be noted that the existing
literature and practice recommend Strassen's matrix multiplication algorithm
for matrices of dimension 40 and above. All our experiments were done on a
64-bit MacBook Pro with an i7 CPU and single-threaded C code. The reported
results should be applicable to CPU architectures of 64 bits or more; for
CPUs of 32 bits or fewer, these results may not be applicable. The source
code and library for the algorithms covered in this paper are available at
http://quantumca.org/
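The Karatsuba-versus-schoolbook crossover discussed above can be reproduced in miniature. The sketch below multiplies polynomials with integer coefficients rather than the paper's finite-field word arithmetic, and the base-case threshold of 32 is an illustrative assumption to be tuned per platform, exactly the kind of tuning the measured crossover at degree 115 reflects.

```python
# Schoolbook vs. Karatsuba polynomial multiplication on coefficient lists
# (lowest degree first). Karatsuba trades one multiplication of half-size
# polynomials for a few additions, winning only past a platform-dependent
# crossover degree.

def schoolbook(a, b):
    """Classical O(n^2) polynomial multiplication."""
    if not a or not b:
        return []
    res = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            res[i + j] += x * y
    return res

def poly_add(a, b):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def poly_sub(a, b):
    return poly_add(a, [-x for x in b])

def karatsuba(a, b):
    """Karatsuba multiplication; threshold 32 is an illustrative tuning choice."""
    if not a or not b:
        return []
    n = max(len(a), len(b))
    if n <= 32:
        return schoolbook(a, b)
    m = n // 2
    a0, a1 = a[:m], a[m:]              # a = a0 + x^m * a1
    b0, b1 = b[:m], b[m:]
    z0 = karatsuba(a0, b0)
    z2 = karatsuba(a1, b1)
    # middle term via one multiplication: (a0+a1)(b0+b1) - z0 - z2
    z1 = poly_sub(karatsuba(poly_add(a0, a1), poly_add(b0, b1)),
                  poly_add(z0, z2))
    res = [0] * (len(a) + len(b) - 1)
    for i, c in enumerate(z0):
        res[i] += c
    for i, c in enumerate(z1):
        res[i + m] += c
    for i, c in enumerate(z2):
        res[i + 2 * m] += c
    return res
```

Timing both routines over a range of degrees on the target machine locates the actual crossover, which is the experiment the paper performs for 64-bit-optimized finite-field arithmetic.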
Efficient Decoding of Gabidulin Codes over Galois Rings
This paper presents the first decoding algorithm for Gabidulin codes over
Galois rings with provable quadratic complexity. The new method consists of two
steps: (1) solving a syndrome-based key equation to obtain the annihilator
polynomial of the error and therefore the column space of the error, (2)
solving a key equation based on the received word in order to reconstruct the
error vector. This two-step approach became necessary since standard
solutions such as the Euclidean algorithm do not work properly over rings.
Complexity Analysis of Reed-Solomon Decoding over GF(2^m) Without Using Syndromes
For the majority of the applications of Reed-Solomon (RS) codes, hard
decision decoding is based on syndromes. Recently, there has been renewed
interest in decoding RS codes without using syndromes. In this paper, we
investigate the complexity of syndromeless decoding for RS codes, and compare
it to that of syndrome-based decoding. Aiming to provide guidelines to
practical applications, our complexity analysis differs in several aspects from
existing asymptotic complexity analysis, which is typically based on
multiplicative fast Fourier transform (FFT) techniques and is usually in big O
notation. First, we focus on RS codes over characteristic-2 fields, over which
some multiplicative FFT techniques are not applicable. Secondly, due to
moderate block lengths of RS codes in practice, our analysis is complete since
all terms in the complexities are accounted for. Finally, in addition to fast
implementation using additive FFT techniques, we also consider direct
implementation, which is still relevant for RS codes with moderate lengths.
Comparing the complexities of both syndromeless and syndrome-based decoding
algorithms based on direct and fast implementations, we show that syndromeless
decoding algorithms have higher complexities than syndrome-based ones for high
rate RS codes regardless of the implementation. Both errors-only and
errors-and-erasures decoding are considered in this paper. We also derive
tighter bounds on the complexities of fast polynomial multiplications based on
Cantor's approach and the fast extended Euclidean algorithm.
Comment: 11 pages, submitted to EURASIP Journal on Wireless Communications and Networking.