Complexity Analysis of Reed-Solomon Decoding over GF(2^m) Without Using Syndromes
For the majority of the applications of Reed-Solomon (RS) codes, hard
decision decoding is based on syndromes. Recently, there has been renewed
interest in decoding RS codes without using syndromes. In this paper, we
investigate the complexity of syndromeless decoding for RS codes, and compare
it to that of syndrome-based decoding. Aiming to provide guidelines for
practical applications, our complexity analysis differs in several respects from
existing asymptotic analyses, which are typically based on
multiplicative fast Fourier transform (FFT) techniques and stated in big-O
notation. First, we focus on RS codes over characteristic-2 fields, to which
some multiplicative FFT techniques do not apply. Second, because
RS codes in practice have moderate block lengths, our analysis is complete:
all terms in the complexities are accounted for. Finally, in addition to fast
implementation using additive FFT techniques, we also consider direct
implementation, which is still relevant for RS codes with moderate lengths.
Comparing the complexities of both syndromeless and syndrome-based decoding
algorithms based on direct and fast implementations, we show that syndromeless
decoding algorithms have higher complexities than syndrome-based ones for
high-rate RS codes regardless of the implementation. Both errors-only and
errors-and-erasures decoding are considered in this paper. We also derive
tighter bounds on the complexities of fast polynomial multiplications based on
Cantor's approach and the fast extended Euclidean algorithm.
Comment: 11 pages, submitted to EURASIP Journal on Wireless Communications and Networking.
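Syndrome-based hard-decision decoding, the baseline this paper compares against, begins by evaluating the received polynomial at consecutive powers of a primitive element. Below is a minimal sketch over GF(2^4); the field, primitive polynomial, code parameters, and toy message are assumptions chosen for the demo, not parameters from the paper.

```python
# Illustrative sketch: syndrome computation for a narrow-sense RS code
# over GF(2^4). All parameters here are made up for the demo.

PRIM = 0b10011   # primitive polynomial x^4 + x + 1
ALPHA = 0b0010   # the element x, primitive for this polynomial

def gf_mul(a, b):
    """Multiply in GF(2^4): carry-less product reduced by PRIM."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= PRIM
        b >>= 1
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def poly_mul(p, q):
    """Polynomial product over GF(2^4); index 0 = highest-degree term."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] ^= gf_mul(a, b)
    return out

def poly_eval(coeffs, x):
    """Horner evaluation, highest-degree coefficient first."""
    acc = 0
    for c in coeffs:
        acc = gf_mul(acc, x) ^ c
    return acc

def syndromes(received, num):
    """S_j = r(alpha^j), j = 1..num; all zero iff the received word is a
    codeword of the RS code with roots alpha^1 .. alpha^num."""
    return [poly_eval(received, gf_pow(ALPHA, j)) for j in range(1, num + 1)]

# Encode by multiplying a message by g(x) = prod_{j=1}^{2t} (x - alpha^j).
t = 2
g = [1]
for j in range(1, 2 * t + 1):
    g = poly_mul(g, [1, gf_pow(ALPHA, j)])  # -alpha^j == alpha^j in char 2
codeword = poly_mul([1, 7, 3], g)

assert syndromes(codeword, 2 * t) == [0] * (2 * t)  # clean codeword
corrupted = list(codeword)
corrupted[2] ^= 1                                   # one symbol error
assert any(syndromes(corrupted, 2 * t))             # nonzero syndromes flag it
```

Syndromeless decoding skips this evaluation step entirely, which is exactly why its cost must be compared term by term rather than asymptotically.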
On the VLSI design of a pipeline Reed-Solomon decoder using systolic arrays
A new very large scale integration (VLSI) design of a pipeline Reed-Solomon decoder is presented. Following a detailed comparison of their VLSI implementations, the transform decoding technique used in a previous article is replaced by a time-domain algorithm. A new architecture that implements the time-domain algorithm permits efficient pipeline processing with reduced circuitry. Erasure correction capability is also incorporated with little additional complexity. By using a multiplexing technique, a new implementation of Euclid's algorithm maintains the throughput rate with less circuitry. These improvements result in both enhanced capability and a significant reduction in silicon area.
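The Euclid's-algorithm stage such decoders implement solves the key equation by running the extended Euclidean algorithm on x^{2t} and the syndrome polynomial, stopping once the remainder degree falls below t. A structural sketch over GF(2) follows; real decoders work over GF(2^m), polynomials here are packed into Python ints (bit i = coefficient of x^i), and all names and inputs are illustrative.

```python
# Illustrative sketch of the stop-at-half-degree extended Euclidean step
# used to solve the key equation. GF(2) coefficients only; inputs made up.

def deg(p):
    return p.bit_length() - 1  # deg(0) == -1

def poly_divmod(a, b):
    """Quotient and remainder of a / b in GF(2)[x]."""
    q = 0
    while deg(a) >= deg(b):
        shift = deg(a) - deg(b)
        q ^= 1 << shift
        a ^= b << shift
    return q, a

def gf2x_mul(a, b):
    """Carry-less (GF(2)[x]) multiplication."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def solve_key_equation(a, b, stop_deg):
    """Extended Euclid on (a, b), stopped early: returns (u, r) with
    r == u * b (mod a) and deg(r) < stop_deg. In a decoder, u plays the
    role of the error locator and r of the error evaluator."""
    r0, r1 = a, b
    u0, u1 = 0, 1
    while deg(r1) >= stop_deg:
        q, rem = poly_divmod(r0, r1)
        r0, r1 = r1, rem
        u0, u1 = u1, u0 ^ gf2x_mul(q, u1)
    return u1, r1

u, r = solve_key_equation(1 << 4, 0b1011, 2)   # a = x^4, b = x^3 + x + 1
assert deg(r) < 2
# invariant the key equation requires: r == u * b  (mod a)
assert poly_divmod(gf2x_mul(u, 0b1011) ^ r, 1 << 4)[1] == 0
```

The multiplexing described in the abstract shares one such divide-and-update datapath across iterations, which is why the early-stopping structure matters for throughput.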
Study of application of practical performance criteria for the implementation of efficient error-reduction coding: Final report
Criteria for implementation of efficient error-reduction coding.
Sub-quadratic Decoding of One-point Hermitian Codes
We present the first two sub-quadratic complexity decoding algorithms for
one-point Hermitian codes. The first is based on a fast realisation of the
Guruswami-Sudan algorithm by using state-of-the-art algorithms from computer
algebra for polynomial-ring matrix minimisation. The second is a Power decoding
algorithm: an extension of classical key equation decoding which gives a
probabilistic decoding algorithm up to the Sudan radius. We show how the
resulting key equations can be solved by the same methods from computer
algebra, yielding similar asymptotic complexities.
Comment: This version includes simulation results, improved complexity results, and a number of reviewer corrections. 20 pages.
A Rank-Metric Approach to Error Control in Random Network Coding
The problem of error control in random linear network coding is addressed
from a matrix perspective that is closely related to the subspace perspective
of Kötter and Kschischang. A large class of constant-dimension subspace codes
is investigated. It is shown that codes in this class can be easily constructed
from rank-metric codes, while preserving their distance properties. Moreover,
it is shown that minimum distance decoding of such subspace codes can be
reformulated as a generalized decoding problem for rank-metric codes where
partial information about the error is available. This partial information may
be in the form of erasures (knowledge of an error location but not its value)
and deviations (knowledge of an error value but not its location). Taking
erasures and deviations into account (when they occur) strictly increases the
error correction capability of a code: if $\mu$ erasures and $\delta$
deviations occur, then errors of rank $t$ can always be corrected provided that
$2t \leq d - 1 + \mu + \delta$, where $d$ is the minimum rank distance of the
code. For Gabidulin codes, an important family of maximum rank distance codes,
an efficient decoding algorithm is proposed that can properly exploit erasures
and deviations. In a network coding application where $n$ packets of length $M$
over $\mathbb{F}_q$ are transmitted, the complexity of the decoding algorithm is given
by $O(dM)$ operations in an extension field $\mathbb{F}_{q^m}$.
Comment: Minor corrections; 42 pages, to be published in the IEEE Transactions on Information Theory.