
    Complexity Analysis of Reed-Solomon Decoding over GF(2^m) Without Using Syndromes

    For the majority of applications of Reed-Solomon (RS) codes, hard-decision decoding is based on syndromes. Recently, there has been renewed interest in decoding RS codes without using syndromes. In this paper, we investigate the complexity of syndromeless decoding for RS codes and compare it to that of syndrome-based decoding. Aiming to provide guidelines for practical applications, our complexity analysis differs in several aspects from existing asymptotic complexity analyses, which are typically based on multiplicative fast Fourier transform (FFT) techniques and usually stated in big-O notation. First, we focus on RS codes over characteristic-2 fields, over which some multiplicative FFT techniques are not applicable. Second, due to the moderate block lengths of RS codes in practice, our analysis is complete: all terms in the complexities are accounted for. Finally, in addition to fast implementations using additive FFT techniques, we also consider direct implementations, which are still relevant for RS codes of moderate lengths. Comparing the complexities of syndromeless and syndrome-based decoding algorithms under both direct and fast implementations, we show that syndromeless decoding algorithms have higher complexities than syndrome-based ones for high-rate RS codes, regardless of the implementation. Both errors-only and errors-and-erasures decoding are considered in this paper. We also derive tighter bounds on the complexities of fast polynomial multiplications based on Cantor's approach and the fast extended Euclidean algorithm. (Comment: 11 pages, submitted to EURASIP Journal on Wireless Communications and Networking.)
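    For orientation, the sketch below computes the syndromes that syndrome-based hard-decision decoding starts from, for an RS code over GF(2^8). The field polynomial 0x11D, the generator element 0x02, and the number of syndromes are illustrative assumptions, not parameters taken from the paper.

```python
# Hedged sketch: syndrome computation for a Reed-Solomon code over GF(2^8).
# Field polynomial and parameters are illustrative choices, not the paper's.

PRIM_POLY = 0x11D  # x^8 + x^4 + x^3 + x^2 + 1, a common choice for GF(2^8)

def gf_mul(a, b):
    """Carry-less multiplication in GF(2^8) with reduction by PRIM_POLY."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a & 0x100:
            a ^= PRIM_POLY
        b >>= 1
    return result

def poly_eval(coeffs, x):
    """Evaluate a polynomial (highest-degree coefficient first) at x via Horner's rule."""
    acc = 0
    for c in coeffs:
        acc = gf_mul(acc, x) ^ c
    return acc

def syndromes(received, num_syndromes, alpha=0x02):
    """S_j = r(alpha^j) for j = 1..2t; all zero means no detectable error."""
    out = []
    x = alpha
    for _ in range(num_syndromes):
        out.append(poly_eval(received, x))
        x = gf_mul(x, alpha)
    return out
```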

    Quasi-optimal multiplication of linear differential operators

    We show that linear differential operators with polynomial coefficients over a field of characteristic zero can be multiplied in quasi-optimal time. This answers an open question raised by van der Hoeven. (Comment: To appear in the Proceedings of the 53rd Annual IEEE Symposium on Foundations of Computer Science (FOCS'12).)
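    As a point of reference, here is a minimal sketch of the quadratic-time schoolbook product of two such operators, using the commutation rule d/dx · b(x) = b(x) · d/dx + b'(x). The dict-by-order-of-∂ encoding and the use of sympy are convenience choices, not the paper's representation, and the quasi-optimal algorithm itself is not reproduced here.

```python
# Hedged sketch: schoolbook multiplication of linear differential operators
# with polynomial coefficients. This is the naive baseline, not the paper's
# quasi-optimal algorithm.

import sympy as sp
from math import comb

x = sp.symbols('x')

def mul_operators(A, B):
    """A, B: dicts mapping the power of d/dx to a polynomial coefficient in x."""
    C = {}
    for i, a in A.items():
        for j, b in B.items():
            # d^i . b(x) = sum_k binom(i, k) * b^(k)(x) * d^(i-k)
            for k in range(i + 1):
                term = sp.expand(comb(i, k) * a * sp.diff(b, x, k))
                C[i + j - k] = sp.expand(C.get(i + j - k, 0) + term)
    return {d: c for d, c in C.items() if c != 0}

# Example: (d/dx) * (x*d/dx) = x*d^2/dx^2 + d/dx
print(mul_operators({1: sp.Integer(1)}, {1: x}))
```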

    Fast In-place Algorithms for Polynomial Operations: Division, Evaluation, Interpolation

    We consider space-saving versions of several important operations on univariate polynomials, namely power series inversion and division, division with remainder, multi-point evaluation, and interpolation. Now-classical results show that such problems can be solved in (nearly) the same asymptotic time as fast polynomial multiplication. However, these reductions, even when applied to an in-place variant of fast polynomial multiplication, yield algorithms which require at least a linear amount of extra space for intermediate results. We demonstrate new in-place algorithms for the aforementioned polynomial computations which require only constant extra space and achieve the same asymptotic running time as their out-of-place counterparts. We also provide a precise complexity analysis so that all constants are made explicit, parameterized by the space usage of the underlying multiplication algorithms.
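    As an illustration of one of the operations treated, the sketch below performs power series inversion with the classical out-of-place Newton iteration over an illustrative prime field, on top of a naive quadratic multiplication. It is a baseline under stated assumptions, not the paper's in-place, constant-extra-space algorithm.

```python
# Hedged sketch: classical out-of-place Newton iteration for power series
# inversion. The prime and the naive O(n^2) multiplication are illustrative
# stand-ins; polynomials are coefficient lists with index = exponent.

P = 998244353  # illustrative prime

def poly_mul_mod(a, b, n):
    """Product of a and b truncated to n coefficients, schoolbook method."""
    out = [0] * n
    for i, ai in enumerate(a):
        if i >= n:
            break
        for j, bj in enumerate(b):
            if i + j >= n:
                break
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def series_inverse(f, n):
    """Return g with f*g = 1 (mod x^n); requires f[0] != 0."""
    g = [pow(f[0], -1, P)]  # inverse to precision 1
    prec = 1
    while prec < n:
        prec = min(2 * prec, n)
        fg = poly_mul_mod(f[:prec], g, prec)
        # Newton step: g <- g*(2 - f*g) mod x^prec
        correction = [(-c) % P for c in fg]
        correction[0] = (correction[0] + 2) % P
        g = poly_mul_mod(g, correction, prec)
    return g

# Quick check: invert 1 + x to precision 5 -> 1 - x + x^2 - x^3 + x^4
print(series_inverse([1, 1], 5))
```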

    On the complexity of skew arithmetic

    In this paper, we study the complexity of several basic operations on linear differential operators with polynomial coefficients. As in the case of ordinary polynomials, we show that these complexities can be expressed in terms of the cost of multiplication. (13 pages.)

    Faster relaxed multiplication

    In previous work, we have introduced several fast algorithms for relaxed power series multiplication (also known under the name on-line multiplication) up to a given order n. The fastest currently known algorithm works over an effective base field K with sufficiently many 2^p-th roots of unity and has algebraic time complexity O(n log n exp(2 sqrt(log 2 log log n))). In this note, we will generalize this algorithm to the cases when K is replaced by an effective ring of positive characteristic, or by an effective ring of characteristic zero which is also torsion-free as a Z-module and comes with an additional algorithm for partial division by integers. We will also present an asymptotically faster algorithm for relaxed multiplication of p-adic numbers.
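    For contrast with the fast algorithms discussed, here is a minimal sketch of the naive on-line (relaxed) product: the n-th output coefficient is returned as soon as the inputs are known up to order n, at quadratic total cost. The integer coefficients are only illustrative.

```python
# Hedged sketch: naive on-line (relaxed) multiplication of two power series.
# This quadratic baseline is what the relaxed algorithms in the paper improve.

class RelaxedProduct:
    def __init__(self):
        self.a = []
        self.b = []

    def next_coefficient(self, a_n, b_n):
        """Feed a_n and b_n; return c_n = sum_{k<=n} a_k * b_{n-k}."""
        self.a.append(a_n)
        self.b.append(b_n)
        n = len(self.a) - 1
        return sum(self.a[k] * self.b[n - k] for k in range(n + 1))

# Example: (1 + x + x^2 + ...)^2 = 1 + 2x + 3x^2 + ...
prod = RelaxedProduct()
print([prod.next_coefficient(1, 1) for _ in range(5)])  # [1, 2, 3, 4, 5]
```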

    Elliptic periods for finite fields

    We construct two new families of bases for finite field extensions. Bases in the first family, the so-called elliptic bases, are not quite normal bases, but they allow very fast Frobenius exponentiation while preserving sparse multiplication formulas. Bases in the second family, the so-called normal elliptic bases, are normal bases and allow fast (quasi-linear) arithmetic. We prove that all extensions admit models of this kind.
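    To illustrate the kind of fast Frobenius these bases aim for, the sketch below verifies the standard fact that in a normal basis {a, a^2, a^4, ...} of GF(2^m) the Frobenius map (squaring) is a cyclic shift of the coordinate vector. The tiny field GF(8) and the normal element a = x^2 + 1 are illustrative choices and do not come from the paper's elliptic construction.

```python
# Hedged sketch: in a normal basis of GF(2^m), Frobenius = cyclic coordinate
# shift. The field GF(8) = GF(2)[x]/(x^3 + x + 1) and the normal element
# a = x^2 + 1 below are illustrative, not the paper's construction.

from itertools import product

M = 0b1011  # x^3 + x + 1

def mul(a, b):
    """Carry-less multiplication in GF(8), elements encoded as bit masks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:
            a ^= M
        b >>= 1
    return r

alpha = 0b101                # x^2 + 1, a normal element of GF(8)
alpha2 = mul(alpha, alpha)
alpha4 = mul(alpha2, alpha2)
basis = [alpha, alpha2, alpha4]

def coords(elem):
    """Coordinates of elem in the normal basis, by brute force over GF(2)^3."""
    for c in product((0, 1), repeat=3):
        if (c[0] * basis[0]) ^ (c[1] * basis[1]) ^ (c[2] * basis[2]) == elem:
            return c
    raise ValueError("element not spanned by the basis")

# Squaring an element cyclically shifts its normal-basis coordinates.
for e in range(8):
    c = coords(e)
    assert coords(mul(e, e)) == (c[2], c[0], c[1])
print("Frobenius = cyclic shift: verified on GF(8)")
```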

    Withdrawn paper: fast multiplication of integer matrices

    THIS PAPER HAS BEEN WITHDRAWN. We briefly discuss the error made in the original version of the withdrawn paper. Original abstract: In this paper we show that dense n×n matrices with integer coefficients of bit size ⩽ b can be multiplied in quasi-optimal time. This shows that the exponent ω_ℤ for matrix multiplication over ℤ is equal to two. Moreover, there is hope that this exponent can be observed in practice for a sufficiently good implementation.