6,916 research outputs found

    Algebraic Signal Processing Theory: Cooley-Tukey Type Algorithms for Polynomial Transforms Based on Induction

    A polynomial transform is the multiplication of an input vector $x \in \mathbb{C}^n$ by a matrix $\mathcal{P}_{b,\alpha} \in \mathbb{C}^{n\times n}$, whose $(k,\ell)$-th element is defined as $p_\ell(\alpha_k)$ for polynomials $p_\ell(x) \in \mathbb{C}[x]$ from a list $b = \{p_0(x), \dots, p_{n-1}(x)\}$ and sample points $\alpha_k \in \mathbb{C}$ from a list $\alpha = \{\alpha_0, \dots, \alpha_{n-1}\}$. Such transforms find applications in the areas of signal processing, data compression, and function interpolation. Important examples include the discrete Fourier and cosine transforms. In this paper we introduce a novel technique to derive fast algorithms for polynomial transforms. The technique uses the relationship between polynomial transforms and the representation theory of polynomial algebras. Specifically, we derive algorithms by decomposing the regular modules of these algebras as a stepwise induction. As an application, we derive novel $O(n\log n)$ general-radix algorithms for the discrete Fourier transform and the discrete cosine transform of type 4. Comment: 19 pages. Submitted to SIAM Journal on Matrix Analysis and Applications.
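
    As a concrete reference (not taken from the paper), the short numpy sketch below builds the matrix $\mathcal{P}_{b,\alpha}$ from callables for the $p_\ell$ and a list of sample points, and checks that choosing the monomial basis $p_\ell(x) = x^\ell$ with $\alpha_k = e^{-2\pi i k/n}$ recovers the ordinary DFT matrix; the function name `polynomial_transform` and the sign convention are illustrative choices, not the paper's.

```python
# Illustrative sketch, not from the paper: build P_{b,alpha}[k, l] = p_l(alpha_k)
# and check that the monomial list p_l(x) = x^l with alpha_k = exp(-2*pi*i*k/n)
# recovers the ordinary DFT matrix (up to the usual sign convention).
import numpy as np

def polynomial_transform(polys, alphas):
    """Matrix whose (k, l) entry is polys[l] evaluated at alphas[k]."""
    n = len(alphas)
    return np.array([[polys[l](alphas[k]) for l in range(n)] for k in range(n)])

n = 8
monomials = [lambda x, l=l: x ** l for l in range(n)]       # p_l(x) = x^l
roots = np.exp(-2j * np.pi * np.arange(n) / n)              # alpha_k = w^k
P = polynomial_transform(monomials, roots)

x = np.random.rand(n)
assert np.allclose(P @ x, np.fft.fft(x))                    # this instance is the DFT
```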

    On Polynomial Multiplication in Chebyshev Basis

    In a recent paper, Lima, Panario and Wang have provided a new method to multiply polynomials in Chebyshev basis which aims at reducing the total number of multiplications when polynomials have small degree. Their idea is to use Karatsuba's multiplication scheme to improve upon the naive method, but without being able to get rid of its quadratic complexity. In this paper, we extend their result by providing a reduction scheme which allows one to multiply polynomials in Chebyshev basis by using algorithms from the monomial basis case, and therefore to obtain the same asymptotic complexity estimate. Our reduction allows the use of any of these algorithms without converting the input polynomials to the monomial basis, which therefore provides a more direct reduction scheme than the one using conversions. We also demonstrate that our reduction is efficient in practice, and even outperforms the best known algorithm for Chebyshev basis when polynomials have large degree. Finally, we demonstrate a linear-time equivalence between the polynomial multiplication problem under the monomial basis and under the Chebyshev basis.
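
    One standard way to realize such a reduction, sketched below in Python, uses the substitution $x = (z + z^{-1})/2$, under which $T_k(x) = (z^k + z^{-k})/2$: a degree-$d$ Chebyshev expansion becomes a symmetric monomial-basis polynomial of degree $2d$, any monomial-basis multiplication routine (here simply `numpy.convolve`) computes the product, and the Chebyshev coefficients of the result are read back off. The scaling and layout are illustrative and need not coincide with the reduction actually used in the paper.

```python
# Hedged sketch: reduce multiplication in Chebyshev basis to one monomial-basis
# product via T_k((z + 1/z)/2) = (z^k + z^{-k})/2. Scaling and layout are
# illustrative choices and may differ from the paper's reduction.
import numpy as np

def cheb_to_symmetric(a):
    """Monomial coefficients of z^d * 2*p((z + 1/z)/2) for Chebyshev coeffs a_0..a_d."""
    d = len(a) - 1
    s = np.zeros(2 * d + 1)
    s[d] = 2 * a[0]
    for k in range(1, d + 1):
        s[d - k] += a[k]
        s[d + k] += a[k]
    return s

def cheb_multiply(a, b):
    """Chebyshev coefficients of p*q from those of p and q."""
    da, db = len(a) - 1, len(b) - 1
    # any monomial-basis multiplication algorithm can replace np.convolve here
    prod = np.convolve(cheb_to_symmetric(a), cheb_to_symmetric(b))
    D = da + db
    c = np.empty(D + 1)
    c[0] = prod[D] / 4          # constant term carries an extra factor of 2
    c[1:] = prod[D + 1:] / 2    # symmetric halves each carry one copy of c_k
    return c

# Quick check: T_1 * T_1 = (T_0 + T_2) / 2, i.e. coefficients [0.5, 0, 0.5]
print(cheb_multiply([0, 1], [0, 1]))
```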

    Algebraic Signal Processing Theory: Cooley-Tukey Type Algorithms for DCTs and DSTs

    This paper presents a systematic methodology based on the algebraic theory of signal processing to classify and derive fast algorithms for linear transforms. Instead of manipulating the entries of transform matrices, our approach derives the algorithms by stepwise decomposition of the associated signal models, or polynomial algebras. This decomposition is based on two generic methods or algebraic principles that generalize the well-known Cooley-Tukey FFT and make the algorithms' derivations concise and transparent. Application to the 16 discrete cosine and sine transforms yields a large class of fast algorithms, many of which have not been found before. Comment: 31 pages, more information at http://www.ece.cmu.edu/~smar
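
    For orientation, the sketch below is the textbook radix-2 Cooley-Tukey recursion for the DFT, i.e. the classical special case that the paper's algebraic decompositions generalize; it is not one of the DCT/DST algorithms derived there, and the function name `fft` is just a placeholder.

```python
# Reference only: the classical radix-2 Cooley-Tukey recursion that the paper's
# algebraic decompositions generalize (textbook DFT case, not a DCT/DST algorithm).
import cmath

def fft(x):
    """Radix-2 decimation-in-time FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])                  # subproblem on even-indexed samples
    odd = fft(x[1::2])                   # subproblem on odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]   # twiddle factor
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

print(fft([1, 2, 3, 4]))   # matches the 4-point DFT: [10, -2+2j, -2, -2-2j]
```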

    New Algorithms for Computing a Single Component of the Discrete Fourier Transform

    This paper introduces the theory and hardware implementation of two new algorithms for computing a single component of the discrete Fourier transform. In terms of multiplicative complexity, both algorithms are more efficient, in general, than the well-known Goertzel algorithm. Comment: 4 pages, 3 figures, 1 table. In: 10th International Symposium on Communication Theory and Applications, Ambleside, U
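
    For comparison, a minimal Python version of the classical Goertzel algorithm, the baseline against which the paper measures multiplicative complexity, is given below. The formulation that feeds one trailing zero sample is one of several equivalent conventions; the paper's two new algorithms are not reproduced here.

```python
# Baseline for comparison, not one of the paper's new algorithms: the classical
# Goertzel recurrence computes a single DFT component X[k] with one real
# multiplication (by 2*cos(w)) per input sample.
import cmath
import math

def goertzel(x, k):
    """Return X[k] = sum_n x[n] * exp(-2j*pi*k*n/N) for the length-N sequence x."""
    N = len(x)
    w = 2 * math.pi * k / N
    coeff = 2 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for sample in list(x) + [0.0]:           # one trailing zero closes the recurrence
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # now s_prev = s[N] and s_prev2 = s[N-1]; combine them into the DFT bin
    return s_prev - cmath.exp(-1j * w) * s_prev2

print(goertzel([1.0, 2.0, 3.0, 4.0], 1))     # about -2+2j, matching the 4-point DFT
```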

    A new Truncated Fourier Transform algorithm

    Truncated Fourier Transforms (TFTs), first introduced by Van der Hoeven, refer to a family of algorithms that attempt to smooth "jumps" in complexity exhibited by FFT algorithms. We present an in-place TFT whose time complexity, measured in terms of ring operations, is comparable to existing not-in-place TFT methods. We also describe a transformation that maps between two families of TFT algorithms that use different sets of evaluation points. Comment: 8 pages, submitted to the 38th International Symposium on Symbolic and Algebraic Computation (ISSAC 2013).
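
    To make the evaluation points concrete: in Van der Hoeven's formulation, a TFT of length $\ell \le n = 2^k$ returns the values of the input polynomial at the first $\ell$ of the $n$-th roots of unity taken in bit-reversed order. The sketch below computes exactly those values by direct (quadratic) evaluation; it documents the output convention only, not the pruned in-place butterflies that make an actual TFT fast, and the choice of primitive root is an illustrative convention.

```python
# Reference sketch only: shows which values a length-l TFT returns, namely the
# evaluations at the first l of the n-th roots of unity in bit-reversed order
# (Van der Hoeven's convention). Direct quadratic evaluation is used here; a
# real TFT obtains the same values with pruned in-place butterflies.
import cmath

def bit_reverse(i, bits):
    """Reverse the lowest `bits` bits of i."""
    return int(format(i, '0{}b'.format(bits))[::-1], 2)

def tft_values(coeffs, l, n):
    """Values of sum_j coeffs[j] * z^j at z = w^bit_reverse(i) for i = 0..l-1."""
    assert n & (n - 1) == 0 and len(coeffs) <= n and l <= n
    bits = n.bit_length() - 1
    w = cmath.exp(2j * cmath.pi / n)           # one choice of primitive n-th root
    points = [w ** bit_reverse(i, bits) for i in range(l)]
    return [sum(c * z ** j for j, c in enumerate(coeffs)) for z in points]

# a degree-4 input forces n = 8, but only l = 5 outputs are actually needed
print(tft_values([1, 2, 3, 4, 5], l=5, n=8))
```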

    Fast integer multiplication using generalized Fermat primes

    For almost 35 years, Schönhage-Strassen's algorithm has been the fastest algorithm known for multiplying integers, with a time complexity $O(n \times \log n \times \log\log n)$ for multiplying n-bit inputs. In 2007, Fürer proved that there exists $K > 1$ and an algorithm performing this operation in $O(n \times \log n \times K^{\log^* n})$. Recent work by Harvey, van der Hoeven, and Lecerf showed that this complexity estimate can be improved in order to get $K = 8$, and conjecturally $K = 4$. Using an alternative algorithm, which relies on arithmetic modulo generalized Fermat primes, we obtain conjecturally the same result $K = 4$ via a careful complexity analysis in the deterministic multitape Turing model.
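
    As a toy illustration of the underlying primitive, the sketch below multiplies integers with a number-theoretic transform modulo the Fermat prime $65537 = 2^{16}+1$, a small instance of the shape $r^{2^k}+1$; the generalized Fermat primes used in the paper are far larger. Base-16 digits and a size assertion keep every convolution coefficient below the modulus. None of this reflects the paper's algorithm or its complexity analysis.

```python
# Toy sketch only: integer multiplication via a number-theoretic transform (NTT)
# modulo the Fermat prime 65537 = 2^16 + 1 (smallest-base instance of r^(2^k)+1).
# It illustrates the modular-arithmetic ingredient, not the paper's algorithm.
P = 65537        # 2^16 + 1, prime; 3 generates its multiplicative group
ROOT = 3

def ntt(a, invert=False):
    """In-place iterative radix-2 NTT over Z/P; len(a) must be a power of two <= 2^16."""
    n = len(a)
    j = 0
    for i in range(1, n):                      # bit-reversal permutation
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:
        w_len = pow(ROOT, (P - 1) // length, P)    # primitive length-th root of unity
        if invert:
            w_len = pow(w_len, P - 2, P)
        for start in range(0, n, length):
            w = 1
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * w % P
                a[k], a[k + length // 2] = (u + v) % P, (u - v) % P
                w = w * w_len % P
        length <<= 1
    if invert:
        n_inv = pow(n, P - 2, P)
        a[:] = [v * n_inv % P for v in a]
    return a

def multiply(x, y):
    """Multiply non-negative integers x and y via an NTT on their base-16 digits."""
    xd = [int(d, 16) for d in format(x, 'x')[::-1]]    # little-endian hex digits
    yd = [int(d, 16) for d in format(y, 'x')[::-1]]
    n = 1
    while n < len(xd) + len(yd):
        n <<= 1
    assert 15 * 15 * n < P, "toy size limit: keeps convolution coefficients below P"
    fa = ntt(xd + [0] * (n - len(xd)))
    fb = ntt(yd + [0] * (n - len(yd)))
    prod = ntt([u * v % P for u, v in zip(fa, fb)], invert=True)
    # recombine base-16 digits; Python's big integers absorb the carries
    return sum(c << (4 * i) for i, c in enumerate(prod))

a, b = 123456789, 987654321
assert multiply(a, b) == a * b
```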

    Faster polynomial multiplication over finite fields

    Let $p$ be a prime, and let $M_p(n)$ denote the bit complexity of multiplying two polynomials in $\mathbb{F}_p[X]$ of degree less than $n$. For $n$ large compared to $p$, we establish the bound $M_p(n) = O(n \log n \, 8^{\log^* n} \log p)$, where $\log^*$ is the iterated logarithm. This is the first known Fürer-type complexity bound for $\mathbb{F}_p[X]$, and improves on the previously best known bound $M_p(n) = O(n \log n \log\log n \log p)$.
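
    For context, the classical way to reduce multiplication in $\mathbb{F}_p[X]$ to a single integer multiplication is Kronecker substitution, sketched below; the function name `mul_fp` and the padding choice are illustrative, and the paper's Fürer-type algorithm is far more sophisticated than this baseline.

```python
# Baseline sketch only (classical Kronecker substitution), not the paper's
# algorithm: multiplication in F_p[X] is reduced to one integer multiplication
# by packing each coefficient into its own block of bits, multiplying, and
# reducing the unpacked product coefficients mod p.

def mul_fp(a, b, p):
    """Multiply polynomials a, b over F_p (coefficient lists, lowest degree first)."""
    n = max(len(a), len(b))
    bits = (n * (p - 1) ** 2).bit_length()     # every product coefficient fits in this many bits
    A = sum(c << (bits * i) for i, c in enumerate(a))
    B = sum(c << (bits * i) for i, c in enumerate(b))
    C = A * B                                  # one big-integer multiplication
    mask = (1 << bits) - 1
    return [((C >> (bits * i)) & mask) % p for i in range(len(a) + len(b) - 1)]

# (1 + X) * (1 + X + X^2) = 1 + 2X + 2X^2 + X^3 over F_7
print(mul_fp([1, 1], [1, 1, 1], 7))            # -> [1, 2, 2, 1]
```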