6,427 research outputs found

    On Polynomial Multiplication in Chebyshev Basis

    In a recent paper, Lima, Panario and Wang provided a new method to multiply polynomials in Chebyshev basis which aims at reducing the total number of multiplications when the polynomials have small degree. Their idea is to use Karatsuba's multiplication scheme to improve upon the naive method, but without being able to get rid of its quadratic complexity. In this paper, we extend their result by providing a reduction scheme which allows one to multiply polynomials in Chebyshev basis using algorithms from the monomial basis case, and therefore to obtain the same asymptotic complexity estimate. Our reduction makes it possible to use any of these algorithms without converting the input polynomials to monomial basis, which yields a more direct reduction scheme than the one using conversions. We also demonstrate that our reduction is efficient in practice, and even outperforms the best known algorithm for Chebyshev basis when the polynomials have large degree. Finally, we establish a linear-time equivalence between the polynomial multiplication problem in monomial basis and in Chebyshev basis.
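    The conversion route that the paper's reduction aims to bypass can be sketched with numpy (an illustration only, not the authors' algorithm): multiply two Chebyshev-basis polynomials either directly, or by converting to monomial basis, multiplying there, and converting back. The two routes agree numerically.

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    a = np.array([1.0, 2.0, 3.0])   # a(x) = T0 + 2*T1 + 3*T2
    b = np.array([0.5, -1.0, 4.0])  # b(x) = 0.5*T0 - T1 + 4*T2

    # route 1: multiply directly in the Chebyshev basis
    direct = C.chebmul(a, b)

    # route 2: Chebyshev -> monomial, multiply, monomial -> Chebyshev
    pa, pb = C.cheb2poly(a), C.cheb2poly(b)
    via_monomial = C.poly2cheb(np.polynomial.polynomial.polymul(pa, pb))
    ```

    The paper's point is that route 2's conversions can be avoided while still reusing fast monomial-basis multiplication algorithms.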

    Chebyshev model arithmetic for factorable functions

    This article presents an arithmetic for the computation of Chebyshev models for factorable functions and an analysis of their convergence properties. Similar to Taylor models, Chebyshev models consist of a pair: a multivariate polynomial approximating the factorable function, and an interval remainder term bounding the actual gap with this polynomial approximant. Propagation rules and local convergence bounds are established for the addition, multiplication and composition operations with Chebyshev models. The global convergence of this arithmetic as the polynomial expansion order increases is also discussed. A generic implementation of Chebyshev model arithmetic is available in the library MC++. It is shown through several numerical case studies that Chebyshev models provide tighter bounds than their Taylor model counterparts, but this comes at the price of extra computational burden.
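    A crude sketch of the polynomial-plus-remainder idea (not the MC++ implementation): build a Chebyshev interpolant of f on [-1, 1] and estimate a remainder bound by dense sampling. A rigorous Chebyshev model arithmetic would instead propagate a certified interval remainder through addition, multiplication and composition.

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    def cheb_model(f, n, n_samples=2000):
        # interpolate f at n+1 Chebyshev points of the first kind
        xs = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))
        coeffs = C.chebfit(xs, f(xs), n)
        # estimated remainder bound (sampling; NOT a certified enclosure)
        grid = np.linspace(-1.0, 1.0, n_samples)
        rem = np.max(np.abs(f(grid) - C.chebval(grid, coeffs)))
        return coeffs, rem

    coeffs, rem = cheb_model(np.exp, 8)
    ```

    For smooth f such as exp, the remainder decays rapidly with the expansion order, which is the convergence behavior the article analyzes.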

    A fast and well-conditioned spectral method

    A novel spectral method is developed for the direct solution of linear ordinary differential equations with variable coefficients. The method leads to matrices which are almost banded, and a numerical solver is presented that takes O(m^2 n) operations, where m is the number of Chebyshev points needed to resolve the coefficients of the differential operator and n is the number of Chebyshev points needed to resolve the solution to the differential equation. We prove stability of the method by relating it to a diagonally preconditioned system which has a bounded condition number in a suitable norm. For Dirichlet boundary conditions, this reduces to stability in the standard 2-norm.
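    For contrast, a naive dense coefficient-space Chebyshev solve of a model problem u''(x) = f(x) on [-1, 1] with u(-1) = u(1) = 0 can be sketched as follows (an illustration, not the paper's method): it produces a dense, increasingly ill-conditioned system, which is exactly the behavior the paper's almost-banded, well-conditioned scheme avoids.

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    n = 16                                     # number of Chebyshev coefficients
    # second-derivative operator acting on Chebyshev coefficients
    D2 = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = 1.0
        d = C.chebder(e, 2)                    # 2nd derivative of T_j, cheb basis
        D2[:len(d), j] = d

    f = lambda x: np.exp(x)
    # collocate the ODE at n-2 interior Chebyshev-Lobatto points...
    x = np.cos(np.pi * np.arange(1, n - 1) / (n - 1))
    V = C.chebvander(x, n - 1)                 # T_0..T_{n-1} evaluated at x
    # ...and append two boundary rows enforcing u(-1) = u(1) = 0
    A = np.vstack([V @ D2, C.chebvander(np.array([-1.0, 1.0]), n - 1)])
    rhs = np.concatenate([f(x), [0.0, 0.0]])
    u = np.linalg.solve(A, rhs)                # Chebyshev coefficients of u
    ```

    The exact solution here is u(x) = e^x - x sinh(1) - cosh(1), which the dense solve recovers at this small n; the paper's contribution is keeping both the operation count and the conditioning under control as n grows.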

    Chebyshev polynomials and the Frohman-Gelca formula

    Using Chebyshev polynomials, C. Frohman and R. Gelca introduced a basis of the Kauffman bracket skein module of the torus. This basis is especially useful because the Jones-Kauffman product can be described via a very simple product-to-sum formula. Presented in this work is a diagrammatic proof of this formula, which emphasizes and demystifies the role played by Chebyshev polynomials.
    Comment: 13 pages
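    The classical identity underlying the product-to-sum formula is T_m(x) T_n(x) = (T_{m+n}(x) + T_{|m-n|}(x)) / 2. A quick numerical check with numpy (the paper's content is a diagrammatic skein-theoretic proof of the analogue, not this computation):

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    def T(k, size):
        # coefficient vector of the k-th Chebyshev polynomial T_k
        c = np.zeros(size); c[k] = 1.0
        return c

    m, n = 5, 3
    prod = C.chebmul(T(m, m + 1), T(n, n + 1))   # coefficients of T_m * T_n
    expected = 0.5 * (T(m + n, m + n + 1) + T(abs(m - n), m + n + 1))
    ```

    The product of two basis elements is again a sum of just two basis elements, which is what makes the Frohman-Gelca basis so convenient.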

    Phases of N=1 Supersymmetric SO/Sp Gauge Theories via Matrix Model

    We extend the results of Cachazo, Seiberg and Witten to N=1 supersymmetric gauge theories with gauge groups SO(2N), SO(2N+1) and Sp(2N). By taking the superpotential, an arbitrary polynomial in the adjoint matter \Phi, as a small perturbation of N=2 gauge theories, we examine the singular points preserving N=1 supersymmetry in the moduli space where mutually local monopoles become massless. We derive the matrix model complex curve for the whole range of the degree of the perturbed superpotential. Then we determine a generalized Konishi anomaly equation incorporating the orientifold contribution. We turn to the multiplication map and the confinement index K and describe both the Coulomb branch and the confining branch. In particular, we construct a multiplication map from SO(2N+1) to SO(2KN-K+2) where K is an even integer, as well as a multiplication map from SO(2N) to SO(2KN-2K+2) (K is a positive integer), a map from SO(2N+1) to SO(2KN-K+2) (K is an odd integer) and a map from Sp(2N) to Sp(2KN+2K-2). Finally we analyze some examples which exhibit a duality: the same moduli space has two different semiclassical limits corresponding to distinct gauge groups.
    Comment: 55pp; two paragraphs on page 19 added to clarify the relation between confinement index and multiplication map index, refs added and to appear in JHEP; Konishi anomaly equations corrected and some comments on the degenerate cases for SO(7) and SO(8) added

    A low multiplicative complexity fast recursive DCT-2 algorithm

    A fast Discrete Cosine Transform (DCT) algorithm is introduced that can be of particular interest in image processing. The main features of the algorithm are the regularity of its graph and very low arithmetic complexity. The 16-point version of the algorithm requires only 32 multiplications and 81 additions. The computational core of the algorithm consists of only 17 nontrivial multiplications; the remaining 15 are scaling factors that can be compensated in the post-processing. The derivation of the algorithm is based on the algebraic signal processing theory (ASP).
    Comment: 4 pages, 2 figures
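    The transform being accelerated, written out from its direct O(N^2) definition (an illustration for reference, not the paper's fast algorithm): X[k] = sum_n x[n] cos(pi (2n+1) k / (2N)). The direct 16-point form uses N^2 = 256 multiplications, against the algorithm's 32.

    ```python
    import numpy as np

    def dct2_matrix(N):
        # direct DCT-2 matrix: M[k, n] = cos(pi * (2n + 1) * k / (2N))
        n = np.arange(N)
        k = n.reshape(-1, 1)
        return np.cos(np.pi * (2 * n + 1) * k / (2 * N))

    M = dct2_matrix(16)
    # with the usual row scaling, the DCT-2 matrix becomes orthogonal
    S = np.diag([np.sqrt(1 / 16)] + [np.sqrt(2 / 16)] * 15)
    O = S @ M
    ```

    The scaling rows of S are exactly the kind of factors that a scaled-output DCT can defer to post-processing, which is how the algorithm reduces the count of nontrivial multiplications.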

    Polynomial Tensor Sketch for Element-wise Function of Low-Rank Matrix

    This paper studies how to sketch element-wise functions of low-rank matrices. Formally, given a low-rank matrix A = [Aij] and a scalar non-linear function f, we aim to find an approximate low-rank representation of the (possibly high-rank) matrix [f(Aij)]. To this end, we propose an efficient sketching-based algorithm whose complexity is significantly lower than the number of entries of A, i.e., it runs without accessing all entries of [f(Aij)] explicitly. The main idea underlying our method is to combine a polynomial approximation of f with the existing tensor sketch scheme for approximating monomials of entries of A. To balance the errors of the two approximation components in an optimal manner, we propose a novel regression formula to find polynomial coefficients given A and f. In particular, we utilize a coreset-based regression with a rigorous approximation guarantee. Finally, we demonstrate the applicability and superiority of the proposed scheme on various machine learning tasks.
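    A hedged sketch of the structural fact the method exploits (without the tensor-sketch compression itself): for low-rank A = U V^T, the elementwise power [A_ij^k] is again low-rank, with factors given by row-wise Khatri-Rao powers of U and V. A polynomial sum of such terms then approximates [f(Aij)] in factored form; the paper additionally sketches these factors so the cost stays below the number of entries of A.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    m, r = 6, 2
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((m, r))
    A = U @ V.T                               # rank-r matrix

    def kr_row_power(X, k):
        # row-wise k-fold Khatri-Rao power: each row becomes the vector
        # of all ordered degree-k products of its entries
        Y = X
        for _ in range(k - 1):
            Y = np.einsum('ip,iq->ipq', Y, X).reshape(X.shape[0], -1)
        return Y

    k = 3
    hadamard_pow = A ** k                     # elementwise A_ij^k
    factored = kr_row_power(U, k) @ kr_row_power(V, k).T
    ```

    The factored form has inner dimension r^k, which grows quickly with the degree; this is the blow-up that the tensor sketch in the paper compresses.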