    Hermite matrix in Lagrange basis for scaling static output feedback polynomial matrix inequalities

    Using Hermite's formulation of polynomial stability conditions, static output feedback (SOF) controller design can be formulated as a polynomial matrix inequality (PMI), a (generally nonconvex) nonlinear semidefinite programming problem that can be solved (locally) with PENNON, an implementation of a penalty method. Typically, Hermite SOF PMI problems are badly scaled, and experiments reveal that this has a negative impact on the overall performance of the solver. In this note we recall the algebraic interpretation of Hermite's quadratic form as a particular Bezoutian, and we use results on polynomial interpolation to express the Hermite PMI in a Lagrange polynomial basis, as an alternative to the conventional power basis. Numerical experiments on benchmark problem instances show the substantial improvement brought by the approach, in terms of problem scaling, number of iterations and convergence behavior of PENNON.
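
    As a toy illustration of the basis-change idea (this is not the authors' PENNON formulation), the following sketch compares the conditioning of the power-basis Vandermonde matrix at Chebyshev points, an assumed choice of interpolation nodes, with the identity map that plays its role in a Lagrange basis:

```python
import numpy as np

# Minimal sketch: why a Lagrange (interpolation) basis can be better
# scaled than the power basis. Chebyshev points are an assumed choice
# of nodes; the paper works with a Hermite matrix inequality instead.
n = 12
k = np.arange(n)
nodes = np.cos((2 * k + 1) * np.pi / (2 * n))  # Chebyshev points in [-1, 1]

# Power basis: evaluating a coefficient vector at the nodes applies the
# Vandermonde matrix V[i, j] = nodes[i] ** j, which is ill-conditioned.
V = np.vander(nodes, increasing=True)
print("cond in power basis:   ", np.linalg.cond(V))

# Lagrange basis at the same nodes: a polynomial is represented by its
# values at the nodes, so evaluation there is the (perfectly
# conditioned) identity map.
print("cond in Lagrange basis:", np.linalg.cond(np.eye(n)))
```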

    Algebraic Signal Processing Theory: Cooley-Tukey Type Algorithms for Polynomial Transforms Based on Induction

    A polynomial transform is the multiplication of an input vector $x \in \mathbb{C}^n$ by a matrix $\mathcal{P}_{b,\alpha} \in \mathbb{C}^{n \times n}$, whose $(k,\ell)$-th element is defined as $p_\ell(\alpha_k)$ for polynomials $p_\ell(x) \in \mathbb{C}[x]$ from a list $b = \{p_0(x), \dots, p_{n-1}(x)\}$ and sample points $\alpha_k \in \mathbb{C}$ from a list $\alpha = \{\alpha_0, \dots, \alpha_{n-1}\}$. Such transforms find applications in the areas of signal processing, data compression, and function interpolation. Important examples include the discrete Fourier and cosine transforms. In this paper we introduce a novel technique to derive fast algorithms for polynomial transforms. The technique uses the relationship between polynomial transforms and the representation theory of polynomial algebras. Specifically, we derive algorithms by decomposing the regular modules of these algebras as a stepwise induction. As an application, we derive novel $O(n \log n)$ general-radix algorithms for the discrete Fourier transform and the discrete cosine transform of type 4.

    Comment: 19 pages. Submitted to SIAM Journal on Matrix Analysis and Applications
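
    The object defined above is straightforward to construct directly. The sketch below builds the transform matrix entry by entry in $O(n^2)$ time and checks the DFT special case $p_\ell(x) = x^\ell$, $\alpha_k = e^{-2\pi i k/n}$; the paper's contribution, the $O(n \log n)$ algorithms, is not reproduced here:

```python
import numpy as np

# Direct O(n^2) construction of a polynomial transform P[k, l] = p_l(alpha_k).
# With p_l(x) = x**l and alpha_k = exp(-2j*pi*k/n) it is the DFT matrix.
def polynomial_transform(polys, alphas):
    """polys: list of callables p_l; alphas: sample points."""
    return np.array([[p(a) for p in polys] for a in alphas])

n = 4
alphas = np.exp(-2j * np.pi * np.arange(n) / n)
polys = [lambda x, l=l: x ** l for l in range(n)]
P = polynomial_transform(polys, alphas)

# P agrees with the DFT matrix, so P @ x computes the DFT of x.
print(np.allclose(P, np.fft.fft(np.eye(n))))
```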

    The Leja method revisited: backward error analysis for the matrix exponential

    The Leja method is a polynomial interpolation procedure that can be used to compute matrix functions. In particular, computing the action of the matrix exponential on a given vector is a typical application. This quantity is required, e.g., in exponential integrators. The Leja method essentially depends on three parameters: the scaling parameter, the location of the interpolation points, and the degree of interpolation. We present here a backward error analysis that allows us to determine these three parameters as a function of the prescribed accuracy. Additional aspects that are required for an efficient and reliable implementation are discussed. Numerical examples that illustrate the performance of our Matlab code are included.
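
    A minimal sketch of the method's two ingredients, Leja points and Newton interpolation, applied to a toy diagonal matrix. The degree and interval below are fixed by hand; selecting them from the prescribed accuracy is precisely the subject of the paper:

```python
import numpy as np

def leja_points(a, b, m, n_candidates=1000):
    """Greedily pick m+1 Leja points from a fine grid on [a, b]."""
    grid = np.linspace(a, b, n_candidates)
    pts = [grid[np.argmax(np.abs(grid))]]
    for _ in range(m):
        prod = np.ones_like(grid)
        for p in pts:
            prod *= np.abs(grid - p)
        pts.append(grid[np.argmax(prod)])
    return np.array(pts)

def divided_differences(x, f):
    """Newton divided differences of f at the nodes x."""
    d = f(x)
    for j in range(1, len(x)):
        d[j:] = (d[j:] - d[j - 1:-1]) / (x[j:] - x[:-j])
    return d

A = np.diag([-1.0, -0.5, 0.0, 0.5])  # toy matrix with known exponential
v = np.ones(4)
m = 12                               # interpolation degree, fixed by hand
xi = leja_points(-1.0, 1.0, m)
d = divided_differences(xi, np.exp)

# Newton form: y = sum_j d_j * prod_{i<j} (A - xi_i I) v, built up with
# one matrix-vector product per degree.
w, y = v.copy(), d[0] * v
for j in range(1, m + 1):
    w = A @ w - xi[j - 1] * w
    y = y + d[j] * w

# For a diagonal A the exact answer is exp(diag(A)) * v, elementwise.
print(np.allclose(y, np.exp(np.diag(A)) * v, atol=1e-8))
```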

    Sparse implicitization by interpolation: Characterizing non-exactness and an application to computing discriminants

    We revisit implicitization by interpolation in order to examine its properties in the context of sparse elimination theory. Based on the computation of a superset of the implicit support, implicitization is reduced to computing the nullspace of a numeric matrix. The approach is applicable to polynomial and rational parameterizations of curves and (hyper)surfaces of any dimension, including the case of parameterizations with base points. Our support prediction is based on sparse (or toric) resultant theory, in order to exploit the sparsity of the input and the output. Our method may yield a multiple of the implicit equation: we characterize and quantify this situation by relating the nullspace dimension to the predicted support and its geometry. In this case, we obtain more than one multiple of the implicit equation; the implicit equation itself can then be recovered via multivariate polynomial gcd (or factoring). All of the above techniques extend to the case of approximate computation, thus yielding a method of sparse approximate implicitization, which is important in tackling larger problems. We discuss our publicly available Maple implementation through several examples, including the benchmark bicubic surface. As a novel application, we focus on computing the discriminant of a multivariate polynomial, which characterizes the existence of multiple roots and generalizes the resultant of a polynomial system. This yields an efficient, output-sensitive algorithm for computing the discriminant polynomial.
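
    A hedged sketch of the interpolation step for the simplest possible example, the unit circle. Here the superset of the implicit support is hard-coded; predicting it via sparse resultants is the subject of the paper:

```python
import numpy as np

# Implicitization by interpolation for x = cos(t), y = sin(t): assume
# the support superset {1, x, y, x^2, x*y, y^2} and recover the implicit
# equation as the nullspace of a matrix of monomials at parameter samples.
t = np.linspace(0.1, 3.0, 12)        # parameter samples
x, y = np.cos(t), np.sin(t)
monomials = np.column_stack([np.ones_like(t), x, y, x**2, x*y, y**2])

# A nullspace vector holds the coefficients of the implicit polynomial.
_, s, Vt = np.linalg.svd(monomials)
coeffs = Vt[-1] / Vt[-1, 0]          # normalize the constant term to 1
print(np.round(coeffs, 6))           # ~ [1, 0, 0, -1, 0, -1]: 1 - x^2 - y^2 = 0
```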

    Polynomial-Division-Based Algorithms for Computing Linear Recurrence Relations

    Sparse polynomial interpolation, sparse linear system solving and modular rational reconstruction are fundamental problems in computer algebra. They come down to computing linear recurrence relations of a sequence with the Berlekamp-Massey algorithm. Likewise, sparse multivariate polynomial interpolation and multidimensional cyclic code decoding require guessing linear recurrence relations of a multivariate sequence.

    Several algorithms solve this problem. The so-called Berlekamp-Massey-Sakata algorithm (1988) uses polynomial additions and shifts by a monomial. The Scalar-FGLM algorithm (2015) relies on linear algebra operations on a multi-Hankel matrix, a multivariate generalization of a Hankel matrix. The Artinian Gorenstein border basis algorithm (2017) uses a Gram-Schmidt process.

    We propose a new algorithm for computing the Gröbner basis of the ideal of relations of a sequence based solely on multivariate polynomial arithmetic. This algorithm allows us both to revisit the Berlekamp-Massey-Sakata algorithm through the use of polynomial divisions and to completely revise the Scalar-FGLM algorithm without linear algebra operations. A key observation in the design of this algorithm is to work on the mirror of the truncated generating series, allowing us to use polynomial arithmetic modulo a monomial ideal. The approach has some similarities with Padé approximants of this mirror polynomial.

    In addition to the material of the paper published at the ISSAC conference, we give an adaptive variant of this algorithm that takes into account the shape of the final Gröbner basis gradually as it is discovered. The main advantage of this algorithm is that its complexity, in terms of operations and sequence queries, depends only on the output Gröbner basis. All these algorithms have been implemented in Maple and we report on our comparisons.
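
    For reference, a minimal sketch of the classical univariate case that all of these algorithms generalize: the Berlekamp-Massey algorithm over the rationals, computing the shortest recurrence $s_n = c_1 s_{n-1} + \dots + c_L s_{n-L}$. The multivariate, Gröbner-basis setting of the paper is not reproduced here:

```python
from fractions import Fraction

def berlekamp_massey(s):
    """Shortest recurrence s[n] = c[0]*s[n-1] + ... + c[L-1]*s[n-L]."""
    s = [Fraction(x) for x in s]
    C, B = [Fraction(1)], [Fraction(1)]   # current/previous connection polys
    L, m, b = 0, 1, Fraction(1)
    for n, sn in enumerate(s):
        # discrepancy between the predicted and the actual term
        d = sn + sum(C[i] * s[n - i] for i in range(1, L + 1))
        if d == 0:
            m += 1
            continue
        T = C[:]
        C = C + [Fraction(0)] * max(0, len(B) + m - len(C))
        for i, bi in enumerate(B):        # C(x) -= (d/b) * x^m * B(x)
            C[i + m] -= d / b * bi
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    coeffs = C[1:] + [Fraction(0)] * L
    return [-c for c in coeffs[:L]]

print(berlekamp_massey([1, 1, 2, 3, 5, 8, 13, 21]))  # Fibonacci: [1, 1]
```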

    Nearly Optimal Computations with Structured Matrices

    We estimate the Boolean complexity of multiplying structured matrices by a vector and of solving nonsingular linear systems of equations with these matrices. We study the four most popular basic classes, namely Toeplitz, Hankel, Cauchy and Vandermonde matrices, for which the cited computational problems are equivalent to the tasks of polynomial multiplication and division and of polynomial and rational multipoint evaluation and interpolation. The Boolean cost estimates for the latter problems were obtained by Kirrinnis in \cite{kirrinnis-joc-1998}, except for rational interpolation, which we supply now. All previously known Boolean cost estimates for these problems rely on using the Kronecker product. This implies a $d$-fold precision increase for a degree-$d$ output, but we avoid such an increase by relying on distinct techniques based on the FFT. Furthermore, we simplify the analysis and make it more transparent by representing our tasks and algorithms in terms of both structured matrices and polynomials and rational functions. This also enables further extensions of our estimates to cover Trummer's important problem and computations with the popular classes of structured matrices that generalize the four cited basic classes.

    Comment: 2014-04-10
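
    The Toeplitz case of the matrix-polynomial correspondence is easy to make concrete: a Toeplitz matrix-vector product is a slice of a polynomial product, hence computable in $O(n \log n)$ via a circulant embedding and the FFT. A minimal sketch in fixed precision, so it ignores the Boolean-complexity issues the paper is about:

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """y = T x for the Toeplitz matrix with first column c, first row r."""
    n = len(x)
    # First column of a 2n x 2n circulant matrix that embeds T; the FFT
    # diagonalizes circulants, giving the product in O(n log n).
    col = np.concatenate([c, [0], r[:0:-1]])
    y = np.fft.ifft(np.fft.fft(col) *
                    np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

c = np.array([1.0, 2.0, 3.0])        # first column of T
r = np.array([1.0, 4.0, 5.0])        # first row of T (r[0] == c[0])
x = np.array([1.0, 1.0, 1.0])
T = np.array([[1, 4, 5], [2, 1, 4], [3, 2, 1]], dtype=float)
print(toeplitz_matvec(c, r, x), T @ x)   # both give [10. 7. 6.]
```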