1,573 research outputs found

    Fast Approximate Computations with Cauchy Matrices and Polynomials

    Multipoint polynomial evaluation and interpolation are fundamental for modern symbolic and numerical computing. The known algorithms solve both problems over any field of constants in nearly linear arithmetic time, but the cost grows to quadratic for numerical solution. We fix this discrepancy: our new numerical algorithms run in nearly linear arithmetic time. First we restate our goals as the multiplication of an n-by-n Vandermonde matrix by a vector and the solution of a Vandermonde linear system of n equations. Then we transform the matrix into a Cauchy structured matrix with some special features. By exploiting them, we approximate the matrix by a generalized hierarchically semiseparable matrix, which is a structured matrix of a different class. Finally we accelerate our solution to the original problems by applying the Fast Multipole Method to the latter matrix. Our resulting numerical algorithms run in nearly optimal arithmetic time when they perform the above fundamental computations with polynomials, Vandermonde matrices, transposed Vandermonde matrices, and a large class of Cauchy and Cauchy-like matrices. Some of our techniques may be of independent interest. Comment: 31 pages, 7 figures, 9 tables
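    As a point of reference for the restatement above, the following numpy sketch (illustrative only; it is the quadratic-time baseline, not the paper's fast algorithm) carries out multipoint evaluation as a Vandermonde matrix-vector product and interpolation as the corresponding linear solve.

    ```python
    import numpy as np

    # Multipoint evaluation of p(x) = c_0 + c_1 x + ... + c_{n-1} x^{n-1} at nodes
    # s_0, ..., s_{n-1} is the product of the Vandermonde matrix V[i, j] = s_i**j
    # with the coefficient vector c; interpolation is the corresponding linear solve.
    n = 8
    s = np.linspace(-1.0, 1.0, n)              # evaluation nodes (illustrative choice)
    rng = np.random.default_rng(1)
    c = rng.standard_normal(n)                 # coefficients, lowest degree first

    V = s[:, None] ** np.arange(n)[None, :]    # n-by-n Vandermonde matrix, O(n^2) work
    values = V @ c                             # multipoint evaluation as a matvec
    assert np.allclose(values, np.polyval(c[::-1], s))   # agrees with Horner evaluation
    assert np.allclose(np.linalg.solve(V, values), c)    # interpolation recovers c
    ```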

    Structured inversion of the Bernstein-Vandermonde Matrix

    Bernstein polynomials, long a staple of approximation theory and computational geometry, have also increasingly become of interest in finite element methods. Many fundamental problems in interpolation and approximation give rise to interesting linear algebra questions. When attempting to find a polynomial approximation of boundary or initial data, one encounters the Bernstein-Vandermonde matrix, which is found to be highly ill-conditioned. Previously, we used the relationship between monomial Bezout matrices and the inverses of Hankel matrices to obtain a decomposition of the inverse of the Bernstein mass matrix in terms of Hankel, Toeplitz, and diagonal matrices. In this paper, we use properties of the Bernstein-Bezout matrix to factor the inverse of the Bernstein-Vandermonde matrix into a difference of products of Hankel, Toeplitz, and diagonal matrices. We also use a nonstandard matrix norm to study the conditioning of the Bernstein-Vandermonde matrix, showing that the conditioning in this case is better than in the standard 2-norm. Additionally, we use properties of multivariate Bernstein polynomials to derive a block $LU$ decomposition of the Bernstein-Vandermonde matrix corresponding to equispaced nodes on the $d$-simplex. Comment: 21 pages, 4 figures
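    A small sketch of the object in question (not the factorization derived in the paper): the code below builds the Bernstein-Vandermonde collocation matrix at equispaced nodes in [0, 1] and prints its 2-norm condition number, which grows rapidly with the degree; the function name and node choice are illustrative.

    ```python
    import numpy as np
    from math import comb

    def bernstein_vandermonde(x, n):
        """Collocation matrix of the degree-n Bernstein basis at the nodes x:
        entries B_{n,j}(x_i) = C(n, j) * x_i**j * (1 - x_i)**(n - j)."""
        j = np.arange(n + 1)
        x = np.asarray(x, dtype=float)[:, None]
        return np.array([comb(n, k) for k in j]) * x**j * (1.0 - x)**(n - j)

    for n in (5, 10, 15, 20):
        A = bernstein_vandermonde(np.linspace(0.0, 1.0, n + 1), n)
        print(n, np.linalg.cond(A))        # 2-norm condition number grows quickly with n
    ```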

    Structured matrices in the application of bivariate interpolation to curve implicitization

    A nonstandard application of bivariate polynomial interpolation is discussed: the implicitization of a rational algebraic curve given by its parametric equations. Three different approaches using the same interpolation space are considered, and their respective computational complexities are analyzed. Although the techniques employed are usually associated with numerical analysis, in this case all the computations are carried out using exact rational arithmetic. The power of the Kronecker product of matrices in this application is stressed. Comment: 13 pages
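    The general idea (a sketch only, not any of the three approaches compared in the paper) can be shown with sympy: sample the parametrization at rational parameter values, evaluate a basis of bivariate monomials at the samples in exact rational arithmetic, and read the implicit equation off the nullspace of the resulting interpolation matrix. The circle parametrization below is an illustrative example, not taken from the paper.

    ```python
    from sympy import Rational, Matrix, symbols, factor

    x, y = symbols('x y')

    # Illustrative rational parametrization of the unit circle:
    #   x(t) = (1 - t^2)/(1 + t^2),  y(t) = 2 t/(1 + t^2)
    X = lambda t: (1 - t**2) / (1 + t**2)
    Y = lambda t: 2 * t / (1 + t**2)

    # Interpolation space: bivariate monomials of total degree <= 2.
    monomials = [x**a * y**b for a in range(3) for b in range(3 - a)]

    # Evaluate every monomial at sample points of the curve, in exact rational arithmetic.
    params = [Rational(k, 7) for k in range(1, 9)]
    M = Matrix([[m.subs({x: X(t), y: Y(t)}) for m in monomials] for t in params])

    # A nullspace vector gives the coefficients of a polynomial vanishing on the curve.
    c = M.nullspace()[0]
    print(factor(sum(ci * m for ci, m in zip(c, monomials))))   # a multiple of x**2 + y**2 - 1
    ```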

    A fast and accurate algorithm for solving Bernstein-Vandermonde linear systems

    A fast and accurate algorithm for solving a Bernstein-Vandermonde linear system is presented. The algorithm is derived by using results related to the bidiagonal decomposition of the inverse of a totally positive matrix by means of Neville elimination. The use of explicit expressions for the determinants involved in the process serves to make the algorithm both fast and accurate. Comment: 13 pages. We have extended the numerical experiments
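    The explicit determinantal expressions and the resulting bidiagonal factorization are not reproduced here; the sketch below only illustrates Neville elimination itself, in which zeros are created by subtracting a multiple of the row immediately above rather than the pivot row (assuming a nonsingular totally positive matrix, so no row exchanges are needed).

    ```python
    import numpy as np

    def neville_elimination(A):
        """Reduce A to upper triangular form with Neville elimination: working
        bottom-up in each column, subtract a multiple of the row immediately above.
        A bare sketch assuming a nonsingular totally positive input (no exchanges)."""
        U = np.array(A, dtype=float)
        n = U.shape[0]
        for k in range(n - 1):
            for i in range(n - 1, k, -1):
                if U[i, k] != 0.0:
                    U[i, :] -= (U[i, k] / U[i - 1, k]) * U[i - 1, :]
        return U

    # A Vandermonde matrix with increasing positive nodes is totally positive.
    nodes = np.array([0.1, 0.4, 0.7, 1.0])
    V = nodes[:, None] ** np.arange(4)
    print(neville_elimination(V))          # upper triangular up to rounding
    ```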

    On applying the maximum volume principle to a basis selection problem in multivariate polynomial interpolation

    The maximum volume principle is investigated as a means to solve the following problem: given a set of arbitrary interpolation nodes, how should one choose a set of polynomial basis functions for which the Lagrange interpolation problem is well-defined and has a reasonable interpolation error? The interpolation error is controlled by the Lebesgue constant of multivariate polynomial interpolation, and it is proven that the Lebesgue constant can effectively be bounded by the reciprocals of the volume (i.e., the determinant in modulus) and the minimal singular value of the multidimensional Vandermonde matrix associated with the interpolation problem. This suggests that a large volume of the Vandermonde system can be used as an indicator of the accuracy and stability of the resulting interpolating polynomial. Numerical examples demonstrate that the approach outlined in this paper works remarkably well in practical computations. Comment: 18 pages, 6 figures
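    A cheap greedy surrogate for the maximum volume principle is QR factorization with column pivoting, which picks columns of the multidimensional Vandermonde matrix spanning a large-volume square submatrix; the sketch below uses it only to illustrate basis selection at arbitrary nodes and is not the procedure analyzed in the paper (node set, degrees, and names are assumptions).

    ```python
    import numpy as np
    from scipy.linalg import qr

    rng = np.random.default_rng(3)
    nodes = rng.uniform(0.0, 1.0, size=(8, 2))       # 8 arbitrary interpolation nodes in 2D

    # Candidate basis: bivariate monomials x**a * y**b with a + b <= 4, sorted by degree.
    exps = sorted(((a, b) for a in range(5) for b in range(5 - a)), key=lambda e: (sum(e), e))
    A = np.column_stack([nodes[:, 0]**a * nodes[:, 1]**b for a, b in exps])

    # Column-pivoted QR greedily selects columns with large remaining volume.
    _, _, piv = qr(A, mode='economic', pivoting=True)
    print('selected monomials:', [exps[j] for j in piv[:8]])
    print('|det|, lowest-degree choice :', abs(np.linalg.det(A[:, :8])))
    print('|det|, pivoted (maxvol-like):', abs(np.linalg.det(A[:, piv[:8]])))
    ```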

    Transformations of Matrix Structures Work Again

    In 1989 we proposed to employ Vandermonde and Hankel multipliers to transform the matrix structures of Toeplitz, Hankel, Vandermonde, and Cauchy types into each other, as a means of extending any successful algorithm for the inversion of matrices having one of these structures to inverting matrices with the structures of the three other types. The surprising power of this approach has been demonstrated in a number of works, which culminated in ingenious numerically stable algorithms that approximated the solution of a nonsingular Toeplitz linear system in nearly linear (versus previously cubic) arithmetic time. We first revisit this powerful method, covering it comprehensively, and then specialize it to yield a similar acceleration of the known algorithms for computations with matrices having structures of Vandermonde or Cauchy types. In particular we arrive at numerically stable approximate multipoint polynomial evaluation and interpolation in nearly linear arithmetic time. Comment: 20 pages
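    One well-known instance of such a structure transformation can be checked numerically: multiplying a Vandermonde matrix on the right by the conjugate of the unitary DFT matrix produces a diagonally scaled Cauchy matrix whose second node set consists of the roots of unity. The identity is assumed here in the stated form (nodes are kept off the unit circle so the diagonal factor is nonsingular); the code merely verifies it to rounding error.

    ```python
    import numpy as np

    n = 8
    rng = np.random.default_rng(0)
    s = rng.uniform(1.5, 2.5, n)                 # Vandermonde nodes kept away from |s| = 1
    k = np.arange(n)
    w = np.exp(2j * np.pi / n)                   # primitive n-th root of unity

    V = s[:, None] ** k[None, :]                 # Vandermonde: V[i, j] = s_i**j
    F = w ** np.outer(k, k) / np.sqrt(n)         # unitary DFT matrix
    C = 1.0 / (s[:, None] - w ** k[None, :])     # Cauchy: C[i, k] = 1 / (s_i - w**k)

    # Assumed identity:  V = diag(s**n - 1) @ C @ diag(w**k / sqrt(n)) @ F
    V_rebuilt = np.diag(s**n - 1) @ C @ np.diag(w**k / np.sqrt(n)) @ F
    print(np.max(np.abs(V - V_rebuilt)))         # agreement up to rounding
    ```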

    Confluent Vandermonde matrices, divided differences, and Lagrange-Hermite interpolation over quaternions

    We introduce the notion of a confluent Vandermonde matrix with quaternion entries and discuss its connection with Lagrange-Hermite interpolation over quaternions. Further results include a formula for the rank of a confluent Vandermonde matrix, a representation formula for divided differences of quaternion polynomials, and their extensions to the formal power series setting.
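    For orientation, the sketch below builds the commutative (real) analogue of a confluent Vandermonde matrix, whose rows interleave successive derivatives of the monomials at repeated nodes, and uses it to solve a small Lagrange-Hermite problem; the quaternionic setting of the paper is not reproduced.

    ```python
    import numpy as np
    from math import perm    # perm(j, r) = j! / (j - r)!, the falling factorial

    def confluent_vandermonde(nodes, mults):
        """Real confluent Vandermonde matrix: a node x of multiplicity m contributes
        the rows d^r/dx^r [1, x, x^2, ...] for r = 0, ..., m - 1."""
        n = sum(mults)
        rows = []
        for x, m in zip(nodes, mults):
            for r in range(m):
                rows.append([perm(j, r) * x**(j - r) if j >= r else 0.0 for j in range(n)])
        return np.array(rows)

    # Lagrange-Hermite data: match f and f' at x = 0 and x = 1 for f(x) = exp(x).
    V = confluent_vandermonde([0.0, 1.0], [2, 2])
    coeffs = np.linalg.solve(V, np.array([1.0, 1.0, np.e, np.e]))   # cubic Hermite interpolant
    print(np.polyval(coeffs[::-1], 0.5), np.exp(0.5))               # close at the midpoint
    ```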

    On $m$-dimensional toric codes

    Toric codes are a class of $m$-dimensional cyclic codes introduced recently by J. Hansen. They may be defined as evaluation codes obtained from monomials corresponding to integer lattice points in an integral convex polytope $P \subseteq \mathbb{R}^m$. As such, they are in a sense a natural extension of Reed-Solomon codes. Several authors have used intersection theory on toric surfaces to derive bounds on the minimum distance of some toric codes with $m = 2$. In this paper, we will provide a more elementary approach that applies equally well to many toric codes for all $m \ge 2$. Our methods are based on a sort of multivariate generalization of Vandermonde determinants that has also been used in the study of multivariate polynomial interpolation. We use these Vandermonde determinants to determine the minimum distance of toric codes from rectangular polytopes and simplices. We also prove a general result showing that if there is a unimodular integer affine transformation taking one polytope $P_1$ to a second polytope $P_2$, then the corresponding toric codes are monomially equivalent (hence have the same parameters). We use this to begin a classification of two-dimensional toric codes with small dimension. Comment: 17 pages, 4 figures; typos corrected
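    A tiny worked example (illustrative choices of field and polytope, not taken from the paper): over F_5, the unit square [0, 1] x [0, 1] contributes the monomials 1, x, y, xy, and the toric code evaluates them at all points of (F_5^*)^2; its minimum distance can then be found by brute force, which here works out to (q - 2)^2 = 9.

    ```python
    import itertools
    import numpy as np

    q = 5                                                      # the field F_5
    points = list(itertools.product(range(1, q), repeat=2))    # all (q-1)^2 points of (F_5^*)^2
    lattice = [(0, 0), (1, 0), (0, 1), (1, 1)]                 # lattice points of P = [0,1] x [0,1]

    # Generator matrix: row (a, b) evaluates the monomial x**a * y**b at every point.
    G = np.array([[(x**a * y**b) % q for (x, y) in points] for (a, b) in lattice])

    def min_distance(G, q):
        """Brute-force minimum Hamming weight over all nonzero codewords."""
        best = G.shape[1]
        for msg in itertools.product(range(q), repeat=G.shape[0]):
            if any(msg):
                best = min(best, int(np.count_nonzero((np.array(msg) @ G) % q)))
        return best

    print('[n, k, d] =', [G.shape[1], G.shape[0], min_distance(G, q)])   # [16, 4, 9]
    ```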

    An N-Body Solver for Free Mesh Interpolation

    Factorization of the Gaussian RBF kernel is developed for free-mesh interpolation in the flat, polynomial limit corresponding to Taylor expansion and the Vandermonde basis of geometric moments. With this spectral approximation, a top-down octree-scoping of an interpolant is found by recursively decomposing the residual, similar to the work of Driscoll and Heryudono (2007), except that in the current approach the grid is decoupled from the low rank approximation, allowing partial separation of sampling errors (the mesh) from representation errors (the polynomial order). Then, it is possible to demonstrate roughly 5 orders of magnitude improvement in free-mesh interpolation errors for the three-dimensional Franke function, relative to previous benchmarks. As in related work on $N$-body methods for factorization by square root iteration (Challacombe 2015), some emphasis is placed on resolution of the identity.
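    For scale, the dense O(N^3) baseline that such hierarchical factorizations aim to beat is a few lines of numpy; the sketch below interpolates a simple smooth stand-in function (not the Franke benchmark used in the paper) with a Gaussian RBF on scattered nodes, using an illustrative shape parameter.

    ```python
    import numpy as np

    def gaussian_rbf_weights(centers, values, eps):
        """Dense Gaussian RBF interpolation: solve K w = values with
        K[i, j] = exp(-eps**2 * ||c_i - c_j||**2).  O(N^2) storage, O(N^3) solve."""
        d2 = np.sum((centers[:, None, :] - centers[None, :, :])**2, axis=-1)
        return np.linalg.solve(np.exp(-eps**2 * d2), values)

    def gaussian_rbf_eval(x, centers, weights, eps):
        d2 = np.sum((x[:, None, :] - centers[None, :, :])**2, axis=-1)
        return np.exp(-eps**2 * d2) @ weights

    rng = np.random.default_rng(7)
    f = lambda p: np.exp(p[:, 0]) * np.sin(np.pi * p[:, 1])    # smooth stand-in test function

    centers = rng.uniform(0.0, 1.0, size=(150, 2))             # scattered ("free mesh") nodes
    w = gaussian_rbf_weights(centers, f(centers), eps=6.0)

    test = rng.uniform(0.0, 1.0, size=(1000, 2))
    print('max error:', np.max(np.abs(gaussian_rbf_eval(test, centers, w, eps=6.0) - f(test))))
    ```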

    Inheritance Properties and Sum-of-Squares Decomposition of Hankel Tensors: Theory and Algorithms

    In this paper, we show that if a lower-order Hankel tensor is positive semi-definite (or positive definite, or negative semi-definite, or negative definite, or SOS), then its associated higher-order Hankel tensor with the same generating vector, where the higher order is a multiple of the lower order, is also positive semi-definite (or positive definite, or negative semi-definite, or negative definite, or SOS, respectively). Furthermore, in this case, the extremal H-eigenvalues of the higher-order tensor are bounded by the extremal H-eigenvalues of the lower-order tensor, multiplied by some constants. Based on this inheritance property, we give a concrete sum-of-squares decomposition for each strong Hankel tensor. Then we prove the second inheritance property of Hankel tensors, i.e., a Hankel tensor has no negative (or non-positive, or positive, or nonnegative) H-eigenvalues if the associated Hankel matrix of that Hankel tensor has no negative (or non-positive, or positive, or nonnegative, respectively) eigenvalues. In this case, the extremal H-eigenvalues of the Hankel tensor are also bounded by the extremal eigenvalues of the associated Hankel matrix, multiplied by some constants. The third inheritance property of Hankel tensors is raised as a conjecture.
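    A small numerical illustration of the objects involved (a consistency check under stated assumptions, not part of the paper): taking the generating vector to be the moments of a nonnegative discrete measure makes the associated Hankel matrix positive semi-definite, and the order-4 homogeneous form built from the corresponding Hankel tensor then stays nonnegative, consistent with the second inheritance property.

    ```python
    import itertools
    import numpy as np

    m, n = 4, 5                                  # tensor order and dimension
    deg = m * (n - 1)                            # largest index appearing in the tensor

    # Generating vector = moments of a nonnegative discrete measure, so that the
    # associated Hankel matrices are positive semi-definite.
    t = np.linspace(-1.0, 1.0, 12)
    v = np.array([np.sum(0.5 * t**k) for k in range(deg + 1)])

    # Order-4 Hankel tensor A[i1, i2, i3, i4] = v[i1 + i2 + i3 + i4].
    A = np.empty((n,) * m)
    for idx in itertools.product(range(n), repeat=m):
        A[idx] = v[sum(idx)]

    # Associated Hankel matrix H[i, j] = v[i + j] built from the same generating vector.
    H = np.array([[v[i + j] for j in range(deg // 2 + 1)] for i in range(deg // 2 + 1)])
    print('min eigenvalue of H:', np.linalg.eigvalsh(H).min())   # nonnegative up to rounding

    # The homogeneous form A x^4 stays nonnegative at random points.
    rng = np.random.default_rng(0)
    vals = [np.einsum('ijkl,i,j,k,l->', A, x, x, x, x) for x in rng.standard_normal((200, n))]
    print('min of A x^4 over random x:', min(vals))
    ```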