Efficient Approaches for Enclosing the United Solution Set of the Interval Generalized Sylvester Matrix Equation
In this work, we investigate the interval generalized Sylvester matrix
equation and develop techniques for obtaining outer estimates of the
so-called united solution set of this interval system. First, we propose a
modified variant of the Krawczyk operator that reduces the computational
complexity to cubic, compared with the Kronecker product form. We then
propose an iterative technique for enclosing the solution set. Both
approaches are based on spectral decompositions of the midpoints of the
coefficient matrices, and in both we assume that the midpoints of each
relevant pair of coefficient matrices are simultaneously diagonalizable.
Numerical experiments are given to illustrate the performance of the
proposed methods.
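The classical Krawczyk enclosure that such modified operators build on can be illustrated on a plain interval linear system. The sketch below is a textbook Krawczyk-style fixed-point iteration in midpoint-radius arithmetic, not the paper's modified operator, and it omits outward rounding and epsilon-inflation, so it is illustrative rather than rigorous:

```python
import numpy as np

def krawczyk_enclosure(Ac, Ar, bc, br, iters=30):
    """Enclose the united solution set of the interval system [A]x = [b],
    with [A] = [Ac - Ar, Ac + Ar] and [b] = [bc - br, bc + br].
    Plain floating point (no directed rounding): illustrative only."""
    R = np.linalg.inv(Ac)              # approximate inverse of the midpoint
    xt = R @ bc                        # approximate midpoint solution
    # z = R * ([b] - [A] xt) in midpoint-radius form
    zc = R @ (bc - Ac @ xt)
    zr = np.abs(R) @ (br + Ar @ np.abs(xt))
    # C = I - R [A] in midpoint-radius form
    Cc = np.eye(len(bc)) - R @ Ac
    Cr = np.abs(R) @ Ar
    # fixed-point iteration X <- z + C * X, starting from X = z
    xc, xr = zc.copy(), zr.copy()
    for _ in range(iters):
        pc = Cc @ xc
        pr = np.abs(Cc) @ xr + Cr @ (np.abs(xc) + xr)
        xc, xr = zc + pc, zr + pr
    return xt + xc, xr                 # enclosure: center +/- radius
```

Any point solution of a system sampled from the intervals should then lie within the returned center-radius box.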
Low-rank updates and a divide-and-conquer method for linear matrix equations
Linear matrix equations, such as the Sylvester and Lyapunov equations, play
an important role in various applications, including the stability analysis and
dimensionality reduction of linear dynamical control systems and the solution
of partial differential equations. In this work, we present and analyze a new
algorithm, based on tensorized Krylov subspaces, for quickly updating the
solution of such a matrix equation when its coefficients undergo low-rank
changes. We demonstrate how our algorithm can be utilized to accelerate the
Newton method for solving continuous-time algebraic Riccati equations. Our
algorithm also forms the basis of a new divide-and-conquer approach for linear
matrix equations with coefficients that feature hierarchical low-rank
structure, such as HODLR, HSS, and banded matrices. Numerical experiments
demonstrate the advantages of divide-and-conquer over existing approaches, in
terms of computational time and memory consumption.
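The structural fact such update algorithms exploit can be checked directly with dense solvers: after a low-rank change of a coefficient, the correction to the solution itself solves a Sylvester equation with a low-rank right-hand side. A small sketch using SciPy's dense `solve_sylvester` (illustrative only; the dense solves done here are exactly what a Krylov-based update method avoids):

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
n = 60
# diagonally dominant coefficients keep the equations well conditioned
A = rng.standard_normal((n, n)) + n * np.eye(n)
B = rng.standard_normal((n, n)) + n * np.eye(n)
C = rng.standard_normal((n, n))

X = solve_sylvester(A, B, C)            # solves A X + X B = C

# rank-2 update of the coefficient A
U = rng.standard_normal((n, 2))
V = rng.standard_normal((n, 2))
Xnew = solve_sylvester(A + U @ V.T, B, C)

# the correction dX = Xnew - X satisfies
#   A dX + dX B = -U (V.T Xnew),
# a Sylvester equation whose right-hand side has rank <= 2
dX = Xnew - X
rhs = -U @ (V.T @ Xnew)
residual = np.linalg.norm(A @ dX + dX @ B - rhs)
```

Since the right-hand side of the correction equation is low-rank, the correction can be computed cheaply (e.g. in a Krylov subspace) instead of re-solving the full equation.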
Over-constrained Weierstrass iteration and the nearest consistent system
We propose a generalization of the Weierstrass iteration for over-constrained
systems of equations and we prove that the proposed method is the Gauss-Newton
iteration to find the nearest system which has at least a prescribed number of common roots and
which is obtained via a perturbation of prescribed structure. In the univariate
case we show the connection of our method to the optimization problem
formulated by Karmarkar and Lakshman for the nearest GCD. In the multivariate
case we generalize the expressions of Karmarkar and Lakshman, and give
explicitly several iteration functions to compute the optimum.
The arithmetic complexity of the iterations is detailed.
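For reference, the classical square-case Weierstrass iteration that the paper generalizes updates all root approximations of a single univariate polynomial simultaneously. A minimal sketch in the standard Durand-Kerner form (not the over-constrained generalization proposed in the paper):

```python
import numpy as np

def weierstrass(coeffs, iters=200):
    """Classical Weierstrass (Durand-Kerner) iteration for all roots of a
    monic univariate polynomial; coeffs ordered highest degree first."""
    n = len(coeffs) - 1
    # standard complex seeds: pairwise distinct and non-real
    z = (0.4 + 0.9j) ** np.arange(1, n + 1)
    for _ in range(iters):
        for i in range(n):
            # Weierstrass correction: p(z_i) / prod_{j != i} (z_i - z_j)
            w = np.polyval(coeffs, z[i]) / np.prod(z[i] - np.delete(z, i))
            z[i] -= w
    return z

# roots of x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
roots = np.sort(weierstrass([1.0, -6.0, 11.0, -6.0]).real)
```

Each correction term is the Newton step for one factor of the Weierstrass product, which is what makes the Gauss-Newton interpretation of the over-constrained variant natural.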
Computing the common zeros of two bivariate functions via Bézout resultants
The common zeros of two bivariate functions can be computed by finding the common zeros of their polynomial interpolants expressed in a tensor Chebyshev basis. From here we develop a bivariate rootfinding algorithm based on the hidden variable resultant method and Bézout matrices with polynomial entries. Using techniques including domain subdivision, Bézoutian regularization and local refinement we are able to reliably and accurately compute the simple common zeros of two smooth functions with polynomial interpolants of very high degree (≥ 1000). We analyze the resultant method and its conditioning by noting that the Bézout matrices are matrix polynomials. Our robust algorithm is implemented in the roots command in Chebfun2, a software package written in object-oriented MATLAB for computing with bivariate functions.
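The univariate building block is easy to state: the Bézout matrix of two polynomials f and g collects the coefficients of (f(x)g(y) - f(y)g(x))/(x - y), and it is singular exactly when f and g share a root; the hidden variable method promotes its entries to polynomials in the remaining variable. A minimal univariate sketch in the monomial basis (the paper works in a Chebyshev basis, which this does not attempt):

```python
import numpy as np

def bezout_matrix(u, v):
    """Bezout matrix of two polynomials given by coefficient arrays u, v
    (lowest degree first, zero-padded to a common length n + 1).
    B[i, j] is the coefficient of x^i y^j in
    (f(x) g(y) - f(y) g(x)) / (x - y)."""
    n = len(u) - 1
    # numerator N[p, q] = coefficient of x^p y^q in f(x) g(y) - f(y) g(x)
    N = np.outer(u, v) - np.outer(v, u)
    # divide by (x - y), using N[p, q] = B[p-1, q] - B[p, q-1]
    B = np.zeros((n, n))
    for i in range(n - 1, -1, -1):
        for q in range(n):
            B[i, q] = N[i + 1, q]
            if i + 1 < n and q >= 1:
                B[i, q] += B[i + 1, q - 1]
    return B

# f = (x-1)(x-2) and g = (x-1)(x-3) share the root x = 1, so det B = 0;
# replacing g by (x-4)(x-5) removes the common root and det B != 0
B_shared = bezout_matrix([2.0, -3.0, 1.0], [3.0, -4.0, 1.0])
B_coprime = bezout_matrix([2.0, -3.0, 1.0], [20.0, -9.0, 1.0])
```

The determinant of the Bézout matrix equals the resultant up to a leading-coefficient factor, which is why its (near-)singularity detects (near-)common roots.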