GPGCD: An iterative method for calculating approximate GCD of univariate polynomials
We present an iterative algorithm for calculating the approximate greatest common divisor (GCD) of univariate polynomials with real or complex coefficients. For a given pair of polynomials and a degree, our algorithm finds a pair of polynomials which has a GCD of the given degree and whose coefficients are perturbed from those of the original inputs, making the perturbations as small as possible, along with the GCD. The approximate GCD problem is transferred to a constrained minimization problem, which is then solved iteratively with the so-called modified Newton method, a generalization of the gradient-projection method. We demonstrate that, in some test cases, our algorithm calculates approximate GCDs with perturbations as small as those computed by a method based on the structured total least norm (STLN) method and by the UVGCD method, while running up to approximately 30 and 10 times faster, respectively, than their implementations. We also show that our algorithm properly handles some ill-conditioned polynomials whose GCD has a small or large leading coefficient.
Comment: Preliminary versions have been presented as doi:10.1145/1576702.1576750 and arXiv:1007.183
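The degree test underlying approximate-GCD computations can be illustrated with the classical Sylvester-matrix criterion: a tiny smallest singular value of the Sylvester matrix of two polynomials signals a nearby pair with a nontrivial common divisor. The sketch below is this standard criterion only, not the GPGCD iteration itself; the polynomials are illustrative.

```python
import numpy as np

def sylvester_matrix(f, g):
    """Sylvester matrix of two polynomials given as coefficient
    vectors, highest degree first."""
    m, n = len(f) - 1, len(g) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):              # n shifted copies of f
        S[i, i:i + m + 1] = f
    for i in range(m):              # m shifted copies of g
        S[n + i, i:i + n + 1] = g
    return S

# f = (x - 1)(x - 2), g = (x - 1)(x - 3): exact common root x = 1.
f = np.array([1.0, -3.0, 2.0])
g = np.array([1.0, -4.0, 3.0])

# A common root makes the Sylvester matrix singular, so the smallest
# singular value is (numerically) zero; for an *approximate* common
# root it would be small but nonzero.
sigma = np.linalg.svd(sylvester_matrix(f, g), compute_uv=False)
print(sigma[-1])
```

Perturbing the coefficients of f and g slightly moves the smallest singular value away from zero in proportion to the perturbation, which is why minimizing the perturbation subject to a prescribed GCD degree is naturally a constrained optimization problem.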
From low-rank approximation to an efficient rational Krylov subspace method for the Lyapunov equation
We propose a new method for the approximate solution of the Lyapunov equation with a rank- right-hand side, based on extended rational Krylov subspace approximation with adaptively computed shifts. The shift selection is obtained from the connection between the Lyapunov equation, the solution of systems of linear ODEs, and the alternating least squares method for low-rank approximation. Numerical experiments confirm the effectiveness of our approach.
Comment: 17 pages, 1 figure
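The low-rank structure that such Krylov subspace methods exploit can be observed directly: for a stable coefficient matrix and a rank-1 right-hand side, the singular values of the Lyapunov solution decay rapidly. A small sketch using SciPy's dense solver as a reference, illustrating the property rather than the proposed rational Krylov method; the matrix (a 1D Laplacian) and sizes are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Stable symmetric A: the (negative definite) 1D Laplacian stencil.
n = 100
A = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))

# Rank-1 right-hand side b b^T.
b = np.random.default_rng(0).standard_normal((n, 1))

# Solve A X + X A^T = -b b^T (X is then positive semidefinite).
X = solve_continuous_lyapunov(A, -b @ b.T)

# Singular values of X decay fast: the numerical rank is far below n,
# which is exactly what low-rank Krylov methods are built to exploit.
s = np.linalg.svd(X, compute_uv=False)
numerical_rank = int(np.sum(s > 1e-8 * s[0]))
print(numerical_rank, "out of", n)
```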
Low-rank updates and a divide-and-conquer method for linear matrix equations
Linear matrix equations, such as the Sylvester and Lyapunov equations, play
an important role in various applications, including the stability analysis and
dimensionality reduction of linear dynamical control systems and the solution
of partial differential equations. In this work, we present and analyze a new
algorithm, based on tensorized Krylov subspaces, for quickly updating the
solution of such a matrix equation when its coefficients undergo low-rank
changes. We demonstrate how our algorithm can be utilized to accelerate the
Newton method for solving continuous-time algebraic Riccati equations. Our
algorithm also forms the basis of a new divide-and-conquer approach for linear
matrix equations with coefficients that feature hierarchical low-rank
structure, such as HODLR, HSS, and banded matrices. Numerical experiments
demonstrate the advantages of divide-and-conquer over existing approaches, in
terms of computational time and memory consumption.
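The observation behind such update algorithms can be checked numerically: if a coefficient of a Sylvester equation is changed by a rank-1 term, the correction to the solution itself solves a Sylvester equation with a rank-1 right-hand side, which is where low-rank solvers apply. A sketch with SciPy's dense solver; the paper's tensorized Krylov machinery is not reproduced here, and the matrix sizes and spectra are illustrative assumptions chosen to keep the equation well posed.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(1)
n = 50
A = -np.diag(np.linspace(1.0, 3.0, n))   # spectrum in [-3, -1]
B = np.diag(np.linspace(-0.5, 0.5, n))   # spectrum in [-0.5, 0.5]
C = rng.standard_normal((n, n))
X = solve_sylvester(A, B, C)             # reference: A X + X B = C

# Rank-1 change A -> A + u v^T, scaled so spectra stay separated.
u = rng.standard_normal((n, 1)); u *= 0.2 / np.linalg.norm(u)
v = rng.standard_normal((n, 1)); v /= np.linalg.norm(v)
A2 = A + u @ v.T

# The correction D = X_new - X satisfies
#   A2 D + D B = -u (v^T X),
# a Sylvester equation with a rank-1 right-hand side.
D = solve_sylvester(A2, B, -u @ (v.T @ X))

X_new = solve_sylvester(A2, B, C)        # recomputed for comparison
rel_err = np.linalg.norm((X + D) - X_new) / np.linalg.norm(X_new)
print(rel_err)
```

Because the correction equation has a rank-1 right-hand side, its solution is itself numerically low-rank, so updating is much cheaper than re-solving when low-rank solvers replace the dense calls above.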
Matrix-equation-based strategies for convection-diffusion equations
We are interested in the numerical solution of nonsymmetric linear systems
arising from the discretization of convection-diffusion partial differential
equations with separable coefficients and dominant convection. Preconditioners
based on the matrix equation formulation of the problem are proposed, which
naturally approximate the original discretized problem. For certain types of
convection coefficients, we show that the explicit solution of the matrix
equation can effectively replace the linear system solution. Numerical
experiments with data stemming from two and three dimensional problems are
reported, illustrating the potential of the proposed methodology.
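For a separable, constant-coefficient model problem the matrix equation formulation is easy to exhibit: centered finite differences on a tensor grid turn the 2D convection-diffusion operator into a Sylvester equation T1 U + U T2^T = F that is equivalent to the usual Kronecker-form linear system. A sketch of this equivalence; the grid size, convection speed w, and right-hand side are illustrative assumptions, not the paper's setup or preconditioners.

```python
import numpy as np
from scipy.linalg import solve_sylvester

n = 20
h = 1.0 / (n + 1)
I = np.eye(n)

# 1D centered differences: diffusion -u'' and convection w u'.
D2 = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
      - np.diag(np.ones(n - 1), -1)) / h**2
D1 = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * h)
w = 50.0                         # convection dominates diffusion
T1 = D2 + w * D1                 # x-direction operator
T2 = D2                          # y-direction operator

F = np.ones((n, n))              # right-hand side on the grid

# Matrix-equation form: T1 U + U T2^T = F.
U = solve_sylvester(T1, T2.T, F)
res = np.linalg.norm(T1 @ U + U @ T2.T - F) / np.linalg.norm(F)

# Equivalent Kronecker (vectorized) linear system, column-major vec:
# (I kron T1 + T2 kron I) vec(U) = vec(F).
K = np.kron(I, T1) + np.kron(T2, I)
kron_res = np.linalg.norm(K @ U.flatten(order="F")
                          - F.flatten(order="F")) / np.linalg.norm(F)
print(res, kron_res)
```

The Kronecker system has n^2 unknowns, while the Sylvester form works with n x n matrices, which is what makes matrix-equation solvers and preconditioners attractive for such discretizations.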