Krylov subspace techniques for model reduction and the solution of linear matrix equations
This thesis focuses on the model reduction of linear systems and the solution of large-scale linear matrix equations using computationally efficient Krylov subspace techniques.
Most approaches for model reduction involve the computation and factorization of large
matrices. However, Krylov subspace techniques have the advantage that they involve only matrix-vector multiplications in the large dimension, which makes them a better choice for model reduction of large-scale systems. The standard Arnoldi/Lanczos algorithms are widely used Krylov techniques that compute orthogonal bases for Krylov subspaces and, by using a projection process onto the Krylov subspace, produce a reduced order model that
interpolates the actual system and its derivatives at infinity. An extension is the rational
Arnoldi/Lanczos algorithm, which computes orthogonal bases for the union of Krylov
subspaces and results in a reduced order model that interpolates the actual system and
its derivatives at a predefined set of interpolation points. This thesis concentrates on the
rational Krylov method for model reduction.
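The standard Arnoldi process referred to above can be sketched in a few lines of NumPy. This is an illustrative implementation of the generic algorithm, not code from the thesis; the function name and variables are our own.

```python
import numpy as np

def arnoldi(A, b, m):
    """m steps of Arnoldi: orthonormal basis V of the Krylov subspace
    K_m(A, b) = span{b, Ab, ..., A^(m-1) b} and the Hessenberg matrix H
    satisfying A V[:, :m] = V H in exact arithmetic."""
    n = b.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]                # only matrix-vector products in dimension n
        for i in range(j + 1):         # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H
```

Projecting the system matrices onto span(V[:, :m]), e.g. A_r = V[:, :m].T @ A @ V[:, :m], then gives a reduced order model that matches moments of the transfer function at infinity.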
In the rational Krylov method an important issue is the selection of interpolation points, for which various techniques with different selection criteria are available in the literature.
One of these techniques selects the interpolation points such that the approximation
satisfies the necessary conditions for H2 optimal approximation. However, it is possible
to have more than one approximation for which the necessary optimality conditions are
satisfied. In this thesis, some conditions on the interpolation points are derived that
enable us to compute all approximations that satisfy the necessary optimality conditions
and hence identify the global minimizer to the H2 optimal model reduction problem.
It is shown that for an H2 optimal approximation that interpolates at m interpolation
points, the interpolation points are the simultaneous solution of m multivariate polynomial
equations in m unknowns. For a first order approximation, this condition reduces to the computation of the zeros of a linear system; in the case of a second order approximation, it requires the simultaneous solution of two bivariate polynomial equations.
These two cases are analyzed in detail and it is shown that a global minimizer to the
H2 optimal model reduction problem can be identified. Furthermore, a computationally
efficient iterative algorithm is also proposed for the H2 optimal model reduction problem
that converges to a local minimizer.
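The thesis's own iterative algorithm is not reproduced here. As an illustration of how such a fixed-point search for interpolation points works, the sketch below follows the well-known IRKA-style iteration for single-input single-output systems: build rational Krylov bases at the current points, project, and replace the points with the mirrored poles of the reduced model; a fixed point satisfies the first-order necessary H2 optimality conditions. Function and variable names are ours.

```python
import numpy as np

def h2_fixed_point(A, B, C, sigma, max_iter=100, tol=1e-10):
    """IRKA-style fixed-point iteration on interpolation points sigma
    (illustrative sketch only, not the thesis's algorithm)."""
    n = A.shape[0]
    I = np.eye(n)
    sigma = np.sort_complex(np.asarray(sigma, dtype=complex))
    for _ in range(max_iter):
        # rational Krylov bases at the current interpolation points
        V = np.column_stack([np.linalg.solve(s * I - A, B) for s in sigma])
        W = np.column_stack([np.linalg.solve(s * I - A.T, C) for s in sigma])
        V, _ = np.linalg.qr(V)
        W, _ = np.linalg.qr(W)
        E = W.conj().T @ V
        Ar = np.linalg.solve(E, W.conj().T @ A @ V)   # oblique projection
        new_sigma = np.sort_complex(-np.linalg.eigvals(Ar))
        if np.max(np.abs(new_sigma - sigma)) < tol:
            return new_sigma
        sigma = new_sigma
    return sigma
```

For a stable system the converged points lie in the open right half-plane, mirroring the reduced model's poles.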
In addition to affecting the accuracy of the rational interpolating approximation, an arbitrary choice of interpolation points may result in a reduced order model that loses useful properties of the actual system, such as stability, passivity, minimum-phase and bounded-real character, as well as its structure. It has recently been shown in the literature that rational interpolating approximations can be parameterized in terms of a free low-dimensional parameter in order to preserve the stability of the actual system in the reduced order approximation. This idea is extended in this thesis
to preserve other properties and combinations of them. The concept of parameterization is also applied to the minimal residual method, the two-sided rational Arnoldi method,
and H2 optimal approximation in order to improve the accuracy of the interpolating
approximation.
The rational Krylov method has also been used in the literature to compute low rank
approximate solutions of the Sylvester and Lyapunov equations, which are useful for
model reduction. The approach involves the computation of two sets of basis vectors in
which each vector is orthogonalized with all previous vectors. This orthogonalization
becomes computationally expensive and requires high storage capacity as the number of
basis vectors increases. In this thesis, a restart scheme is proposed which restarts without
requiring the new vectors to be orthogonal to the previous vectors. Instead, a set of two new orthogonal basis vectors is computed. This reduces the computational burden of orthogonalization and the storage requirement. It is shown that, in the case of Lyapunov equations, the approximate solution obtained through the restart scheme approaches the actual solution monotonically.
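The low-rank projection framework these methods share can be sketched as follows. This illustrative code uses a plain polynomial Krylov basis (rather than a rational one) and SciPy's dense solver for the small projected Lyapunov equation; it does not implement the proposed restart scheme, and all names are ours.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lyap_lowrank(A, b, m):
    """Galerkin projection for A X + X A^T + b b^T = 0:
    build an orthonormal basis V of K_m(A, b), solve the small projected
    Lyapunov equation, and lift back; the result has rank at most m."""
    n = b.size
    K = np.empty((n, m))
    w = b.copy()
    for j in range(m):
        K[:, j] = w
        w = A @ w                      # matrix-vector products only
    V, _ = np.linalg.qr(K)             # orthonormal Krylov basis
    Am = V.T @ A @ V
    bm = V.T @ b
    Ym = solve_continuous_lyapunov(Am, -np.outer(bm, bm))
    return V @ Ym @ V.T
```

For m equal to the full dimension the projected problem coincides with the original one, so the sketch reproduces the exact solution; for small m it returns the low-rank Galerkin approximation.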
Rational interpolation: Modified rational Arnoldi algorithm and Arnoldi-like equations
Published version
Short-recurrence Krylov subspace methods for the overlap Dirac operator at nonzero chemical potential
The overlap operator in lattice QCD requires the computation of the sign
function of a matrix, which is non-Hermitian in the presence of a quark
chemical potential. In previous work we introduced an Arnoldi-based Krylov
subspace approximation, which uses long recurrences. Even after the deflation
of critical eigenvalues, the low efficiency of the method restricts its
application to small lattices. Here we propose new short-recurrence methods
which strongly enhance the efficiency of the computational method. Using
rational approximations to the sign function we introduce two variants, based
on the restarted Arnoldi process and on the two-sided Lanczos method,
respectively, which become very efficient when combined with multishift
solvers. Alternatively, in the variant based on the two-sided Lanczos method
the sign function can be evaluated directly. We present numerical results which
compare the efficiencies of a restarted Arnoldi-based method and the direct
two-sided Lanczos approximation for various lattice sizes. We also show that
our new methods gain substantially when combined with deflation.
Comment: 14 pages, 4 figures; as published in Comput. Phys. Commun.; modified data in Figs. 2, 3 and 4 for improved implementation of the FOM algorithm; extended discussion of the algorithmic cost.
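The long-recurrence baseline this abstract starts from, approximating sign(A)b through an Arnoldi-generated Krylov subspace, can be sketched generically as follows. This is a standard f(A)b approximation using SciPy's matrix sign function on the small Hessenberg matrix, not the paper's implementation; there is no deflation or breakdown handling, and the names are ours.

```python
import numpy as np
from scipy.linalg import signm

def arnoldi_sign(A, b, m):
    """Arnoldi (long-recurrence) approximation of sign(A) b:
    sign(A) b ~ ||b|| * V_m sign(H_m) e_1.  Illustrative sketch only;
    no breakdown handling."""
    n = b.size
    V = np.zeros((n, m), dtype=complex)
    H = np.zeros((m, m), dtype=complex)
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = np.vdot(V[:, i], w)
            w = w - H[i, j] * V[:, i]
        if j + 1 < m:
            h = np.linalg.norm(w)
            H[j + 1, j] = h
            V[:, j + 1] = w / h
    return beta * V @ signm(H)[:, 0]           # signm(H) @ e_1, lifted by V
```

The cost is one matrix-vector product per step in the large dimension plus a dense sign-function evaluation in the small dimension m.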
On Krylov projection methods and Tikhonov regularization
In the framework of large-scale linear discrete ill-posed problems, Krylov projection methods represent an essential tool since their development, which dates back to the early 1950s. In recent years, the use of these methods in a hybrid fashion or to solve Tikhonov regularized problems has received great attention, especially for problems involving the restoration of digital images. In this paper we review the fundamental Krylov-Tikhonov techniques based on Lanczos bidiagonalization and the Arnoldi algorithms. Moreover, we study the use of the unsymmetric Lanczos process that, to the best of our knowledge, has only marginally been considered in this setting. Many numerical experiments and comparisons of different methods are presented.
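The Lanczos-bidiagonalization-based Krylov-Tikhonov scheme reviewed in this paper can be sketched generically: run k Golub-Kahan bidiagonalization steps started from b, then apply Tikhonov regularization to the small projected least-squares problem. This is an illustrative implementation of the textbook scheme with our own names, not the authors' code.

```python
import numpy as np

def lanczos_bidiag_tikhonov(A, b, k, lam):
    """Hybrid Krylov-Tikhonov sketch: Golub-Kahan (Lanczos) bidiagonalization
    A V_k = U_{k+1} B_k, then solve the regularized projected problem
    min ||B_k y - beta e_1||^2 + lam^2 ||y||^2 and lift x_k = V_k y."""
    mdim, n = A.shape
    U = np.zeros((mdim, k + 1)); V = np.zeros((n, k))
    B = np.zeros((k + 1, k))                  # lower bidiagonal
    beta = np.linalg.norm(b)
    U[:, 0] = b / beta
    for j in range(k):
        v = A.T @ U[:, j] - (B[j, j - 1] * V[:, j - 1] if j > 0 else 0)
        v -= V[:, :j] @ (V[:, :j].T @ v)      # reorthogonalize (cheap here)
        alpha = np.linalg.norm(v); V[:, j] = v / alpha; B[j, j] = alpha
        u = A @ V[:, j] - alpha * U[:, j]
        u -= U[:, :j + 1] @ (U[:, :j + 1].T @ u)
        betaj = np.linalg.norm(u); U[:, j + 1] = u / betaj; B[j + 1, j] = betaj
    rhs = np.zeros(k + 1); rhs[0] = beta
    # regularized normal equations of the small problem
    y = np.linalg.solve(B.T @ B + lam**2 * np.eye(k), B.T @ rhs)
    return V @ y
```

In a hybrid method the regularization parameter lam is chosen adaptively on the projected problem at each step; here it is simply passed in.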
Arnoldi versus Nonsymmetric Lanczos Algorithms for Solving Nonsymmetric Matrix Eigenvalue Problems
We obtain several results which may be useful in determining the
convergence behavior of eigenvalue algorithms based upon Arnoldi and
nonsymmetric Lanczos recursions. We derive a relationship between
nonsymmetric Lanczos eigenvalue procedures and Arnoldi eigenvalue
procedures. We demonstrate that the Arnoldi recursions preserve a
property which characterizes normal matrices, and that if we could
determine the appropriate starting vectors, we could mimic the
nonsymmetric Lanczos eigenvalue convergence on a general diagonalizable
matrix by its convergence on related normal matrices. Using a unitary
equivalence for each of these Krylov subspace methods, we define sets of
test problems where we can easily vary certain spectral properties of the
matrices. We use these and other test problems to examine the behavior of
an Arnoldi and of a nonsymmetric Lanczos procedure.
(Also cross-referenced as UMIACS-TR-95-123)