On the ADI method for the Sylvester equation and the optimal-H2 points
The ADI iteration is closely related to the rational Krylov projection
methods for constructing low rank approximations to the solution of the
Sylvester equation. In this paper we show that the ADI and rational Krylov
approximations are in fact equivalent when a special choice of shifts is
employed in both methods. We will call these shifts pseudo H2-optimal shifts. These shifts are
also optimal in the sense that for the Lyapunov equation, they yield a residual
which is orthogonal to the rational Krylov projection subspace. Via several
examples, we show that the pseudo H2-optimal shifts consistently yield nearly
optimal low rank approximations to the solutions of Lyapunov equations.
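The ADI side of this equivalence is easy to sketch in finite dimensions. Below is a minimal low-rank ADI iteration for the Lyapunov equation A X + X A^T + B B^T = 0 with A stable, assuming real negative shifts; the function name and interface are illustrative, not taken from the paper.

```python
import numpy as np

def lr_adi(A, B, shifts):
    """Low-rank ADI for A X + X A^T + B B^T = 0 with A stable.

    Returns Z such that X ~= Z Z^T. Real negative shifts are assumed
    (an illustrative sketch; the names are ours, not the paper's).
    """
    n = A.shape[0]
    I = np.eye(n)
    p = shifts[0]
    # first ADI basis block
    V = np.sqrt(-2 * p) * np.linalg.solve(A + p * I, B)
    Z = V
    for j in range(1, len(shifts)):
        p_prev, p = shifts[j - 1], shifts[j]
        # Li-White update: reuse the previous block, one solve per shift
        V = np.sqrt(p / p_prev) * (V - (p + p_prev) * np.linalg.solve(A + p * I, V))
        Z = np.hstack([Z, V])
    return Z
```

With shifts equal to the eigenvalues of a diagonal stable A, the iteration reproduces the exact solution, which makes the sketch easy to sanity-check.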
Rational Krylov for Stieltjes matrix functions: convergence and pole selection
Evaluating the action of a matrix function on a vector, that is, computing
f(A)v, is a ubiquitous task in applications. When A is large, one
usually relies on Krylov projection methods. In this paper, we provide
effective choices for the poles of the rational Krylov method for approximating
f(A)v when f is either Cauchy-Stieltjes or Laplace-Stieltjes (or,
equivalently, completely monotonic) and A is a positive definite
matrix. Relying on the same tools used to analyze the generic situation, we
then focus on the case in which A is a Kronecker sum and the vector v is
obtained by vectorizing a low-rank matrix; this finds application, for instance,
in solving fractional diffusion equations on two-dimensional tensor grids. We
see how to leverage tensorized Krylov subspaces to exploit the Kronecker
structure and we introduce an error analysis for the numerical approximation of
f(A)v. Pole selection strategies with explicit convergence bounds are also
given in this case.
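As an illustration of the projection idea (not the paper's code), here is a bare-bones rational Krylov sketch for a Cauchy-Stieltjes function such as f(z) = z^{-1/2}, with A symmetric positive definite and real negative poles; all names are ours.

```python
import numpy as np

def rational_krylov_fAv(A, v, poles, f):
    """Approximate f(A) v by projection onto a rational Krylov-type space
    built from the given poles. Illustrative sketch with real negative
    poles and symmetric positive definite A; not the paper's method.
    """
    n = A.shape[0]
    V = v[:, None] / np.linalg.norm(v)
    for xi in poles:
        # one shifted solve per pole, applied to the latest basis vector
        w = np.linalg.solve(A - xi * np.eye(n), V[:, -1])
        w -= V @ (V.T @ w)          # Gram-Schmidt
        w -= V @ (V.T @ w)          # reorthogonalize for stability
        V = np.hstack([V, w[:, None] / np.linalg.norm(w)])
    H = V.T @ A @ V                 # projected (small) matrix
    lam, U = np.linalg.eigh((H + H.T) / 2)
    fH = U @ np.diag(f(lam)) @ U.T  # f evaluated on the small matrix
    return V @ (fH @ (V.T @ v))
```

When the basis exhausts the whole space, the projected evaluation agrees with f(A)v up to rounding, which gives a simple correctness check.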
Rational Krylov and ADI iteration for infinite size quasi-Toeplitz matrix equations
We consider a class of linear matrix equations involving semi-infinite
matrices which have a quasi-Toeplitz structure. These equations arise in
different settings, mostly connected with PDEs or the study of Markov chains
such as random walks on two-dimensional lattices. We present the theory
justifying the existence of solutions in an appropriate Banach algebra that is
computationally treatable, and we propose several methods for their solution.
We show how to adapt the ADI iteration to this particular infinite dimensional
setting, and how to construct rational Krylov methods. Convergence theory is
discussed, and numerical experiments validate the proposed approaches.
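In finite dimensions, the ADI iteration being adapted can be sketched for the Sylvester equation A X + X B = C as follows; this toy dense version with given shift pairs does not model the semi-infinite quasi-Toeplitz setting, and the names are ours.

```python
import numpy as np

def adi_sylvester(A, B, C, p, q, X0=None):
    """ADI iteration for the Sylvester equation A X + X B = C with shift
    pairs (p_j, q_j). Dense finite-dimensional sketch only; the paper works
    with semi-infinite quasi-Toeplitz matrices, which this does not model.
    """
    n, m = A.shape[0], B.shape[0]
    X = np.zeros((n, m)) if X0 is None else X0
    In, Im = np.eye(n), np.eye(m)
    for pj, qj in zip(p, q):
        # half step in A: (A + pj I) X_half = C - X (B - pj I)
        Xh = np.linalg.solve(A + pj * In, C - X @ (B - pj * Im))
        # half step in B: X (B + qj I) = C - (A - qj I) X_half
        X = np.linalg.solve((B + qj * Im).T, (C - (A - qj * In) @ Xh).T).T
    return X
```

For diagonal test matrices, taking the p shifts over the eigenvalues of B and the q shifts over those of A annihilates the ADI error factors, so the iterate becomes exact, which is a convenient check of the recurrence.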
Interpolatory methods for model reduction of multi-input/multi-output systems
We develop here a computationally effective approach for producing
high-quality H∞-approximations to large-scale linear
dynamical systems having multiple inputs and multiple outputs (MIMO). We extend
an approach for model reduction introduced by Flagg,
Beattie, and Gugercin for the single-input/single-output (SISO) setting, which
combined ideas originating in interpolatory H2-optimal model
reduction with complex Chebyshev approximation. Retaining this framework, our
approach to the MIMO problem has its principal computational cost dominated by
(sparse) linear solves, and so it can remain an effective strategy in many
large-scale settings. We are able to avoid computationally demanding
H∞ norm calculations that are normally required to monitor
progress within each optimization cycle through the use of "data-driven"
rational approximations that are built upon previously computed function
samples. Numerical examples are included that illustrate our approach. We
produce high-fidelity reduced models having consistently better H∞
performance than models produced via balanced truncation;
these models often are as good as (and occasionally better than) models
produced using optimal Hankel norm approximation as well. In all cases
considered, the method described here produces reduced models at far lower cost
than is possible with either balanced truncation or optimal Hankel norm
approximation.
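For intuition, the interpolatory projection underlying such methods can be sketched in the SISO case: two-sided bases built from shifted linear solves make the reduced transfer function match the full one at each shift. This is plain rational interpolation under our own naming, not the paper's MIMO H∞ algorithm.

```python
import numpy as np

def interpolatory_rom(A, b, c, sigmas):
    """Reduced model by two-sided (Petrov-Galerkin) projection that
    interpolates the SISO transfer function H(s) = c^T (sI - A)^{-1} b
    at the given shifts. Illustrative sketch; names are ours."""
    n = A.shape[0]
    I = np.eye(n)
    # one sparse-solvable shifted system per shift, per side
    V = np.column_stack([np.linalg.solve(s * I - A, b) for s in sigmas])
    W = np.column_stack([np.linalg.solve(s * I - A.T, c) for s in sigmas])
    E = W.T @ V
    Ar = np.linalg.solve(E, W.T @ A @ V)   # reduced dynamics
    br = np.linalg.solve(E, W.T @ b)       # reduced input map
    cr = V.T @ c                           # reduced output map
    return Ar, br, cr
```

The reduced transfer function cr^T (sI - Ar)^{-1} br then agrees with H(s) at each shift (and, by the standard two-sided moment-matching result, in the first derivative as well).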
Optimality properties of Galerkin and Petrov-Galerkin methods for linear matrix equations
Galerkin and Petrov-Galerkin methods are some of the most successful solution
procedures in numerical analysis. Their popularity is mainly due to the
optimality properties of their approximate solution. We show that these
features carry over to the (Petrov-)Galerkin methods applied for the solution
of linear matrix equations. Some novel considerations about the use of Galerkin
and Petrov-Galerkin schemes in the numerical treatment of general linear matrix
equations are expounded and the use of constrained minimization techniques in
the Petrov-Galerkin framework is proposed.
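For the Lyapunov special case, the Galerkin orthogonality property can be checked directly: project A X + X A^T + B B^T = 0 onto a block Krylov space, solve the small projected equation, and the residual satisfies V^T R V = 0. A minimal numpy sketch under our own naming, with a dense Kronecker solve for the small equation:

```python
import numpy as np

def galerkin_lyapunov(A, B, m):
    """Galerkin solution of A X + X A^T + B B^T = 0 on the block Krylov
    space span{B, AB, ..., A^{m-1}B}; by construction the residual is
    orthogonal to the space, V^T R V = 0. Illustrative sketch only."""
    K = np.hstack([np.linalg.matrix_power(A, j) @ B for j in range(m)])
    V, _ = np.linalg.qr(K)                  # orthonormal basis
    Am, Bm = V.T @ A @ V, V.T @ B
    r = Am.shape[0]
    # small equation Am Y + Y Am^T = -Bm Bm^T, solved in Kronecker form
    M = np.kron(Am, np.eye(r)) + np.kron(np.eye(r), Am)
    Y = np.linalg.solve(M, -(Bm @ Bm.T).ravel()).reshape(r, r)
    X = V @ Y @ V.T
    R = A @ X + X @ A.T + B @ B.T           # full residual
    return X, np.linalg.norm(V.T @ R @ V)   # Galerkin orthogonality
```

The returned orthogonality measure is zero up to rounding regardless of how accurate X itself is, which is exactly the optimality feature the abstract refers to.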
Palitta D.; Simoncini V.