    7. Minisymposium on Gauss-type Quadrature Rules: Theory and Applications

    Computing approximate extended Krylov subspaces without explicit inversion

    No full text
    Extended Krylov subspaces have proven useful in many applications, such as the approximation of matrix functions or the solution of matrix equations. It will be shown that, under some assumptions, extended Krylov subspaces can be retrieved without any explicit inversion or system solve. Instead, the necessary computations of A^{-1}v are carried out implicitly, using the information contained in an enlarged standard Krylov subspace.
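    The core idea, extracting A^{-1}v information from a plain Krylov subspace, can be illustrated with a Galerkin projection. The sketch below only illustrates that idea under simplifying assumptions and is not the implicit procedure of this work; the helper arnoldi and the test sizes n, m are hypothetical.

```python
import numpy as np

def arnoldi(A, v, m):
    """Arnoldi iteration: orthonormal basis V of K_m(A, v) and H = V^T A V (Hessenberg)."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt against previous vectors
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:           # breakdown: the subspace is invariant
            return V[:, :j + 1], H[:j + 1, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

# Approximate A^{-1} v using only the standard Krylov subspace: no solve with the
# large matrix A, only with the small projected matrix H (Galerkin/FOM approximation).
rng = np.random.default_rng(0)
n, m = 200, 40                            # hypothetical problem and subspace sizes
A = np.diag(np.linspace(1.0, 10.0, n)) + 0.01 * rng.standard_normal((n, n))
v = rng.standard_normal(n)

V, H = arnoldi(A, v, m)
e1 = np.zeros(H.shape[1])
e1[0] = np.linalg.norm(v)
x_approx = V @ np.linalg.solve(H, e1)     # approximation to A^{-1} v
print("relative residual:", np.linalg.norm(A @ x_approx - v) / np.linalg.norm(v))
```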

    Inversion free rational Krylov

    No full text
    It will be shown that, under some assumptions, extended Krylov subspaces can be computed approximately without any explicit inversion or system solve. Instead, the necessary computations are carried out implicitly, using the information contained in an enlarged standard Krylov subspace. For Krylov subspaces, the matrix capturing the recurrence coefficients of the orthogonal basis can be retrieved by projecting the original matrix onto a particular orthogonal basis of the associated Krylov subspace. It is also well known that for (extended) Krylov subspaces of full dimension this matrix can be obtained directly via similarity transformations on the original matrix. In this talk the iterative and the direct similarity approaches are combined. First, an orthogonal basis of a large standard Krylov subspace is constructed iteratively. Second, cleverly chosen similarity transformations are executed to alter the matrix of recurrences, thereby also changing the orthogonal basis vectors spanning the large Krylov subspace. Finally, only a few of the new basis vectors are retained, resulting in an orthogonal basis approximately spanning a chosen extended Krylov subspace K_{l,p}(A,v) = span{A^{-p+1}v, ..., A^{-1}v, v, Av, A^{2}v, ..., A^{l-1}v}. Numerical experiments will reveal the advantages and disadvantages of this approach.
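    As a hedged sketch, the three steps can be written in matrix form; the symbols V_m, H_m, Q_m and the truncation index k below are introduced here only for illustration and do not come from the abstract.

```latex
\begin{align*}
&\text{Step 1 (large standard Krylov subspace):} &
  A V_m &= V_m H_m + h_{m+1,m}\, v_{m+1} e_m^{*},\\
&\text{Step 2 (unitary similarity built from rotations):} &
  A\,(V_m Q_m) &= (V_m Q_m)\,(Q_m^{*} H_m Q_m) + h_{m+1,m}\, v_{m+1}\,(e_m^{*} Q_m),\\
&\text{Step 3 (keep } k = l + p - 1 \text{ vectors):} &
  \operatorname{span}\bigl((V_m Q_m)e_1, \dots, (V_m Q_m)e_k\bigr) &\approx \mathcal{K}_{l,p}(A,v).
\end{align*}
```

    The approximation in Step 3 can be expected to be good when the trailing residual term h_{m+1,m} v_{m+1} (e_m^{*} Q_m) is negligible on the k retained columns.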

    On rotations and approximate rational Krylov subspaces

    No full text
    Rational Krylov subspaces have proven useful in many applications, such as the approximation of matrix functions or the solution of matrix equations. These rational subspaces are built not only from matrix-vector products but also from inverses of a matrix applied to a vector. It is these inverses that can yield significantly faster convergence, but they can also create quite some problems. It will be shown that, under some assumptions, extended and rational Krylov subspaces can be retrieved without any explicit inversion or system solve. Instead, the necessary computations are carried out implicitly, using the information contained in an enlarged standard Krylov subspace. In this way the problems can be avoided, but there is a price to be paid. In this lecture the audience will be introduced to the generic building blocks underlying rational Krylov subspaces: rotations, twisted QR factorizations, turnovers, fusions, and so on. Building on these blocks, we will shrink a large Krylov subspace by unitary similarity transformations to a much smaller rational Krylov subspace without (if everything goes well) essential loss of data. This smaller space can then be used to solve the original application without any problem. Numerical experiments support our claim that this approximation can be very good and thus can culminate in dimensionality reduction, which in turn can lead to time savings when approximating, e.g., matrix functions or solving ODEs.
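    Two of these building blocks, rotations and fusions, are simple enough to demonstrate directly. The snippet below is a minimal, hedged illustration (the helpers givens and rotation are hypothetical names, not the implementation behind the talk): a rotation zeroes a chosen entry, fusing two rotations acting on the same rows gives a single rotation again, and a similarity with rotations leaves the eigenvalues of a Hessenberg matrix unchanged.

```python
import numpy as np

def givens(a, b):
    """Cosine/sine pair so that [[c, s], [-s, c]] @ [a, b] = [r, 0] (real case)."""
    r = np.hypot(a, b)
    return (1.0, 0.0) if r == 0.0 else (a / r, b / r)

def rotation(c, s):
    """2x2 rotation built from a cosine/sine pair."""
    return np.array([[c, s], [-s, c]])

# Rotation: annihilate the second entry of a vector.
x = np.array([3.0, 4.0])
print(rotation(*givens(x[0], x[1])) @ x)                  # ~ [5, 0]

# Fusion: two rotations acting on the same two rows collapse into one rotation.
F = rotation(*givens(1.0, 2.0)) @ rotation(*givens(5.0, -1.0))
print(np.allclose(F @ F.T, np.eye(2)), np.isclose(np.linalg.det(F), 1.0))

# Unitary similarity with a rotation: the eigenvalues of a Hessenberg matrix are
# preserved, even though its entries (and the associated basis vectors) change.
H = np.triu(np.arange(1.0, 17.0).reshape(4, 4), -1)       # small upper Hessenberg matrix
Q = np.eye(4)
Q[1:3, 1:3] = rotation(*givens(H[1, 1], H[2, 1]))         # rotation acting on rows/columns 2 and 3
H_sim = Q @ H @ Q.T
print(np.allclose(np.sort(np.linalg.eigvals(H)), np.sort(np.linalg.eigvals(H_sim))))
```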
