    From low-rank approximation to an efficient rational Krylov subspace method for the Lyapunov equation

    We propose a new method for the approximate solution of the Lyapunov equation with a rank-1 right-hand side, which is based on extended rational Krylov subspace approximation with adaptively computed shifts. The shift selection is obtained from the connection between the Lyapunov equation, the solution of systems of linear ODEs, and the alternating least squares method for low-rank approximation. Numerical experiments confirm the effectiveness of our approach.
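
    For orientation, here is a minimal sketch of the projection idea such solvers build on: approximate the solution of $AX + XA^T + bb^T = 0$ by projecting onto a plain extended Krylov subspace generated by $b$, $A$, and $A^{-1}$, and solve the small projected Lyapunov equation densely. The fixed number of steps, the helper name `lyap_extended_krylov`, and the random test matrix are illustrative assumptions; the paper's adaptive shift selection is not reproduced here.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, solve_continuous_lyapunov

def lyap_extended_krylov(A, b, m=8):
    """Approximate X solving A X + X A^T + b b^T = 0 by projection onto the
    extended Krylov subspace span{b, A b, A^{-1} b, A^2 b, A^{-2} b, ...}."""
    lu = lu_factor(A)
    vectors = [b / np.linalg.norm(b)]
    v_pos = v_neg = vectors[0]
    for _ in range(m):
        v_pos = A @ v_pos
        v_pos = v_pos / np.linalg.norm(v_pos)      # next forward power of A
        v_neg = lu_solve(lu, v_neg)
        v_neg = v_neg / np.linalg.norm(v_neg)      # next power of A^{-1}
        vectors += [v_pos, v_neg]
    V, _ = np.linalg.qr(np.column_stack(vectors))  # orthonormal basis of the subspace
    T = V.T @ A @ V                                # projected coefficient matrix
    c = V.T @ b
    Y = solve_continuous_lyapunov(T, -np.outer(c, c))  # small dense Lyapunov solve
    return V, Y                                    # X ≈ V Y V^T, kept in low-rank form

# usage on a random stable symmetric test matrix with a rank-1 right-hand side
rng = np.random.default_rng(0)
n = 200
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = -(Q @ np.diag(rng.uniform(1.0, 100.0, n)) @ Q.T)
b = rng.standard_normal(n)
V, Y = lyap_extended_krylov(A, b, m=8)
X = V @ Y @ V.T
res = np.linalg.norm(A @ X + X @ A.T + np.outer(b, b)) / np.linalg.norm(np.outer(b, b))
print(f"relative residual: {res:.2e}")
```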

    A black-box rational Arnoldi variant for Cauchy-Stieltjes matrix functions

    Rational Arnoldi is a powerful method for approximating functions of large sparse matrices times a vector. The selection of asymptotically optimal parameters for this method is crucial for its fast convergence. We present and investigate a novel strategy for automated parameter selection when the function to be approximated is of Cauchy-Stieltjes (or Markov) type, such as the matrix square root or the logarithm. The performance of this approach is demonstrated by numerical examples involving symmetric and nonsymmetric matrices. These examples suggest that our black-box method performs at least as well as, and typically better than, the standard rational Arnoldi method with parameters manually optimized for a given matrix.
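
    As background, the sketch below shows the plain rational Krylov projection $f(A)b \approx V f(V^T A V) V^T b$ for the matrix square root of a symmetric positive definite $A$, using hand-picked, logarithmically spaced negative poles. The helper name `rational_krylov_sqrt`, the pole grid, and the test matrix are assumptions for illustration; the automated black-box parameter selection contributed by the paper is not reproduced.

```python
import numpy as np
from scipy.linalg import solve, sqrtm

def rational_krylov_sqrt(A, b, poles):
    """Approximate sqrt(A) @ b via f(A) b ≈ V f(V^T A V) V^T b,
    with V spanning {b, (A - p_1 I)^{-1} b, (A - p_2 I)^{-1} b, ...}."""
    n = A.shape[0]
    vectors, v = [b], b
    for p in poles:
        v = solve(A - p * np.eye(n), v)      # one shifted linear solve per pole
        vectors.append(v)
    V, _ = np.linalg.qr(np.column_stack(vectors))
    H = V.T @ A @ V                          # small projected matrix
    return V @ (sqrtm(H) @ (V.T @ b))

# usage on a symmetric positive definite test matrix with spectrum in [1, 1e4]
rng = np.random.default_rng(1)
n = 300
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(rng.uniform(1.0, 1e4, n)) @ Q.T
b = rng.standard_normal(n)
poles = -np.logspace(0, 4, 6)                # crude logarithmic spacing of negative poles
approx = rational_krylov_sqrt(A, b, poles)
exact = sqrtm(A) @ b
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```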

    Regularized methods via cubic subspace minimization for nonconvex optimization

    The main computational cost per iteration of adaptive cubic regularization methods for solving large-scale nonconvex problems is the computation of the step $s_k$, which requires an approximate minimizer of the cubic model. We propose a new approach in which this minimizer is sought in a low-dimensional subspace that, in contrast to classical approaches, is reused for a number of iterations. A regularized Newton step to correct $s_k$ is also incorporated whenever needed. We show that our method increases efficiency while preserving the worst-case complexity of classical cubic regularization methods. We also explore the use of rational Krylov subspaces for the subspace minimization, to overcome some of the issues encountered when using polynomial Krylov subspaces. We provide several experimental results illustrating the gains of the new approach compared to classical implementations.
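
    To make the subspace-minimization step concrete, the sketch below minimizes the cubic model $m(s) = g^\top s + \tfrac{1}{2} s^\top H s + \tfrac{\sigma}{3}\|s\|^3$ over a fixed orthonormal subspace: with $s = Vy$ and $V^\top V = I$ the model reduces to a small problem in $y$, which is handed to a generic optimizer. The helper `cubic_step_in_subspace` and the Krylov-type test subspace are illustrative assumptions; the reuse of the subspace across iterations and the regularized Newton correction described above are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def cubic_step_in_subspace(g, H, sigma, V):
    """Approximately minimize m(s) = g^T s + 0.5 s^T H s + (sigma/3)||s||^3
    over s = V y, where V has orthonormal columns (so ||V y|| = ||y||)."""
    g_r, H_r = V.T @ g, V.T @ H @ V              # reduced gradient and Hessian

    def model(y):
        return g_r @ y + 0.5 * y @ H_r @ y + sigma / 3.0 * np.linalg.norm(y) ** 3

    def grad(y):
        return g_r + H_r @ y + sigma * np.linalg.norm(y) * y

    y = minimize(model, np.zeros(V.shape[1]), jac=grad, method="L-BFGS-B").x
    return V @ y                                 # lift the reduced step back

# usage: an indefinite (nonconvex) test Hessian and a Krylov-type subspace
rng = np.random.default_rng(2)
n, k = 500, 10
H = rng.standard_normal((n, n))
H = 0.5 * (H + H.T)                              # symmetric but indefinite
g = rng.standard_normal(n)
vs, v = [], g / np.linalg.norm(g)
for _ in range(k):                               # orthonormal basis of span{g, Hg, ...}
    vs.append(v)
    v = H @ v
    Vj = np.column_stack(vs)
    v = v - Vj @ (Vj.T @ v)                      # orthogonalize against earlier vectors
    v = v / np.linalg.norm(v)
V = np.column_stack(vs)
s = cubic_step_in_subspace(g, H, sigma=1.0, V=V)
print("cubic model value at the step:", g @ s + 0.5 * s @ H @ s + np.linalg.norm(s) ** 3 / 3)
```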

    Rational Krylov for Stieltjes matrix functions: convergence and pole selection

    Evaluating the action of a matrix function on a vector, that is, $x = f(\mathcal M)v$, is a ubiquitous task in applications. When $\mathcal M$ is large, one usually relies on Krylov projection methods. In this paper, we provide effective choices for the poles of the rational Krylov method for approximating $x$ when $f(z)$ is either Cauchy-Stieltjes or Laplace-Stieltjes (or, equivalently, completely monotonic) and $\mathcal M$ is a positive definite matrix. Relying on the same tools used to analyze the generic situation, we then focus on the case $\mathcal M = I \otimes A - B^T \otimes I$, with $v$ obtained by vectorizing a low-rank matrix; this finds application, for instance, in solving fractional diffusion equations on two-dimensional tensor grids. We show how to leverage tensorized Krylov subspaces to exploit the Kronecker structure, and we introduce an error analysis for the numerical approximation of $x$. Pole selection strategies with explicit convergence bounds are given for this case as well.
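
    A minimal sketch of the tensorized idea under the stated Kronecker structure: build one Krylov basis from $A$ and one factor of the low-rank vector, another from $B^T$ and the other factor, evaluate $f$ on the small projected Kronecker matrix, and lift back. Plain polynomial Krylov spaces and the Cauchy-Stieltjes function $f(z) = z^{-1/2}$ are used only for brevity; the rational spaces and pole selection analyzed in the paper are not reproduced, and the helper names are illustrative.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def krylov_basis(A, b, k):
    """Arnoldi-style orthonormal basis of the polynomial Krylov space K_k(A, b)."""
    Q = np.zeros((b.size, k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(1, k):
        w = A @ Q[:, j - 1]
        w -= Q[:, :j] @ (Q[:, :j].T @ w)         # orthogonalize against earlier vectors
        Q[:, j] = w / np.linalg.norm(w)
    return Q

def f_of_kron_times_lowrank(A, B, c1, c2, k, f):
    """Approximate f(I ⊗ A - B^T ⊗ I) vec(c1 c2^T) using tensorized Krylov bases."""
    U = krylov_basis(A, c1, k)                   # basis attached to the A factor
    W = krylov_basis(B.T, c2, k)                 # basis attached to the B^T factor
    As, Bs = U.T @ A @ U, W.T @ B @ W            # projected coefficients
    Ms = np.kron(np.eye(k), As) - np.kron(Bs.T, np.eye(k))   # small projected Kronecker matrix
    y = f(Ms) @ np.kron(W.T @ c2, U.T @ c1)      # note vec(c1 c2^T) = c2 ⊗ c1
    return np.kron(W, U) @ y                     # lift back through W ⊗ U

# check against a dense evaluation of f(M) v with f(z) = z^{-1/2} on a small example
rng = np.random.default_rng(3)
n = 40
QA, _ = np.linalg.qr(rng.standard_normal((n, n)))
QB, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = QA @ np.diag(rng.uniform(1.0, 50.0, n)) @ QA.T       # A positive definite
B = -(QB @ np.diag(rng.uniform(1.0, 50.0, n)) @ QB.T)    # B negative definite, so M is positive definite
c1, c2 = rng.standard_normal(n), rng.standard_normal(n)
f = lambda M: fractional_matrix_power(M, -0.5)
M = np.kron(np.eye(n), A) - np.kron(B.T, np.eye(n))
exact = f(M) @ np.kron(c2, c1)
approx = f_of_kron_times_lowrank(A, B, c1, c2, k=20, f=f)
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```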

    Matrix Equation Techniques for Certain Evolutionary Partial Differential Equations

    We show that the discrete operator stemming from the time-space discretization of evolutionary partial differential equations can be represented in terms of a single Sylvester matrix equation. A novel solution strategy that combines projection techniques with the full exploitation of the entry-wise structure of the involved coefficient matrices is proposed. The resulting scheme is able to efficiently solve problems with a tremendous number of degrees of freedom while maintaining a low storage demand, as illustrated in several numerical examples.
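
    As a toy illustration of how a time-space discretization collapses into a single Sylvester equation, the sketch below writes implicit Euler for the 1D heat equation $u_t = u_{xx}$ in all-at-once form $(-K)U + UD^T = F$ and hands it to a dense Sylvester solver. The grid sizes and the dense `solve_sylvester` call are illustrative assumptions; the structure-exploiting projection scheme proposed in the paper, which targets far larger problems, is not reproduced.

```python
import numpy as np
from scipy.linalg import solve_sylvester

nx, nt = 100, 80                          # interior space points, time steps
h, dt = 1.0 / (nx + 1), 1.0 / nt
x = np.linspace(h, 1.0 - h, nx)

# second-difference (discrete Laplacian) matrix K and backward-difference matrix D
K = (np.diag(-2.0 * np.ones(nx)) + np.diag(np.ones(nx - 1), 1)
     + np.diag(np.ones(nx - 1), -1)) / h**2
D = (np.eye(nt) - np.eye(nt, k=-1)) / dt

u0 = np.sin(np.pi * x)                    # initial condition u(x, 0) = sin(pi x)
F = np.zeros((nx, nt))
F[:, 0] = u0 / dt                         # initial condition enters only the first column

# all time steps at once: column j of U approximates u(x, (j + 1) * dt)
U = solve_sylvester(-K, D.T, F)           # solves (-K) U + U D^T = F

# implicit Euler is first-order in dt, so this comparison is only qualitative
exact_final = np.exp(-np.pi**2) * np.sin(np.pi * x)
print("max error at t = 1:", np.max(np.abs(U[:, -1] - exact_final)))
```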