Accelerating the LSTRS Algorithm
In a recent paper [Rojas, Santos, Sorensen: ACM ToMS 34 (2008), Article 11] an efficient method for solving the Large-Scale Trust-Region Subproblem was suggested, based on recasting it in terms of a parameter-dependent eigenvalue problem and adjusting the parameter iteratively. The essential work at each iteration is the solution of an eigenvalue problem for the smallest eigenvalue of the Hessian matrix (or the two smallest eigenvalues in the potential hard case) and the associated eigenvector(s). Replacing the implicitly restarted Lanczos method of the original paper with the Nonlinear Arnoldi method makes it possible to recycle most of the work from previous iterations, which can substantially accelerate LSTRS.
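The bordered-matrix idea behind LSTRS can be illustrated with a minimal dense sketch. This is not the LSTRS implementation: the method uses large sparse eigensolvers and a fast rational interpolation scheme for the parameter, whereas here a dense eigendecomposition and plain bisection are assumed, and the bisection bracket is chosen ad hoc for the toy problem.

```python
import numpy as np

def smallest_eigpair_bordered(H, g, alpha):
    """Smallest eigenpair of the bordered matrix B(alpha) = [[alpha, g^T], [g, H]]."""
    n = H.shape[0]
    B = np.zeros((n + 1, n + 1))
    B[0, 0] = alpha
    B[0, 1:] = g
    B[1:, 0] = g
    B[1:, 1:] = H
    w, V = np.linalg.eigh(B)
    return w[0], V[:, 0]

def candidate(H, g, alpha):
    """Recover the trust-region candidate x = u / nu from the eigenvector (nu, u)."""
    lam, v = smallest_eigpair_bordered(H, g, alpha)
    return lam, v[1:] / v[0]

# Toy instance of  min 0.5 x^T H x + g^T x  subject to  ||x|| <= Delta.
rng = np.random.default_rng(0)
n = 6
H = rng.standard_normal((n, n)); H = (H + H.T) / 2
g = rng.standard_normal(n)
Delta = 1.0

# Adjust alpha (here by plain bisection; LSTRS uses a much faster update)
# until the recovered x has norm Delta.  Bracket [-10, 10] is an assumption
# that happens to hold for this toy problem.
lo, hi = -10.0, 10.0
for _ in range(80):
    alpha = 0.5 * (lo + hi)
    lam, x = candidate(H, g, alpha)
    if np.linalg.norm(x) > Delta:
        hi = alpha   # ||x(alpha)|| grows with alpha, so shrink from above
    else:
        lo = alpha
```

By construction the recovered pair satisfies the stationarity condition (H - lam I) x = -g exactly, so only the norm constraint has to be driven to Delta by the parameter update.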
Residual, restarting and Richardson iteration for the matrix exponential
A well-known problem in computing some matrix functions iteratively is the lack of a clear, commonly accepted residual notion. An important matrix function for which this is the case is the matrix exponential. Assume that the matrix exponential of a given matrix times a given vector has to be computed. We interpret the sought-after vector as the value of a vector function satisfying the linear system of ordinary differential equations (ODEs) whose coefficients form the given matrix. The residual is then defined with respect to the initial-value problem for this ODE system. The residual introduced in this way can be seen as a backward error. We show how the residual can be computed efficiently within several iterative methods for the matrix exponential. This completely resolves the question of reliable stopping criteria for these methods. Furthermore, we show that the residual concept can be used to construct new residual-based iterative methods. In particular, a variant of the Richardson method for the new residual appears to provide an efficient way to restart Krylov subspace methods for evaluating the matrix exponential.
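The ODE-based residual notion can be made concrete on a simple approximation that is not from the paper: a truncated Taylor series for y(t) = exp(tA) v. For y_m(t) = sum_{k=0}^m t^k A^k v / k!, the residual of y' = A y is r_m(t) = A y_m(t) - y_m'(t), which telescopes to the single term t^m A^{m+1} v / m! and is therefore cheap to check.

```python
import math
import numpy as np

def taylor_approx_and_residual(A, v, t, m):
    """Truncated Taylor approximation of exp(tA) v and its ODE residual.

    y_m(t) = sum_{k=0}^{m} t^k A^k v / k!     (the approximation)
    r_m(t) = A y_m(t) - y_m'(t)               (residual of y' = A y, y(0) = v)
    """
    term = v.astype(float).copy()   # term_k = t^k A^k v / k!
    y = term.copy()
    dy = np.zeros_like(y)           # accumulates y_m'(t) = sum_{k<m} A @ term_k
    for k in range(1, m + 1):
        dy = dy + A @ term
        term = (t / k) * (A @ term)
        y = y + term
    return y, A @ y - dy

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) / 2
v = rng.standard_normal(5)
t, m = 0.7, 12
y, r = taylor_approx_and_residual(A, v, t, m)

# The residual collapses to a single closed-form term, so it can serve as a
# reliable stopping criterion without knowing the exact solution.
closed = t**m * np.linalg.matrix_power(A, m + 1) @ v / math.factorial(m)
```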
Localized Manifold Harmonics for Spectral Shape Analysis
The use of Laplacian eigenfunctions is ubiquitous in a wide range of computer graphics and geometry processing applications. In particular, Laplacian eigenbases allow generalizing the classical Fourier analysis to manifolds. A key drawback of such bases is their inherently global nature, as the Laplacian eigenfunctions carry the geometric and topological structure of the entire manifold. In this paper, we introduce a new framework for local spectral shape analysis. We show how to efficiently construct localized orthogonal bases by solving an optimization problem that in turn can be posed as the eigendecomposition of a new operator obtained by a modification of the standard Laplacian. We study the theoretical and computational aspects of the proposed framework and showcase our new construction on the classical problems of shape approximation and correspondence. We obtain significant improvement compared to classical Laplacian eigenbases as well as other alternatives for constructing localized bases.
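The flavor of such a construction can be sketched in a heavily simplified setting: augment a graph Laplacian (standing in for the mesh Laplacian) with a step potential that penalizes energy outside a region of interest, and take the leading eigenvectors of the modified operator. This is only an illustration of the "modified operator" idea under assumed ingredients, not the paper's actual formulation, which works on manifolds and includes further terms.

```python
import numpy as np

n = 40
# Path-graph Laplacian as a simple stand-in for a mesh Laplacian.
L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0

inside = np.zeros(n)
inside[:15] = 1.0          # indicator of the region of interest (assumed)
mu = 1e3                   # penalty weight (assumed; must be large)

# Modified operator: the potential penalizes support outside the region, so
# the leading eigenvectors form an orthogonal basis that is numerically
# localized inside it, while remaining smooth (low Dirichlet energy) there.
M = L + mu * np.diag(1.0 - inside)
w, V = np.linalg.eigh(M)

# Fraction of the first basis vector's energy inside the region.
mass_inside = np.sum(V[:15, 0] ** 2)
```

Because the modified operator is still symmetric, a standard eigendecomposition delivers the localized basis and its orthogonality for free.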
The Lyapunov matrix equation. Matrix analysis from a computational perspective
Decay properties of the solution to the Lyapunov matrix equation are investigated. Their exploitation in the understanding of the equation's matrix properties, and in the development of new numerical solution strategies when the right-hand side is not low rank but possibly sparse, is also briefly discussed.
Comment: This work is a contribution to the seminar series "Topics in Mathematics" of the PhD Program of the Mathematics Department, Università di Bologna, Italy.
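The decay phenomenon is easy to observe numerically: for a symmetric negative definite coefficient matrix and a low-rank right-hand side, the singular values of the Lyapunov solution decay rapidly, which is what makes low-rank solvers effective. A small sketch (the particular choice of matrices is illustrative, not from the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

n = 30
# A: negated 1D discrete Laplacian (symmetric negative definite).
A = -(2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))

# Rank-one right-hand side C = -b b^T.
b = np.zeros(n)
b[0] = 1.0
C = -np.outer(b, b)

# Solve A X + X A^T = C and inspect the singular value decay of X.
X = solve_continuous_lyapunov(A, C)
s = np.linalg.svd(X, compute_uv=False)
decay = s / s[0]   # normalized singular values; drop off very quickly
```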
Computation of the von Neumann entropy of large matrices via trace estimators and rational Krylov methods
We consider the problem of approximating the von Neumann entropy of a large, sparse, symmetric positive semidefinite matrix A, defined as tr(f(A)) where f(z) = -z log z. After establishing some useful properties of this matrix function, we consider the use of both polynomial and rational Krylov subspace algorithms within two types of approximation methods, namely randomized trace estimators and probing techniques based on graph colorings. We develop error bounds and heuristics which are employed in the implementation of the algorithms. Numerical experiments on density matrices of different types of networks illustrate the performance of the methods.
Comment: 32 pages, 10 figures