46 research outputs found

    Preconditioning Lanczos approximations to the matrix exponential


    Accelerated filtering on graphs using Lanczos method

    Signal processing on graphs has developed into a very active field of research over the last decade. In particular, the number of applications using frames constructed from graphs, such as wavelets on graphs, has increased substantially. To attain scalability for large graphs, fast graph-signal filtering techniques are needed. In this contribution, we propose an accelerated algorithm based on the Lanczos method that adapts to the Laplacian spectrum without explicitly computing it. The result is an accurate, robust, scalable and efficient algorithm. Compared to existing methods based on Chebyshev polynomials, our solution achieves higher accuracy without significantly increasing the overall complexity. Furthermore, it is particularly well suited for graphs with large spectral gaps.
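    A minimal sketch of the kind of Lanczos-based filtering this abstract describes: build a small tridiagonal projection T of the graph Laplacian L and evaluate the filter f on T instead of on L. This is an illustration, not the authors' code; the graph, filter, and subspace dimension below are assumptions.

    ```python
    import numpy as np

    def lanczos(A, b, m):
        """m-step Lanczos process for symmetric A: returns orthonormal basis V
        and the tridiagonal projection T = V.T @ A @ V."""
        n = b.shape[0]
        V = np.zeros((n, m))
        alpha = np.zeros(m)
        beta = np.zeros(m)           # beta[j] couples v_j and v_{j+1}
        V[:, 0] = b / np.linalg.norm(b)
        for j in range(m):
            w = A @ V[:, j]
            alpha[j] = V[:, j] @ w
            w -= alpha[j] * V[:, j]
            if j > 0:
                w -= beta[j - 1] * V[:, j - 1]
            if j < m - 1:
                beta[j] = np.linalg.norm(w)   # assumes no breakdown (beta > 0)
                V[:, j + 1] = w / beta[j]
        T = np.diag(alpha) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
        return V, T

    def lanczos_filter(L, b, f, m=20):
        """Approximate the filtered signal f(L) b from an m-dimensional
        Krylov subspace: f(L) b ~ ||b|| * V @ f(T) @ e1."""
        V, T = lanczos(L, b, m)
        evals, U = np.linalg.eigh(T)          # spectrum of the small T only
        e1 = np.zeros(m); e1[0] = 1.0
        return np.linalg.norm(b) * (V @ (U @ (f(evals) * (U.T @ e1))))
    ```

    The point of the construction is that f is never applied to L itself; only matrix-vector products with L and the eigendecomposition of the small m-by-m matrix T are needed.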

    Explicit formulas for the exponentials of some special matrices

    The matrix exponential plays a very important role in many fields of mathematics and physics, and it can be computed by many methods. This work is devoted to the study of explicit formulas for computing e^A, where A is a special square matrix. The main results are based on the convergent power series of e^A. Examples and applications are given.
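    The convergent power series mentioned here is e^A = sum over k of A^k / k!. A truncated version is easy to sketch; note this is only an illustration of the series, adequate when ||A|| is modest (production codes use scaling-and-squaring instead), and the truncation length is an assumption.

    ```python
    import numpy as np

    def expm_taylor(A, terms=30):
        """Approximate e^A by the truncated power series
        sum_{k=0}^{terms} A^k / k!, accumulating each term incrementally."""
        n = A.shape[0]
        S = np.eye(n)          # k = 0 term
        term = np.eye(n)
        for k in range(1, terms + 1):
            term = term @ A / k    # A^k / k! from A^{k-1} / (k-1)!
            S += term
        return S
    ```

    For a nilpotent A the series terminates exactly; e.g. for A = [[0, 1], [0, 0]] it gives e^A = [[1, 1], [0, 1]] after two terms.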

    MATEX: A Distributed Framework for Transient Simulation of Power Distribution Networks

    We propose MATEX, a distributed framework for transient simulation of power distribution networks (PDNs). MATEX uses a matrix exponential kernel with Krylov subspace approximations to solve the differential equations of linear circuits. First, the whole simulation task is divided into subtasks based on decompositions of the current sources, in order to reduce computational overhead. These subtasks are then distributed to different computing nodes and processed in parallel. Within each node, after a single matrix factorization at the beginning of the simulation, the adaptive time-stepping solver runs without extra matrix re-factorizations. MATEX overcomes the stiffness limitation of previous matrix exponential-based circuit simulators by using a rational Krylov subspace method, which allows larger step sizes with smaller Krylov subspace bases and greatly accelerates the whole computation. MATEX outperforms both traditional fixed and adaptive time-stepping methods, e.g., achieving around a 13X speedup over a trapezoidal framework with fixed time steps on the IBM power grid benchmarks. Comment: ACM/IEEE DAC 2014. arXiv admin note: substantial text overlap with arXiv:1505.0669
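    The rational (shift-and-invert) Krylov idea credited here with overcoming stiffness can be sketched generically: run Arnoldi on B = (I + gamma*A)^{-1} so that one LU factorization is reused across all steps, then recover a projection of A from the projection of B. This is an illustrative sketch, not the MATEX implementation; the shift gamma, step size, and subspace dimension are assumptions.

    ```python
    import numpy as np
    from scipy.linalg import expm, lu_factor, lu_solve

    def sai_krylov_expm(A, v, h, gamma, m):
        """Shift-and-invert Krylov approximation of exp(-h*A) v.
        Arnoldi is applied to B = (I + gamma*A)^{-1}; the projected A is
        recovered as H_A = (inv(H_B) - I) / gamma."""
        n = v.shape[0]
        lu = lu_factor(np.eye(n) + gamma * A)   # factor once, reuse every step
        beta = np.linalg.norm(v)
        V = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        V[:, 0] = v / beta
        for j in range(m):
            w = lu_solve(lu, V[:, j])           # apply B via the stored LU
            for i in range(j + 1):
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-12:             # invariant subspace found
                break
            V[:, j + 1] = w / H[j + 1, j]
        Hm = H[:m, :m]
        HA = (np.linalg.inv(Hm) - np.eye(m)) / gamma   # projection of A
        e1 = np.zeros(m); e1[0] = 1.0
        return beta * V[:, :m] @ (expm(-h * HA) @ e1)
    ```

    Because the Krylov space is built with (I + gamma*A)^{-1} rather than A, stiff (large-eigenvalue) components are damped in the basis, which is what permits large step sizes with small m.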

    A block Krylov subspace time-exact solution method for linear ODE systems

    We propose a time-exact Krylov-subspace-based method for solving linear ODE (ordinary differential equation) systems of the form y' = -Ay + g(t) and y'' = -Ay + g(t), where y(t) is the unknown function. The method consists of two stages. The first stage is an accurate piecewise polynomial approximation of the source term g(t), constructed with the help of the truncated SVD (singular value decomposition). The second stage is a special residual-based block Krylov subspace method. The accuracy of the method is restricted only by the accuracy of the piecewise polynomial approximation and by the error of the block Krylov process. Since both errors can, in principle, be made arbitrarily small, this yields, at some cost, a time-exact method. Numerical experiments are presented to demonstrate the efficiency of the new method, as compared to an exponential time integrator with Krylov subspace matrix function evaluations.
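    The first stage relies on compressing sampled source terms with a truncated SVD. A minimal sketch of that compression step (the tolerance and the sampling layout are assumptions, not the paper's exact construction): collect samples g(t_1), ..., g(t_s) as columns of a matrix G and keep only the dominant singular directions.

    ```python
    import numpy as np

    def low_rank_source(G, tol=1e-8):
        """Truncated-SVD compression of sampled sources G = [g(t_1),...,g(t_s)].
        Returns factors U (n x r) and C (r x s) with G ~ U @ C, where the rank r
        keeps exactly the singular values above tol * s_max."""
        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        r = int(np.sum(s > tol * s[0]))
        return U[:, :r], s[:r, None] * Vt[:r]
    ```

    The block Krylov stage then only needs to propagate the r columns of U rather than every sample, which is where the savings come from when g(t) has low effective rank over the time window.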

    Regularization of nonlinear ill-posed problems by exponential integrators

    The numerical solution of ill-posed problems requires suitable regularization techniques. One possible option is to consider time integration methods to solve the Showalter differential equation numerically. The stopping time of the numerical integrator corresponds to the regularization parameter. A number of well-known regularization methods such as the Landweber iteration or the Levenberg-Marquardt method can be interpreted as variants of the Euler method for solving the Showalter differential equation. Motivated by an analysis of the regularization properties of the exact solution of this equation presented by [U. Tautenhahn, Inverse Problems 10 (1994) 1405–1418], we consider a variant of the exponential Euler method for solving the Showalter ordinary differential equation. We discuss a suitable discrepancy principle for selecting the step sizes within the numerical method and we review the convergence properties of [U. Tautenhahn, Inverse Problems 10 (1994) 1405–1418], and of our discrete version [M. Hochbruck et al., Technical Report (2008)]. Finally, we present numerical experiments which show that this method can be efficiently implemented by using Krylov subspace methods to approximate the product of a matrix function with a vector.
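    For a linear model problem y' = -Ay + b, one exponential Euler step takes the form y(h) = y0 + h * phi1(-hA)(b - A y0) with phi1(z) = (e^z - 1)/z; on this constant-coefficient problem the step is exact. A minimal illustration (the concrete A, b, y0, and step size below are invented for the example, and the dense phi1 evaluation assumes A is invertible and small):

    ```python
    import numpy as np
    from scipy.linalg import expm, solve

    def exp_euler_step(A, b, y0, h):
        """One exponential Euler step for y' = -A y + b:
        y(h) = y0 + h * phi1(-h A) @ (b - A y0), phi1(z) = (e^z - 1)/z."""
        n = A.shape[0]
        M = -h * A
        phi1 = solve(M, expm(M) - np.eye(n))   # phi1(M) = M^{-1} (e^M - I)
        return y0 + h * phi1 @ (b - A @ y0)
    ```

    In the regularization setting described above, the step size h plays the role sized by the discrepancy principle, and for large problems the phi1 action would be approximated by a Krylov subspace method rather than formed densely.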

    Residual, restarting and Richardson iteration for the matrix exponential

    A well-known problem in computing some matrix functions iteratively is the lack of a clear, commonly accepted residual notion. An important matrix function for which this is the case is the matrix exponential. Assume the matrix exponential of a given matrix times a given vector has to be computed. We interpret the sought-after vector as the value of a vector function satisfying a linear system of ordinary differential equations (ODE) whose coefficient matrix is the given matrix. The residual is then defined with respect to the initial-value problem for this ODE system. The residual introduced in this way can be seen as a backward error. We show how the residual can be computed efficiently within several iterative methods for the matrix exponential. This completely resolves the question of reliable stopping criteria for these methods. Furthermore, we show that the residual concept can be used to construct new residual-based iterative methods. In particular, a variant of the Richardson method for the new residual appears to provide an efficient way to restart Krylov subspace methods for evaluating the matrix exponential.
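    This residual notion can be made concrete for the Arnoldi approximation of exp(-tA)v. Using the Arnoldi relation A V_m = V_m H_m + h_{m+1,m} v_{m+1} e_m^T, the residual of the initial-value problem y' = -Ay, y(0) = v collapses to a scalar multiple of the last basis vector, so its norm is available at the cost of one small matrix exponential. A sketch (illustrative, not the paper's implementation; it assumes m is small enough that no Lanczos/Arnoldi breakdown occurs):

    ```python
    import numpy as np
    from scipy.linalg import expm

    def krylov_expm_with_residual(A, v, t, m):
        """Arnoldi approximation y_m ~ exp(-t A) v and its exact ODE residual
        norm ||r(t)|| = beta * h_{m+1,m} * |[exp(-t H_m) e_1]_m|."""
        n = v.shape[0]
        beta = np.linalg.norm(v)
        V = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        V[:, 0] = v / beta
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):          # modified Gram-Schmidt
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            V[:, j + 1] = w / H[j + 1, j]
        u = expm(-t * H[:m, :m])[:, 0]      # exp(-t H_m) e_1
        y = beta * V[:, :m] @ u
        res_norm = beta * H[m, m - 1] * abs(u[m - 1])
        return y, res_norm
    ```

    The residual norm is cheap to monitor as m grows, which is exactly what makes it usable as a stopping criterion for Krylov evaluation of the matrix exponential.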