
    Convergence of scaled iterates by the Jacobi method

    A quadratic convergence bound for scaled iterates of the serial Jacobi method for Hermitian positive definite matrices is derived. By scaled iterates we mean the matrices $[\operatorname{diag}(H^{(k)})]^{-1/2} H^{(k)} [\operatorname{diag}(H^{(k)})]^{-1/2}$, where $H^{(k)}$, $k \geq 0$, are the matrices generated by the method. The bound is obtained in the general case of multiple eigenvalues. It depends on the minimum relative separation of the eigenvalues.
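    As an illustration of the quantity being tracked, the sketch below (my own minimal NumPy code, restricted to the real symmetric positive definite case and not taken from the paper; the function names jacobi_sweep, scaled_iterate, and off_norm are mine) runs cyclic Jacobi sweeps and reports the off-diagonal norm of the scaled iterate after each sweep.

    import numpy as np

    def jacobi_sweep(H):
        """One cyclic sweep of classical two-sided Jacobi rotations (in place)."""
        n = H.shape[0]
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(H[p, q]) < 1e-15:
                    continue
                # rotation angle that annihilates H[p, q], with |theta| <= pi/4
                if abs(H[q, q] - H[p, p]) < 1e-15:
                    theta = 0.25 * np.pi * np.sign(H[p, q])
                else:
                    theta = 0.5 * np.arctan(2.0 * H[p, q] / (H[q, q] - H[p, p]))
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)                      # full rotation matrix, for clarity not efficiency
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                H[:] = J.T @ H @ J
        return H

    def scaled_iterate(H):
        """diag(H)^{-1/2} H diag(H)^{-1/2}; its diagonal is all ones."""
        d = 1.0 / np.sqrt(np.diag(H))
        return d[:, None] * H * d[None, :]

    def off_norm(S):
        """Frobenius norm of the off-diagonal part."""
        return np.linalg.norm(S - np.diag(np.diag(S)))

    rng = np.random.default_rng(0)
    A = rng.standard_normal((6, 6))
    H = A @ A.T + 6.0 * np.eye(6)                  # symmetric positive definite test matrix
    for k in range(5):
        print("sweep", k, "off-norm of scaled iterate:", off_norm(scaled_iterate(H)))
        jacobi_sweep(H)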

    Novel Modifications of Parallel Jacobi Algorithms

    We describe two main classes of one-sided trigonometric and hyperbolic Jacobi-type algorithms for computing eigenvalues and eigenvectors of Hermitian matrices. These types of algorithms exhibit significant advantages over many other eigenvalue algorithms. If the matrices permit, both types of algorithms compute the eigenvalues and eigenvectors with high relative accuracy. We present novel parallelization techniques for both trigonometric and hyperbolic classes of algorithms, as well as some new ideas on how pivoting in each cycle of the algorithm can improve the speed of the parallel one-sided algorithms. These parallelization approaches are applicable to both distributed-memory and shared-memory machines. The numerical testing performed indicates that the hyperbolic algorithms may be superior to the trigonometric ones, although, in theory, the latter seem more natural. Comment: Accepted for publication in Numerical Algorithms.
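    For readers unfamiliar with the one-sided formulation, the sketch below (my own serial, real-arithmetic NumPy code with no pivoting strategy or parallelism; the function name one_sided_jacobi is mine, not from the paper) applies Hestenes-style one-sided rotations to a Cholesky factor G of a symmetric positive definite H: at convergence the squared column norms of G are the eigenvalues, and the accumulated rotations give the eigenvectors.

    import numpy as np

    def one_sided_jacobi(H, sweeps=20, tol=1e-12):
        """Eigenvalues/eigenvectors of SPD H via one-sided Jacobi on a factor G, H = G^T G."""
        G = np.linalg.cholesky(H).T              # upper-triangular factor, H = G^T G
        n = G.shape[1]
        V = np.eye(n)
        for _ in range(sweeps):
            converged = True
            for i in range(n - 1):
                for j in range(i + 1, n):
                    p = G[:, i] @ G[:, j]
                    a = G[:, i] @ G[:, i]
                    b = G[:, j] @ G[:, j]
                    if abs(p) <= tol * np.sqrt(a * b):
                        continue
                    converged = False
                    # plane rotation that makes columns i and j orthogonal
                    zeta = (b - a) / (2.0 * p)
                    sgn = 1.0 if zeta >= 0 else -1.0
                    t = sgn / (abs(zeta) + np.hypot(1.0, zeta))
                    c = 1.0 / np.hypot(1.0, t)
                    s = c * t
                    R = np.array([[c, s], [-s, c]])
                    G[:, [i, j]] = G[:, [i, j]] @ R
                    V[:, [i, j]] = V[:, [i, j]] @ R   # accumulate eigenvectors
            if converged:
                break
        return np.sum(G * G, axis=0), V              # eigenvalues, eigenvectors

    H = np.array([[4.0, 1.0, 0.5],
                  [1.0, 3.0, 0.2],
                  [0.5, 0.2, 2.0]])
    lam, V = one_sided_jacobi(H)
    print(np.sort(lam))
    print(np.linalg.eigvalsh(H))                     # reference values for comparison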

    Structure of almost diagonal matrices

    Classical and recent results on almost diagonal matrices are presented. These results measure the absolute and the relative distance between the diagonal elements and the corresponding eigenvalues or singular values and, in the case of multiple eigenvalues or singular values, reveal special structure in the matrices. Simple MATLAB programs serve to illustrate how sharp the theoretical estimates are.
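    The flavor of such estimates can be reproduced in a few lines. The sketch below (my own NumPy stand-in, not the paper's MATLAB programs) perturbs a diagonal matrix by a small symmetric off-diagonal part E and compares the absolute and relative distances between the diagonal entries and the eigenvalues with the classical Weyl bound |lambda_i - d_i| <= ||E||_2.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 8
    d = np.sort(rng.uniform(1.0, 10.0, n))      # well-separated diagonal entries
    E = 1e-3 * rng.standard_normal((n, n))
    E = 0.5 * (E + E.T)                         # small symmetric perturbation
    np.fill_diagonal(E, 0.0)                    # purely off-diagonal

    A = np.diag(d) + E                          # almost diagonal matrix
    lam = np.linalg.eigvalsh(A)                 # eigenvalues in ascending order

    print("max |lambda_i - d_i|  :", np.max(np.abs(lam - d)))
    print("Weyl bound ||E||_2    :", np.linalg.norm(E, 2))
    print("max relative distance :", np.max(np.abs(lam - d) / np.abs(d)))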

    From Random Matrices to Stochastic Operators

    We propose that classical random matrix models are properly viewed as finite difference schemes for stochastic differential operators. Three particular stochastic operators commonly arise, each associated with a familiar class of local eigenvalue behavior. The stochastic Airy operator displays soft edge behavior, associated with the Airy kernel. The stochastic Bessel operator displays hard edge behavior, associated with the Bessel kernel. The article concludes with suggestions for a stochastic sine operator, which would display bulk behavior, associated with the sine kernel. Comment: 41 pages, 5 figures. Submitted to Journal of Statistical Physics. Changes in this revision: recomputed Monte Carlo simulations, added reference [19], fit into margins, performed minor editing.
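    To make the finite difference viewpoint concrete, here is a small sketch (my own discretization with one common normalization of the noise term; the constants, grid, and boundary treatment are assumptions, not taken from the article) of a stochastic Airy-type operator -d^2/dx^2 + x + (2/sqrt(beta)) W'(x) as a random tridiagonal matrix, whose lowest eigenvalues serve as a finite-dimensional stand-in for soft edge behavior.

    import numpy as np

    def stochastic_airy(n=200, L=20.0, beta=2.0, rng=None):
        """Finite difference discretization of -d^2/dx^2 + x + (2/sqrt(beta)) W'(x)."""
        rng = np.random.default_rng() if rng is None else rng
        h = L / n
        x = h * np.arange(1, n + 1)
        # second-difference approximation of -d^2/dx^2, potential x,
        # and discretized white noise h^{-1/2} * N(0, 1) on the diagonal
        main = 2.0 / h**2 + x + (2.0 / np.sqrt(beta)) * rng.standard_normal(n) / np.sqrt(h)
        off = -np.ones(n - 1) / h**2
        H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
        return np.linalg.eigvalsh(H)

    # sample the smallest eigenvalue a few times to see its fluctuations
    samples = [stochastic_airy(rng=np.random.default_rng(s))[0] for s in range(20)]
    print("mean:", np.mean(samples), "std:", np.std(samples))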

    Regularized Jacobi iteration for decentralized convex optimization with separable constraints

    We consider multi-agent, convex optimization programs subject to separable constraints, where the constraint function of each agent involves only its local decision vector, while the decision vectors of all agents are coupled via a common objective function. We focus on a regularized variant of the so-called Jacobi algorithm for decentralized computation in such problems. We first consider the case where the objective function is quadratic and provide a fixed-point theoretic analysis showing that the algorithm converges to a minimizer of the centralized problem. Moreover, we quantify the potential benefits of such an iterative scheme by comparing it against a scaled projected gradient algorithm. We then consider the general case and show that all limit points of the proposed iteration are optimal solutions of the centralized problem. The efficacy of the proposed algorithm is illustrated by applying it to the problem of optimal charging of electric vehicles, where, as opposed to earlier approaches, we show convergence to an optimal charging scheme for a finite, possibly large, number of vehicles.
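    A stripped-down instance of the quadratic case reads as follows (my own toy problem; the box constraints, the regularization weight c, and the reference projected gradient run are assumptions for illustration, not the paper's formulation): every agent owns one coordinate, updates it in parallel against the others' previous iterates with a proximal term c*(x_i - x_i^k)^2, and projects onto its local constraint set.

    import numpy as np

    rng = np.random.default_rng(0)
    m = 20                                    # number of agents
    M = rng.standard_normal((m, m))
    Q = M @ M.T + m * np.eye(m)               # convex quadratic coupling, f(x) = 0.5 x'Qx + q'x
    q = rng.standard_normal(m)
    lo, hi = -1.0, 1.0                        # separable box constraints
    c = 0.5 * np.linalg.norm(Q, 2)            # regularization weight, large enough for contraction

    x = np.zeros(m)
    for _ in range(300):
        r = Q @ x - np.diag(Q) * x + q        # coupling terms from the other agents
        x_new = (2.0 * c * x - r) / (np.diag(Q) + 2.0 * c)   # local regularized minimization
        x = np.clip(x_new, lo, hi)            # local projection onto [lo, hi]

    # reference solution from a long projected gradient run, for comparison
    y = np.zeros(m)
    step = 1.0 / np.linalg.norm(Q, 2)
    for _ in range(5000):
        y = np.clip(y - step * (Q @ y + q), lo, hi)
    print("max deviation from projected gradient solution:", np.max(np.abs(x - y)))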

    Preconditioned low-rank Riemannian optimization for linear systems with tensor product structure

    The numerical solution of partial differential equations on high-dimensional domains gives rise to computationally challenging linear systems. When standard discretization techniques are used, the size of the linear system grows exponentially with the number of dimensions, making classic iterative solvers infeasible. During the last few years, low-rank tensor approaches have been developed that mitigate this curse of dimensionality by exploiting the underlying structure of the linear operator. In this work, we focus on tensors represented in the Tucker and tensor train formats. We propose two preconditioned gradient methods on the corresponding low-rank tensor manifolds: a Riemannian version of the preconditioned Richardson method as well as an approximate Newton scheme based on the Riemannian Hessian. For the latter, considerable attention is given to the efficient solution of the resulting Newton equation. In numerical experiments, we compare the efficiency of our Riemannian algorithms with other established tensor-based approaches such as a truncated preconditioned Richardson method and the alternating linear scheme. The results show that our approximate Riemannian Newton scheme is significantly faster in cases where the application of the linear operator is expensive. Comment: 24 pages, 8 figures.
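    As a point of reference for the kind of iteration being compared against, here is a two-dimensional sketch (my own toy setup: a well-conditioned shifted diffusion operator, no preconditioner, and none of the Tucker/TT machinery, so it is much simpler than the methods in the paper) of a truncated Richardson iteration for the tensor-structured equation A1 X + X A2 = B, where the iterate is compressed back to low rank by an SVD after every step.

    import numpy as np

    def shifted_diffusion(n, alpha=0.1):
        """I + alpha * tridiag(-1, 2, -1): a well-conditioned model operator."""
        T = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        return np.eye(n) + alpha * T

    def truncate(X, rank):
        """Best rank-`rank` approximation of X via the SVD."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return (U[:, :rank] * s[:rank]) @ Vt[:rank]

    n, rank = 64, 8
    A1 = A2 = shifted_diffusion(n)
    B = np.outer(np.sin(np.linspace(0.0, np.pi, n)), np.ones(n))  # rank-1 right-hand side

    e1, e2 = np.linalg.eigvalsh(A1), np.linalg.eigvalsh(A2)
    omega = 2.0 / (e1[0] + e2[0] + e1[-1] + e2[-1])               # classical Richardson step size

    X = np.zeros((n, n))
    for _ in range(60):
        R = B - A1 @ X - X @ A2                                   # residual of the matrix equation
        X = truncate(X + omega * R, rank)                         # Richardson step, then rank truncation

    print("relative residual:", np.linalg.norm(B - A1 @ X - X @ A2) / np.linalg.norm(B))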