
    Preconditioned Lanczos Methods for the Minimum Eigenvalue of a Symmetric Positive Definite Toeplitz Matrix

    In this paper, we apply the preconditioned Lanczos (PL) method to compute the minimum eigenvalue of a symmetric positive definite Toeplitz matrix. The sine transform-based preconditioner is used to speed up the convergence rate of the PL method. The resulting method involves only Toeplitz and sine transform matrix-vector multiplications and hence can be computed efficiently by fast transform algorithms. We show that if the symmetric Toeplitz matrix is generated by a positive 2π-periodic even continuous function, then the PL method will converge sufficiently fast. Numerical results including Toeplitz and non-Toeplitz matrices are reported to illustrate the effectiveness of the method.
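    The key kernel above is a fast matrix-vector product with a symmetric Toeplitz matrix. The sketch below (my own illustration, not code from the paper) shows the standard circulant-embedding trick in NumPy and feeds it to SciPy's Lanczos-type solver eigsh; the test symbol f(θ) = 1 + θ² and the omission of the sine-transform preconditioner are simplifications of mine, so this reproduces only an unpreconditioned baseline.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, eigsh

    def toeplitz_matvec(t, x):
        # Symmetric Toeplitz matrix (first column t) times x in O(n log n):
        # embed T in a 2n-by-2n circulant and multiply via the FFT.
        n = len(t)
        c = np.concatenate([t, [0.0], t[-1:0:-1]])   # first column of the circulant
        y = np.concatenate([x, np.zeros(n)])
        return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(y))[:n])

    # Toeplitz matrix generated by the positive, even, 2π-periodic symbol f(θ) = 1 + θ² on [-π, π]
    n = 512
    t = np.array([1.0 + np.pi**2 / 3] + [2.0 * (-1) ** k / k**2 for k in range(1, n)])

    T = LinearOperator((n, n), matvec=lambda x: toeplitz_matvec(t, x), dtype=float)
    lam_min = eigsh(T, k=1, which='SA', return_eigenvectors=False)[0]
    print("approximate minimum eigenvalue:", lam_min)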

    Verified partial eigenvalue computations using contour integrals for Hermitian generalized eigenproblems

    We propose a verified computation method for partial eigenvalues of a Hermitian generalized eigenproblem. The block Sakurai-Sugiura Hankel method, a contour integral-type eigensolver, can reduce a given eigenproblem into a generalized eigenproblem of block Hankel matrices whose entries consist of complex moments. In this study, we evaluate all errors in computing the complex moments. We derive a truncation error bound of the quadrature. Then, we take numerical errors of the quadrature into account and rigorously enclose the entries of the block Hankel matrices. Each quadrature point gives rise to a linear system, and its structure enables us to develop an efficient technique to verify the approximate solution. Numerical experiments show that the proposed method outperforms a standard method and indicate that it is potentially efficient in parallel settings.
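    For orientation, a minimal non-blocked sketch of the Sakurai-Sugiura Hankel machinery is given below: the complex moments are approximated by the trapezoidal rule on a circle, assembled into two small Hankel matrices, and the eigenvalues inside the contour are read off a small generalized eigenproblem. The verified (rigorous) enclosure of the moments, which is this paper's contribution, is not reproduced, and the function name and parameter choices are my own.

    import numpy as np
    from scipy.linalg import eig, hankel

    def ss_hankel_eigs(A, B, center, radius, m=8, num_quad=64, seed=0):
        # Moments mu_k ≈ (1/2πi) ∮ ((z-γ)/ρ)^k u^T (zB - A)^{-1} B v dz on |z-γ| = ρ,
        # computed with the N-point trapezoidal rule; one linear solve per quadrature point.
        n = A.shape[0]
        rng = np.random.default_rng(seed)
        u, v = rng.standard_normal(n), rng.standard_normal(n)
        mu = np.zeros(2 * m, dtype=complex)
        for j in range(num_quad):
            z = center + radius * np.exp(2j * np.pi * (j + 0.5) / num_quad)
            y = np.linalg.solve(z * B - A, B @ v)
            w = (z - center) / num_quad          # quadrature weight with dz/(2πi) absorbed
            mu += w * ((z - center) / radius) ** np.arange(2 * m) * (u @ y)
        # Hankel pencil: eigenvalues inside the contour are center + radius * theta
        H0 = hankel(mu[:m], mu[m - 1:2 * m - 1])
        H1 = hankel(mu[1:m + 1], mu[m:2 * m])
        theta = eig(H1, H0, right=False)
        return center + radius * theta

    If fewer than m eigenvalues lie inside the contour, the Hankel pencil is singular and spurious values appear; practical implementations filter them, for example using the singular values of H0.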

    Accurate and Efficient Expression Evaluation and Linear Algebra

    We survey and unify recent results on the existence of accurate algorithms for evaluating multivariate polynomials, and more generally for accurate numerical linear algebra with structured matrices. By "accurate" we mean that the computed answer has relative error less than 1, i.e., has some correct leading digits. We also address efficiency, by which we mean algorithms that run in polynomial time in the size of the input. Our results will depend strongly on the model of arithmetic: Most of our results will use the so-called Traditional Model (TM). We give a set of necessary and sufficient conditions to decide whether a high accuracy algorithm exists in the TM, and describe progress toward a decision procedure that will take any problem and provide either a high accuracy algorithm or a proof that none exists. When no accurate algorithm exists in the TM, it is natural to extend the set of available accurate operations by a library of additional operations, such as x+y+z, dot products, or indeed any enumerable set, which could then be used to build further accurate algorithms. We show how our accurate algorithms and decision procedure for finding them extend to this case. Finally, we address other models of arithmetic, and the relationship between (im)possibility in the TM and (in)efficient algorithms operating on numbers represented as bit strings.
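    As a concrete instance of the "additional accurate operations" mentioned above (an exact x+y+z, accurate dot products), here is a standard error-free-transformation sketch; it is a generic textbook construction, not code from the survey.

    def two_sum(a, b):
        # Error-free transformation (Knuth): s = fl(a + b) and a + b = s + e exactly.
        s = a + b
        z = s - a
        e = (a - (s - z)) + (b - z)
        return s, e

    def accurate_sum(xs):
        # Compensated summation: rounding errors are carried along separately, so the
        # result keeps correct leading digits even under severe cancellation.
        s, comp = 0.0, 0.0
        for x in xs:
            s, e = two_sum(s, x)
            comp += e
        return s + comp

    print(accurate_sum([1e16, 1.0, -1e16]))   # prints 1.0; the plain left-to-right sum gives 0.0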

    A modified T. Chan’s preconditioner for Toeplitz systems

    In this paper, we present a modified T. Chan’s preconditioner for solving Toeplitz linear systems by the preconditioned conjugate gradient (PCG) method. In particular, we give some results for the case where the matrices are Hermitian positive definite Toeplitz matrices. The operation cost and convergence of the PCG method are discussed. Numerical examples are presented to illustrate the effectiveness of the obtained preconditioner.
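    For reference, the unmodified T. Chan preconditioner for a symmetric Toeplitz matrix with first column t has first column c_j = ((n - j) t_j + j t_{n-j}) / n, and both its construction and its application inside PCG cost O(n log n) via the FFT. The sketch below uses a hypothetical Kac-Murdock-Szegő test matrix and does not reproduce this paper's modification.

    import numpy as np
    from scipy.linalg import toeplitz
    from scipy.sparse.linalg import cg, LinearOperator

    def chan_eigenvalues(t):
        # Eigenvalues (via the DFT) of T. Chan's circulant preconditioner for the
        # symmetric Toeplitz matrix with first column t: c_j = ((n-j) t_j + j t_{n-j}) / n.
        n = len(t)
        j = np.arange(n)
        c = ((n - j) * t + j * np.r_[0.0, t[:0:-1]]) / n
        return np.fft.fft(c)

    n = 256
    t = 0.5 ** np.arange(n)                 # Kac-Murdock-Szegő matrix: SPD Toeplitz test case
    T = toeplitz(t)
    b = np.ones(n)

    lam = chan_eigenvalues(t)
    M = LinearOperator((n, n), matvec=lambda r: np.real(np.fft.ifft(np.fft.fft(r) / lam)))

    x, info = cg(T, b, M=M)                 # PCG with the circulant preconditioner
    print(info, np.linalg.norm(T @ x - b))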

    An improved Newton iteration for the generalized inverse of a matrix, with applications

    The purpose here is to clarify and illustrate the potential for the use of variants of Newton's method for solving problems of practical interest on highly parallel computers. The authors show how to accelerate the method substantially and how to modify it successfully to cope with ill-conditioned matrices. The authors conclude that Newton's method can be of value for some interesting computations, especially in parallel and other computing environments in which matrix products are especially easy to work with.
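    The basic recurrence behind this line of work is the Newton-Schulz iteration X_{k+1} = X_k (2I - A X_k), which needs nothing but matrix products. A minimal, unaccelerated sketch follows (the paper's acceleration and its treatment of ill-conditioned matrices are not reproduced):

    import numpy as np

    def newton_pinv(A, iters=40):
        # Newton(-Schulz) iteration for the Moore-Penrose inverse. With the safe
        # starting guess X_0 = A^T / (||A||_1 ||A||_inf) the iteration converges,
        # quadratically once X_k is close, using only matrix multiplications.
        X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
        I = np.eye(A.shape[0])
        for _ in range(iters):
            X = X @ (2.0 * I - A @ X)
        return X

    rng = np.random.default_rng(0)
    A = rng.standard_normal((60, 40))                            # full column rank with probability 1
    print(np.linalg.norm(newton_pinv(A) - np.linalg.pinv(A)))    # close to machine precision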

    The geometric mean algorithm

    Bisection (of a real interval) is a well-known algorithm to compute eigenvalues of symmetric matrices. Given an initial interval [a,b], convergence to an eigenvalue whose size is much smaller than a or b can be made considerably faster if one replaces the usual arithmetic mean (of the end points of the current interval) with the geometric mean. Exploring this idea, we have implemented geometric bisection in a Matlab code. We illustrate the effectiveness of our algorithm in the context of the computation of the eigenvalues of a symmetric tridiagonal matrix which has a very large condition number.
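    A minimal sketch of the idea (my own illustration, not the paper's Matlab code): bisection on a symmetric tridiagonal matrix driven by the Sturm/inertia count, with the arithmetic midpoint replaced by sqrt(lo*hi) whenever the current bracket is positive.

    import numpy as np

    def count_below(a, b, mu):
        # Number of eigenvalues of the symmetric tridiagonal matrix (diagonal a,
        # off-diagonal b) that are smaller than mu, via the LDL^T / Sturm count.
        count, d = 0, 1.0
        for i in range(len(a)):
            d = (a[i] - mu) - (b[i - 1] ** 2 / d if i > 0 else 0.0)
            if d == 0.0:
                d = -np.finfo(float).eps        # crude safeguard against exact breakdown
            if d < 0.0:
                count += 1
        return count

    def bisect_kth(a, b, k, lo, hi, tol=1e-14, geometric=True):
        # Geometric bisection: for a positive bracket, the midpoint sqrt(lo*hi)
        # reaches a tiny eigenvalue in far fewer steps than (lo + hi) / 2.
        while hi - lo > tol * max(abs(lo), abs(hi)):
            mid = np.sqrt(lo * hi) if geometric and lo > 0.0 else 0.5 * (lo + hi)
            if mid <= lo or mid >= hi:          # midpoint no longer representable
                break
            if count_below(a, b, mid) >= k:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    n = 2000                                    # 1D Laplacian: condition number ~ 4(n+1)²/π²
    a, b = np.full(n, 2.0), np.full(n - 1, -1.0)
    print(bisect_kth(a, b, k=1, lo=1e-12, hi=4.0),
          4.0 * np.sin(np.pi / (2 * (n + 1))) ** 2)   # exact smallest eigenvalue, for comparison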

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions.
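    To make the tensor-train format concrete, here is a plain TT-SVD sketch (a standard construction, not code from the monograph); the helper names and the 4-way test tensor are illustrative assumptions.

    import numpy as np

    def tt_svd(tensor, rel_eps=1e-10):
        # TT-SVD: sweep over the modes, reshape, truncate an SVD, and keep the left
        # factor as the next core G_k of shape (r_{k-1}, n_k, r_k), with r_0 = r_d = 1.
        dims, d = tensor.shape, tensor.ndim
        delta = rel_eps * np.linalg.norm(tensor) / np.sqrt(max(d - 1, 1))
        cores, r_prev = [], 1
        C = tensor.reshape(r_prev * dims[0], -1)
        for k in range(d - 1):
            U, s, Vt = np.linalg.svd(C, full_matrices=False)
            tail = np.cumsum(s[::-1] ** 2)[::-1]          # energy discarded if we cut at rank i
            r = max(1, int(np.sum(tail > delta ** 2)))
            cores.append(U[:, :r].reshape(r_prev, dims[k], r))
            C = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
            r_prev = r
        cores.append(C.reshape(r_prev, dims[-1], 1))
        return cores

    def tt_to_full(cores):
        # Contract the cores back into the full tensor (only for checking the sketch).
        full = cores[0]
        for G in cores[1:]:
            full = np.tensordot(full, G, axes=([-1], [0]))
        return full.reshape(full.shape[1:-1])

    x = np.linspace(0.0, 1.0, 10)
    T = np.sin(np.add.outer(np.add.outer(x, x), np.add.outer(x, x)))   # sin(x1+x2+x3+x4): TT-ranks 2
    cores = tt_svd(T, rel_eps=1e-12)
    print([G.shape for G in cores],
          np.linalg.norm(tt_to_full(cores) - T) / np.linalg.norm(T))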