
    A short note on a generalization of the Givens transformation

    A new transformation, a generalization of the Givens rotation, is introduced here. Its properties are studied. This transformation has some free parameters, which can be chosen to attain pre-established conditions. Some special choices of those parameters are discussed, mainly to improve numerical properties of the transformation. © 2013 Elsevier Ltd. All rights reserved.
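
    For context, the classical Givens rotation that this note generalizes annihilates a single entry of a vector by a plane rotation. A minimal sketch of the standard (non-generalized) construction, assuming NumPy; the function name `givens` is illustrative, and the paper's extra free parameters are not reproduced here.

    ```python
    import numpy as np

    def givens(a, b):
        """Return (c, s) with c = a/r, s = b/r, r = hypot(a, b), so that
        [[c, s], [-s, c]] @ [a, b] = [r, 0] (standard Givens rotation;
        the paper's generalization adds free parameters not shown here)."""
        r = np.hypot(a, b)          # robust sqrt(a**2 + b**2)
        if r == 0.0:
            return 1.0, 0.0
        return a / r, b / r

    # Zero the (1, 0) entry of a small test matrix with a plane rotation.
    A = np.array([[4.0, 1.0],
                  [3.0, 2.0]])
    c, s = givens(A[0, 0], A[1, 0])
    G = np.array([[c, s], [-s, c]])
    print(np.round(G @ A, 12))      # first column becomes [5, 0]
    ```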

    Bayesian computations and efficient algorithms for computing functions of large, sparse matrices

    The need for computing functions of large, sparse matrices arises in Bayesian spatial models, where computations using Gaussian Markov random fields require the evaluation of G^{-1} and G^{-1/2} for the precision matrix G, and in the geostatistical approach, where approximations of R^{-1} and R^{1/2} are needed for the covariance matrix R. In both cases, good approximations to the desired matrix functions are required over a range of probable values of a vector v drawn randomly from a given population, as occurs in simulation techniques for finding posterior distributions such as Markov chain Monte Carlo. Consequently, it is preferable that the complete matrix function approximation be determined, rather than only its action on a given v. The aim of this work is to find low-degree polynomial approximations p(A) such that e = ||f(A) - p(A)||_2 is small in some sense on the spectral interval [a, b], where the extreme eigenvalues a and b are calculated using Krylov subspace approximation. Algorithms based on low-order near-minimax polynomial approximations are proposed for the required matrix functions for a typical case study in computational Bayesian statistics, where a good balance between accuracy and computational efficiency is achieved.
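
    A minimal sketch of the general idea, assuming NumPy/SciPy: estimate the spectral interval [a, b] with a Krylov eigensolver, build a Chebyshev interpolant of f on [a, b] (a near-minimax polynomial), and apply p(A) to a vector through the Chebyshev three-term recurrence. The function name `cheb_matfun_apply` and the test matrix are illustrative, not from the paper; the key point, matching the abstract, is that p depends only on [a, b] and f, so it can be reused for many random vectors v.

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C
    from scipy.sparse.linalg import eigsh

    def cheb_matfun_apply(A, v, f, degree=12):
        """Apply a Chebyshev-interpolant polynomial p(A) to v, where p
        approximates f on the spectral interval [a, b] of the symmetric
        matrix A.  Sketch only, not the paper's algorithm."""
        # Extreme eigenvalues via a Krylov (ARPACK) eigensolver.
        a = eigsh(A, k=1, which='SA', return_eigenvectors=False)[0]
        b = eigsh(A, k=1, which='LA', return_eigenvectors=False)[0]

        # Chebyshev interpolant of f on [a, b]: close to the minimax polynomial.
        p = C.Chebyshev.interpolate(f, degree, domain=[a, b])

        # Evaluate p(A) v through the Chebyshev three-term recurrence,
        # with the affine map y = (x - beta) / alpha from [a, b] to [-1, 1].
        alpha, beta = (b - a) / 2.0, (b + a) / 2.0
        T_prev, T_curr = v, (A @ v - beta * v) / alpha
        result = p.coef[0] * T_prev + p.coef[1] * T_curr
        for c in p.coef[2:]:
            T_next = 2.0 * (A @ T_curr - beta * T_curr) / alpha - T_prev
            result += c * T_next
            T_prev, T_curr = T_curr, T_next
        return result

    # Example: p(G) v ≈ G^{-1/2} v for a synthetic SPD "precision" matrix G.
    rng = np.random.default_rng(0)
    M = rng.standard_normal((200, 200))
    G = M @ M.T + 200.0 * np.eye(200)
    v = rng.standard_normal(200)
    approx = cheb_matfun_apply(G, v, lambda x: 1.0 / np.sqrt(x), degree=20)
    ```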

    Computing and deflating eigenvalues while solving multiple right hand side linear systems in Quantum Chromodynamics

    We present a new algorithm that computes eigenvalues and eigenvectors of a Hermitian positive definite matrix while solving a linear system of equations with Conjugate Gradient (CG). Traditionally, all the CG iteration vectors could be saved and recombined through the eigenvectors of the tridiagonal projection matrix, which is theoretically equivalent to unrestarted Lanczos. Our algorithm capitalizes on the iteration vectors produced by CG to update only a small window of vectors that approximate the eigenvectors. While this window is restarted in a locally optimal way, the CG algorithm for the linear system is unaffected. Yet, in all our experiments, this small window converges to the required eigenvectors at a rate identical to unrestarted Lanczos. After the solution of the linear system, eigenvectors that have not accurately converged can be improved in an incremental fashion by solving additional linear systems. In this case, eigenvectors identified in earlier systems can be used to deflate, and thus accelerate, the convergence of subsequent systems. We have used this algorithm with excellent results in lattice QCD applications, where hundreds of right hand sides may be needed. Specifically, about 70 eigenvectors are obtained to full accuracy after solving 24 right hand sides. Deflating these from the large number of subsequent right hand sides removes the dreaded critical slowdown, where the conditioning of the matrix increases as the quark mass reaches a critical value. Our experiments show an almost constant number of iterations for our method, regardless of quark mass, and speedups of 8 over the original CG for light quark masses.
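
    To illustrate the deflation step described above, here is a minimal sketch, assuming NumPy, of how approximate eigenvectors computed while solving earlier systems can warm-start CG on a later right-hand side. It uses a simple Galerkin-projection initial guess; the paper's algorithm additionally updates a small window of eigenvector approximations during the CG iteration, which is not shown here, and the name `deflated_cg` is illustrative.

    ```python
    import numpy as np

    def deflated_cg(A, b, V, tol=1e-10, maxiter=1000):
        """Plain CG for the SPD system A x = b, warm-started by a Galerkin
        projection onto previously computed approximate eigenvectors V
        (one per column).  Simplified deflation sketch only."""
        # Deflated initial guess: x0 = V (V^T A V)^{-1} V^T b
        x = V @ np.linalg.solve(V.T @ (A @ V), V.T @ b)

        r = b - A @ x
        p = r.copy()
        rs_old = r @ r
        for _ in range(maxiter):
            Ap = A @ p
            alpha = rs_old / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) <= tol * np.linalg.norm(b):
                break
            p = r + (rs_new / rs_old) * p
            rs_old = rs_new
        return x
    ```

    The better V spans the eigenvectors belonging to the smallest eigenvalues, the smaller the effective condition number seen by CG, which is the mechanism behind the constant iteration counts reported in the abstract.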

    Thick-Restart Lanczos Method for Electronic Structure Calculations
