
    An indefinite variant of LOBPCG for definite matrix pencils

    In this paper, we propose a novel preconditioned solver for generalized Hermitian eigenvalue problems. More specifically, we address the case of a definite matrix pencil A − λB, that is, A, B are Hermitian and there is a shift λ_0 such that A − λ_0 B is definite. Our new method can be seen as a variant of the popular LOBPCG method operating in an indefinite inner product. It also turns out to be a generalization of the recently proposed LOBP4DCG method by Bai and Li for solving product eigenvalue problems. Several numerical experiments demonstrate the effectiveness of our method for addressing certain product and quadratic eigenvalue problems.
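    For reference, a definite pencil can have an indefinite B as long as some shift λ_0 makes A − λ_0 B positive definite. The following minimal NumPy sketch, with hypothetical diagonal matrices and a hypothetical helper is_definitizing, checks a candidate shift by attempting a Cholesky factorization; it illustrates the definition above, not the proposed solver.

```python
import numpy as np

# Hypothetical 3x3 Hermitian pair: B is indefinite, yet A - lam0*B is
# positive definite for lam0 = 0, so (A, B) is a definite pencil.
A = np.diag([2.0, 3.0, 4.0])
B = np.diag([1.0, -1.0, 0.5])

def is_definitizing(A, B, lam0):
    """Return True if A - lam0*B is positive definite (Cholesky succeeds)."""
    try:
        np.linalg.cholesky(A - lam0 * B)
        return True
    except np.linalg.LinAlgError:
        return False

print(is_definitizing(A, B, 0.0))   # True: 0 lies inside the definiteness interval (-3, 2)
print(is_definitizing(A, B, 10.0))  # False: 10 lies outside it
```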

    Convergence Analysis of Extended LOBPCG for Computing Extreme Eigenvalues

    This paper is concerned with the convergence analysis of an extended variation of the locally optimal block preconditioned conjugate gradient (LOBPCG) method for the extreme eigenvalue of a Hermitian matrix polynomial that admits an extended form of the Rayleigh quotient. This work is a generalization of the analysis by Ovtchinnikov (SIAM J. Numer. Anal., 46(5):2567-2592, 2008). As instances, the algorithms for definite matrix pairs and hyperbolic quadratic matrix polynomials are shown to be globally convergent and to have an asymptotic local convergence rate. Also, numerical examples are given to illustrate the convergence.
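    For orientation, the (non-extended) Rayleigh quotient of a Hermitian pair (A, B), whose extended form underlies the analysis above, and its gradient, which drives LOBPCG-type iterations, can be written as follows; this is a standard identity, not a result specific to the paper.

```latex
% Standard Rayleigh quotient of a Hermitian pair (A, B) with x^H B x \neq 0:
\[
  \rho(x) = \frac{x^{H} A x}{x^{H} B x},
  \qquad
  \nabla \rho(x) = \frac{2}{x^{H} B x}\,\bigl(A x - \rho(x)\, B x\bigr),
\]
% so a preconditioned residual T\,(A x - \rho(x) B x) is the natural search
% direction, and stationarity of \rho recovers the eigenvalue equation
% A x = \rho(x) B x.
```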

    Restarted Q-Arnoldi-type methods exploiting symmetry in quadratic eigenvalue problems

    The final publication is available at Springer via http://dx.doi.org/10.1007/s10543-016-0601-5. We investigate how to adapt the Q-Arnoldi method to the case of symmetric quadratic eigenvalue problems, that is, we are interested in computing a few eigenpairs of (λ²M + λC + K)x = 0 with M, C, K symmetric matrices. This problem has no particular structure, in the sense that eigenvalues can be complex or even defective. Still, the symmetry of the matrices can be exploited to some extent. For this, we perform a symmetric linearization A − λB, where A, B are symmetric matrices but the pair (A, B) is indefinite and hence standard Lanczos methods are not applicable. We implement a symmetric-indefinite Lanczos method and enrich it with a thick-restart technique. This method uses pseudo inner products induced by the matrix B for the orthogonalization of vectors (indefinite Gram-Schmidt). The projected problem is also an indefinite matrix pair. The next step is to write a specialized, memory-efficient version that exploits the block structure of A and B, referring only to the original problem matrices M, C, K, as in the Q-Arnoldi method. This results in what we have called the Q-Lanczos method. Furthermore, we define a stabilized variant analogous to the TOAR method. We show results obtained with parallel implementations in SLEPc. This work was supported by the Spanish Ministry of Economy and Competitiveness under Grant TIN2013-41049-P. Carmen Campos was supported by the Spanish Ministry of Education, Culture and Sport through an FPU Grant with reference AP2012-0608.
    Campos, C.; Román Moltó, J. E. (2016). Restarted Q-Arnoldi-type methods exploiting symmetry in quadratic eigenvalue problems. BIT Numerical Mathematics 56(4):1213-1236. https://doi.org/10.1007/s10543-016-0601-5
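    To illustrate the kind of symmetric but indefinite linearization referred to above, the following NumPy/SciPy sketch builds one standard symmetric linearization of (λ²M + λC + K)x = 0 for small hypothetical M, C, K and checks a computed eigenpair against the quadratic residual; it uses a dense solver for verification only and is not the memory-efficient Q-Lanczos implementation described in the paper.

```python
import numpy as np
from scipy.linalg import eig

# Small hypothetical symmetric M, C, K for (lambda^2 M + lambda C + K) x = 0.
n = 3
rng = np.random.default_rng(0)
M = np.eye(n)
C = rng.standard_normal((n, n))
C = C + C.T
K = rng.standard_normal((n, n))
K = K + K.T

# One standard symmetric linearization A - lambda*B:
#   A = [[-K, 0], [0, M]],  B = [[C, M], [M, 0]],  eigenvectors y = [x; lambda*x].
Z = np.zeros((n, n))
A = np.block([[-K, Z], [Z, M]])
B = np.block([[C, M], [M, Z]])
# A and B are symmetric, but the pair (A, B) is indefinite in general,
# which is why a pseudo (B-)inner-product Lanczos process is needed.

lam, Y = eig(A, B)                      # dense solve, for checking only
i = np.argmin(np.abs(lam))              # pick one eigenvalue (all finite here, M = I)
x = Y[:n, i]                            # first block of y = [x; lambda*x]
res = (lam[i]**2 * M + lam[i] * C + K) @ x
print(np.linalg.norm(res) / np.linalg.norm(x))  # quadratic residual, ~ 0
```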

    A parallel implementation of Davidson methods for large-scale eigenvalue problems in SLEPc

    In the context of large-scale eigenvalue problems, methods of Davidson type, such as Jacobi-Davidson, can be competitive with other types of algorithms, especially in some particularly difficult situations such as computing interior eigenvalues or when matrix factorization is prohibitive or highly inefficient. However, these types of methods are not generally available in the form of high-quality parallel implementations, especially for the case of non-Hermitian eigenproblems. We present our implementation of various Davidson-type methods in SLEPc, the Scalable Library for Eigenvalue Problem Computations. The solvers incorporate many algorithmic variants for subspace expansion and extraction, and cover a wide range of eigenproblems including standard and generalized, Hermitian and non-Hermitian, with either real or complex arithmetic. We provide performance results on a large battery of test problems. This work was supported by the Spanish Ministerio de Ciencia e Innovacion under project TIN2009-07519. Author's addresses: E. Romero, Institut I3M, Universitat Politecnica de Valencia, Cami de Vera s/n, 46022 Valencia, Spain, and J. E. Roman, Departament de Sistemes Informatics i Computacio, Universitat Politecnica de Valencia, Cami de Vera s/n, 46022 Valencia, Spain; email: [email protected].
    Romero Alcalde, E.; Román Moltó, J. E. (2014). A parallel implementation of Davidson methods for large-scale eigenvalue problems in SLEPc. ACM Transactions on Mathematical Software 40(2):13:1-13:29. https://doi.org/10.1145/2543696
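    As a rough illustration of the Davidson-type expansion and extraction loop described above, here is a minimal dense sketch for a symmetric standard eigenproblem with a Jacobi (diagonal) preconditioner; the function names and test matrix are hypothetical, and this is not the SLEPc implementation.

```python
import numpy as np

def davidson_smallest(A, prec, v0, tol=1e-8, max_iter=60):
    """Minimal Davidson sketch for the smallest eigenvalue of a symmetric A.
    `prec(r, theta)` returns an expansion vector from the residual r."""
    V = (v0 / np.linalg.norm(v0))[:, None]
    for _ in range(max_iter):
        H = V.T @ A @ V                      # Rayleigh-Ritz projection
        w, S = np.linalg.eigh(H)
        theta, s = w[0], S[:, 0]             # smallest Ritz pair
        x = V @ s
        r = A @ x - theta * x                # eigenresidual
        if np.linalg.norm(r) < tol:
            break
        t = prec(r, theta)                   # preconditioned expansion vector
        t -= V @ (V.T @ t)                   # orthogonalize against current basis
        nt = np.linalg.norm(t)
        if nt < 1e-12:                       # breakdown guard
            break
        V = np.column_stack([V, t / nt])     # expand the search subspace
    return theta, x

# Hypothetical diagonally dominant test matrix with a Jacobi preconditioner.
n = 200
rng = np.random.default_rng(1)
A = np.diag(np.arange(1.0, n + 1)) + 1e-2 * rng.standard_normal((n, n))
A = (A + A.T) / 2
diagA = np.diag(A).copy()

def jacobi_prec(r, theta):
    d = diagA - theta
    d[np.abs(d) < 1e-8] = 1e-8               # avoid division by ~0
    return r / d

theta, x = davidson_smallest(A, jacobi_prec, rng.standard_normal(n))
print(theta, np.linalg.eigvalsh(A)[0])       # Davidson estimate vs. exact smallest
```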

    Computing interior eigenvalues and corresponding eigenvectors of definite matrix pairs

    In the first part of this dissertation we present new algorithms that, for a given Hermitian matrix pair (A, B), test whether it is positive definite, in the sense that there exists a real number λ_0 such that the matrix A − λ_0 B is positive definite. The set of all such λ_0 forms an open interval that we call the definiteness interval, and any such λ_0 is called a definitizing shift. The simplest detection algorithms we propose are based on testing principal submatrices of order 1 or 2. We also develop a more efficient subspace detection algorithm under the assumption that the matrix B is indefinite. That algorithm is based on iterative testing of small dense compressed pairs formed using test subspaces of small dimension, and we also propose an acceleration of the algorithm. The subspace detection algorithm is particularly suitable for large, sparse, banded matrix pairs and can be applied to testing hyperbolicity of a quadratic eigenvalue problem. In the second part of this dissertation, for a given positive definite matrix pair (A, B) of order n with an indefinite matrix B, we construct new algorithms for minimizing the trace of the function f(X) = X^H A X subject to X^H B X = diag(I_{k_+}, -I_{k_-}), where X ∈ C^{n×(k_+ + k_-)}, 1 ≤ k_+ ≤ n_+, 1 ≤ k_- ≤ n_-, and (n_+, n_-, n_0) is the inertia of B. We propose a general indefinite algorithm and develop efficient preconditioned gradient iterations that we call the indefinite m-scheme. Thus, for a given positive definite pair and one or two definitizing shifts (which can be obtained by the subspace detection algorithm), the indefinite m-scheme methods simultaneously compute a small number of interior eigenvalues around the definiteness interval and the associated eigenvectors. We also give ideas for computing a small number of eigenvalues of the given positive definite matrix pair around any number within the bounds of the spectrum but outside the definiteness interval, together with the associated eigenvectors, using a positive definite preconditioning matrix. The algorithms are particularly suitable for large sparse matrix pairs. Through a series of numerical experiments we demonstrate the efficiency of the detection algorithms and of the algorithms for computing interior eigenvalues and associated eigenvectors. The efficiency of our methods is compared with some existing methods.
    The generalized eigenvalue problem (GEP) for given matrices A, B ∈ C^{n×n} is to find scalars λ and nonzero vectors x ∈ C^n such that Ax = λBx (1). The pair (λ, x) is called an eigenpair, λ is an eigenvalue, and x the corresponding eigenvector. The GEP (1) where A and B are both Hermitian, or real symmetric, occurs in many applications of mathematics. A very important case is when B (and A) is positive definite (appearing, e.g., in the finite element discretization of self-adjoint and elliptic PDE eigenvalue problems [25]). Another very important case is when B (and A) is indefinite but the matrix pair (A, B) is definite, meaning that there exist real numbers α, β such that the matrix αA + βB is positive definite (appearing, e.g., in mechanics [83] and computational quantum chemistry [4]). Many theoretical properties (variational principles, perturbation theory, etc.) and eigenvalue solvers for Hermitian matrices extend to definite matrix pairs [64, 79, 83]. A Hermitian matrix pair (A, B) is called positive (negative) definite if there exists a real λ_0 such that A − λ_0 B is positive (negative) definite. The set of all such λ_0 is an open interval called the definiteness interval [83], and any such λ_0 will be called a definitizing shift. In the first part of this thesis we propose new algorithms for detecting definite Hermitian matrix pairs (A, B). The simplest algorithms we propose are based on testing the main submatrices of order 1 or 2. These algorithms do not always give a final answer about the (in)definiteness of the given pair, so we develop a more efficient subspace algorithm assuming B is indefinite. Our subspace algorithm for detecting definiteness is based on iterative testing of small full compressed matrix pairs formed using test subspaces of small dimensions. It is a generalization of the method of coordinate relaxation proposed in [36, Section 3.6]. We also propose an acceleration of the subspace algorithm in which certain linear systems must be solved in every, or in some, iteration steps. If the matrix pair is definite, the subspace algorithm detects whether it is positive or negative definite and returns one definitizing shift. The subspace algorithm is particularly suited for large, sparse, and banded matrix pairs, and can be used in testing hyperbolicity of a Hermitian quadratic matrix polynomial. Numerical experiments are given which illustrate the efficiency of several variants of our subspace algorithm, and a comparison is made with an arc algorithm [19, 17, 29]. In the second part of this thesis we are interested in solving the partial positive definite GEP (1) where B (and A) is indefinite (both A and B can be singular). Specifically, we are interested in iterative algorithms which compute a small number of eigenvalues closest to the definiteness interval and the corresponding eigenvectors. These algorithms are based on the trace minimization property [41, 49]: find the minimum of the trace of the function f(X) = X^H A X subject to X^H B X = diag(I_{k_+}, -I_{k_-}), where X ∈ C^{n×(k_+ + k_-)}, 1 ≤ k_+ ≤ n_+, 1 ≤ k_- ≤ n_-, and (n_+, n_-, n_0) is the inertia of B. The class of algorithms we propose consists of preconditioned gradient-type iterations, suitable for large and sparse matrices, previously studied for the case where A and/or B are known to be positive definite (for a survey of preconditioned iterations see [3, 39]). In the recent paper [42], indefinite variants of the LOBPCG algorithm [40] were suggested. The authors of [42] were not aware of any other preconditioned eigenvalue solver tailored to definite matrix pairs with indefinite matrices. In this thesis we propose some new preconditioned eigenvalue solvers suitable for this case, which include truncated and extended versions of the indefinite LOBPCG from [42]. Our algorithms use one or two definitizing shifts. For the truncated versions of indefinite LOBPCG, which we call indefinite BPSD/A, we derive sharp convergence estimates. Since there are still no sharp convergence estimates for LOBPCG-type algorithms, the estimates derived for BPSD/A-type methods serve as upper (non-sharp) convergence estimates. We also devise some possibilities of using our algorithms to compute a modest number of eigenvalues around any spectral gap of a definite matrix pair (A, B). Numerical experiments are given which illustrate the efficiency and some limitations of our algorithms.
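    A minimal NumPy illustration of two notions used above, on a hypothetical diagonal pair with indefinite B: the inertia (n_+, n_-, n_0) of B, and B-normalized eigenvectors satisfying the constraint X^H B X = diag(I_{k_+}, -I_{k_-}) that appears in the trace-minimization formulation. The matrices are toy examples, not the thesis test problems.

```python
import numpy as np
from scipy.linalg import eig

# Hypothetical definite pair of order 4: B is indefinite, and lambda0 = 0 is a
# definitizing shift since A - 0*B = A is positive definite.
A = np.diag([1.0, 2.0, 5.0, 9.0])
B = np.diag([1.0, -1.0, 2.0, -0.5])

# Inertia (n_+, n_-, n_0) of B.
w = np.linalg.eigvalsh(B)
n_plus = int((w > 1e-12).sum())
n_minus = int((w < -1e-12).sum())
n_zero = int((np.abs(w) <= 1e-12).sum())
print((n_plus, n_minus, n_zero))    # (2, 2, 0)

# Eigenvalues of the definite pair are real; eigenvectors can be B-normalized
# so that X^H B X is diagonal with entries +1 or -1.
lam, X = eig(A, B)
order = np.argsort(lam.real)
lam, X = lam.real[order], X[:, order].real
for j in range(X.shape[1]):
    q = X[:, j] @ B @ X[:, j]
    X[:, j] /= np.sqrt(abs(q))      # B-normalization: |x^H B x| = 1
print(lam)                          # [-18, -2, 1, 2.5]; definiteness interval (-2, 1)
print(np.round(X.T @ B @ X, 8))     # diagonal with entries +/-1
```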

    GENERALIZATIONS OF AN INVERSE FREE KRYLOV SUBSPACE METHOD FOR THE SYMMETRIC GENERALIZED EIGENVALUE PROBLEM

    Symmetric generalized eigenvalue problems arise in many physical applications, and frequently only a few of the eigenpairs are of interest. Typically, the problems are large and sparse, and therefore traditional methods such as the QZ algorithm may not be considered. Moreover, it may be impractical to apply shift-and-invert Lanczos, a favored method for problems of this type, due to difficulties in applying the inverse of the shifted matrix. With these difficulties in mind, Golub and Ye developed an inverse-free Krylov subspace algorithm for the symmetric generalized eigenvalue problem. This method does not rely on shift-and-invert transformations for convergence acceleration; rather, a preconditioner is used. The algorithm suffers, however, in the presence of multiple or clustered eigenvalues. Also, it is only applicable to the location of extreme eigenvalues. In this work, we extend the method of Golub and Ye by developing a block generalization of their algorithm which enjoys considerably faster convergence than the usual method in the presence of multiplicities and clusters. Preconditioning techniques for the problems are discussed at length, and some insight is given into how these preconditioners accelerate the method. Finally, we discuss a transformation which can be applied so that the algorithm extracts interior eigenvalues. A preconditioner based on a QR factorization with respect to the B^{-1} inner product is developed and applied to locating interior eigenvalues.
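    The following sketch shows one unpreconditioned, single-vector step of a Golub-Ye-style inverse-free Krylov iteration on a hypothetical pair with B symmetric positive definite; it illustrates the basic scheme only, not the block or preconditioned generalizations developed in this work.

```python
import numpy as np
from scipy.linalg import eigh

def inverse_free_step(A, B, x, m=4):
    """One unpreconditioned step of a Golub-Ye-style inverse-free iteration
    for the smallest eigenvalue of A x = lambda B x (B SPD assumed)."""
    rho = (x @ A @ x) / (x @ B @ x)            # current Rayleigh quotient
    # Orthonormal basis of the Krylov subspace K_m(A - rho*B, x).
    Z = np.empty((len(x), m))
    Z[:, 0] = x / np.linalg.norm(x)
    for j in range(1, m):
        w = (A - rho * B) @ Z[:, j - 1]
        w -= Z[:, :j] @ (Z[:, :j].T @ w)       # Gram-Schmidt against previous vectors
        Z[:, j] = w / np.linalg.norm(w)
    # Rayleigh-Ritz extraction from the projected small pair.
    Am, Bm = Z.T @ A @ Z, Z.T @ B @ Z
    mu, U = eigh(Am, Bm)
    return mu[0], Z @ U[:, 0]                  # new Ritz value and approximation

# Hypothetical symmetric positive definite test pair.
rng = np.random.default_rng(3)
n = 100
Q = rng.standard_normal((n, n))
A = np.diag(np.arange(1.0, n + 1)) + 0.01 * (Q + Q.T)
B = np.diag(np.linspace(1.0, 2.0, n))
x = rng.standard_normal(n)
for _ in range(20):
    rho, x = inverse_free_step(A, B, x)
print(rho, eigh(A, B, eigvals_only=True)[0])   # iterate vs. exact smallest eigenvalue
```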
