
    Block Locally Optimal Preconditioned Eigenvalue Xolvers (BLOPEX) in hypre and PETSc

    We describe our software package Block Locally Optimal Preconditioned Eigenvalue Xolvers (BLOPEX), recently released to the public. BLOPEX is available as a stand-alone serial library, as an external package to PETSc ("Portable, Extensible Toolkit for Scientific Computation", a general-purpose suite of tools developed by Argonne National Laboratory for the scalable solution of partial differential equations and related problems), and is also built into hypre ("High Performance Preconditioners", a scalable linear solvers package developed by Lawrence Livermore National Laboratory). The present BLOPEX release includes only one solver: the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) method for symmetric eigenvalue problems. hypre provides users with advanced high-quality parallel preconditioners for linear systems, in particular domain decomposition and multigrid preconditioners. With BLOPEX, the same preconditioners can now be used efficiently for symmetric eigenvalue problems. PETSc facilitates the integration of independently developed application modules with strict attention to component interoperability, and makes BLOPEX very easy to compile and use with the preconditioners available through PETSc. We present the LOBPCG algorithm in BLOPEX for hypre and PETSc. We demonstrate numerically the scalability of BLOPEX by testing it on a number of distributed- and shared-memory parallel systems, including a Beowulf system, a SUN Fire 880, an AMD dual-core Opteron workstation, and an IBM BlueGene/L supercomputer, using PETSc domain decomposition and hypre multigrid preconditioning. We test BLOPEX on a model problem, the standard 7-point finite-difference approximation of the 3-D Laplacian, with problem sizes in the range 10^5 to 10^8. Comment: Submitted to the SIAM Journal on Scientific Computing.
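    As a concrete illustration of the model problem above, the following is a minimal sketch that runs LOBPCG on the 7-point 3-D Laplacian using SciPy's serial lobpcg implementation rather than BLOPEX itself; the grid size, the block size, and the pyamg algebraic multigrid preconditioner (standing in for the hypre/PETSc preconditioners of the paper) are illustrative assumptions.

```python
# Minimal sketch (not BLOPEX itself): LOBPCG on the 7-point 3-D Laplacian,
# using SciPy's serial implementation of the same algorithm.  Grid size,
# block size, and the pyamg AMG preconditioner are illustrative assumptions
# standing in for the hypre/PETSc preconditioners used in the paper.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg
import pyamg  # assumed available; supplies an algebraic multigrid preconditioner

n = 40                      # grid points per dimension -> matrix size n**3
I = sp.identity(n, format="csr")
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
# 3-D Laplacian via Kronecker sums (standard 7-point stencil, Dirichlet BCs)
A = (sp.kron(sp.kron(T, I), I)
     + sp.kron(sp.kron(I, T), I)
     + sp.kron(sp.kron(I, I), T)).tocsr()

M = pyamg.smoothed_aggregation_solver(A).aspreconditioner()  # AMG preconditioner

rng = np.random.default_rng(0)
X = rng.standard_normal((A.shape[0], 4))   # block of 4 starting vectors

# Smallest eigenvalues of the symmetric positive definite Laplacian
vals, vecs = lobpcg(A, X, M=M, largest=False, tol=1e-6, maxiter=200)
print(vals)
```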

    Preconditioned Spectral Clustering for Stochastic Block Partition Streaming Graph Challenge

    Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) is demonstrated to efficiently solve eigenvalue problems for the graph Laplacians that appear in spectral clustering. For static graph partitioning, 10-20 iterations of LOBPCG without preconditioning result in roughly a 10x error reduction, enough to achieve 100% correctness for all Challenge datasets with known truth partitions, e.g., for graphs with 5K/0.1M (50K/1M) vertices/edges in 2 (7) seconds, compared to over 5,000 (30,000) seconds needed by the baseline Python code. Our Python code determines with 100% correctness the 98 (160) clusters of the Challenge static graphs with 0.5M (2M) vertices in 270 (1,700) seconds using 10GB (50GB) of memory. Our single-precision MATLAB code computes the same clusters in half the time and memory. For streaming graph partitioning, LOBPCG is initialized with the approximate eigenvectors of the graph Laplacian already computed for the previous graph, in many cases reducing the number of required LOBPCG iterations by a factor of 2-3 compared to the static case. Our spectral clustering is generic, i.e., it assumes nothing specific about the block model or the streaming scheme used to generate the Challenge graphs, in contrast to the baseline code. Nevertheless, in a 10-stage streaming comparison with the baseline code for the 5K graph, the quality of our clusters is similar or better starting at stage 4 (7) for emerging edges (snowballing) streaming, while the computations are 100-1,000 times faster. Comment: 6 pages. To appear in Proceedings of the 2017 IEEE High Performance Extreme Computing Conference. Student Innovation Award, Streaming Graph Challenge: Stochastic Block Partition; see http://graphchallenge.mit.edu/champion
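    Below is a generic sketch of the approach described above, not the authors' code: compute a few of the smallest eigenvectors of the graph Laplacian with LOBPCG and cluster the rows of the resulting embedding with k-means. The random toy graph and the choice of k = 3 clusters are illustrative assumptions.

```python
# Generic sketch of LOBPCG-based spectral clustering (not the authors' code):
# a few smallest eigenvectors of the graph Laplacian, then k-means on the rows.
# The random toy graph and k = 3 are illustrative assumptions.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(1)

# Toy symmetric weighted adjacency matrix of a sparse undirected graph
n = 500
A = sp.random(n, n, density=0.02, random_state=1, format="csr")
A = A + A.T                       # symmetrize
A.setdiag(0)
A.eliminate_zeros()

deg = np.asarray(A.sum(axis=1)).ravel()
L = sp.diags(deg) - A             # unnormalized graph Laplacian

k = 3                                     # number of clusters (assumed)
X = rng.standard_normal((n, k + 1))       # block: k clusters + trivial nullspace vector
vals, vecs = lobpcg(L, X, largest=False, tol=1e-5, maxiter=50)

# Drop the (near-)constant eigenvector and cluster the remaining embedding
embedding = vecs[:, 1:]
_, labels = kmeans2(embedding, k, minit="++", seed=1)
print(np.bincount(labels))
```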

    A parallel implementation of Davidson methods for large-scale eigenvalue problems in SLEPc

    In the context of large-scale eigenvalue problems, methods of Davidson type such as Jacobi-Davidson can be competitive with other types of algorithms, especially in particularly difficult situations such as computing interior eigenvalues or when matrix factorization is prohibitive or highly inefficient. However, these methods are not generally available in the form of high-quality parallel implementations, especially for non-Hermitian eigenproblems. We present our implementation of various Davidson-type methods in SLEPc, the Scalable Library for Eigenvalue Problem Computations. The solvers incorporate many algorithmic variants for subspace expansion and extraction, and cover a wide range of eigenproblems, including standard and generalized, Hermitian and non-Hermitian, in either real or complex arithmetic. We provide performance results on a large battery of test problems. This work was supported by the Spanish Ministerio de Ciencia e Innovacion under project TIN2009-07519.
    Romero Alcalde, E.; Román Moltó, J. E. (2014). A parallel implementation of Davidson methods for large-scale eigenvalue problems in SLEPc. ACM Transactions on Mathematical Software 40(2):13:1-13:29. https://doi.org/10.1145/2543696
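    A minimal slepc4py sketch follows, showing how one of the Davidson-type solvers described here (Jacobi-Davidson; a generalized Davidson solver is also available) can be selected for a standard Hermitian eigenproblem. The 1-D Laplacian test matrix, the requested number of eigenvalues, and the solver settings are assumptions made for the example, not taken from the paper.

```python
# Minimal slepc4py sketch (illustrative, not from the paper): select the
# Jacobi-Davidson solver in SLEPc for a standard Hermitian eigenproblem.
# The 1-D Laplacian test matrix and nev=5 are assumptions for this example.
import sys
import slepc4py
slepc4py.init(sys.argv)
from petsc4py import PETSc
from slepc4py import SLEPc

n = 1000
A = PETSc.Mat().create()
A.setSizes([n, n])
A.setFromOptions()
A.setUp()
rstart, rend = A.getOwnershipRange()
for i in range(rstart, rend):          # assemble the tridiagonal 1-D Laplacian
    A.setValue(i, i, 2.0)
    if i > 0:
        A.setValue(i, i - 1, -1.0)
    if i < n - 1:
        A.setValue(i, i + 1, -1.0)
A.assemble()

eps = SLEPc.EPS().create()
eps.setOperators(A)
eps.setProblemType(SLEPc.EPS.ProblemType.HEP)        # Hermitian eigenproblem
eps.setType(SLEPc.EPS.Type.JD)                        # Jacobi-Davidson (GD also available)
eps.setWhichEigenpairs(SLEPc.EPS.Which.SMALLEST_REAL)
eps.setDimensions(nev=5)
eps.setFromOptions()
eps.solve()

for i in range(eps.getConverged()):
    print(eps.getEigenvalue(i).real)
```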

    A robust and efficient implementation of LOBPCG

    Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) is widely used to compute eigenvalues of large sparse symmetric matrices. The algorithm can suffer from numerical instability if it is not implemented with care, which is especially problematic when the number of eigenpairs to be computed is relatively large. In this paper we propose an improved basis selection strategy based on earlier work by Hetmaniuk and Lehoucq, as well as a robust, backward-stable convergence criterion, to enhance robustness. We also suggest several algorithmic optimizations that improve the performance of practical LOBPCG implementations. Numerical examples confirm that our approach consistently and significantly outperforms previous competing approaches in both stability and speed.
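    The paper's exact criterion is not reproduced here; the sketch below illustrates one common form of a backward-stable-style residual test for an approximate eigenpair, in which the residual is measured relative to the size of the operator and of the eigenvalue. The cheap 1-norm estimate of ||A|| and the tolerance are assumptions made for the example.

```python
# Hedged sketch of a backward-stable-style convergence test for an approximate
# eigenpair (lam, x) of a symmetric sparse A.  This is one common form of such
# a test, not necessarily the exact criterion proposed in the paper; the
# 1-norm estimate of ||A|| is an assumption made to keep the check cheap.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import onenormest

def is_converged(A, lam, x, tol=1e-8):
    """Accept (lam, x) if the residual is small relative to ||A|| and |lam|."""
    x = x / np.linalg.norm(x)
    r = A @ x - lam * x
    scale = onenormest(A) + abs(lam)      # backward-error style scaling
    return np.linalg.norm(r) <= tol * scale

# Tiny usage check on a diagonal example: an exact eigenpair passes the test.
A = sp.diags(np.arange(1.0, 101.0)).tocsr()
x = np.zeros(100); x[0] = 1.0
print(is_converged(A, 1.0, x))            # True
```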

    Evaluation of Directive-Based GPU Programming Models on a Block Eigensolver with Consideration of Large Sparse Matrices

    Achieving high performance and performance portability for large-scale scientific applications is a major challenge on heterogeneous computing systems such as many-core CPUs and accelerators like GPUs. In this work, we implement a widely used block eigensolver, Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG), using two popular directive-based programming models (OpenMP and OpenACC) for GPU-accelerated systems. Our work differs from existing work in that it adopts a holistic approach that optimizes the full solver performance rather than narrowing the problem to small kernels (e.g., SpMM, SpMV). Our LOBPCG GPU implementation achieves a 2.8x-4.3x speedup over an optimized CPU implementation when tested with four different input matrices; the evaluated configuration compared one Skylake CPU against one Skylake CPU plus one NVIDIA V100 GPU. Our OpenMP and OpenACC LOBPCG GPU implementations give nearly identical performance. We also consider how to create an efficient LOBPCG solver that can solve problems larger than the GPU memory capacity. To this end, we create microbenchmarks representing the two dominant kernels in LOBPCG (the inner product and the SpMM kernel) and evaluate their performance under two different programming approaches: tiling the kernels, and using Unified Memory with the original kernels. Our tiled SpMM implementation achieves a 2.9x and 48.2x speedup over the Unified Memory implementation on supercomputers with PCIe Gen3 and NVLink 2.0 CPU-to-GPU interconnects, respectively.
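    The following NumPy sketch shows the tiling idea for the tall-skinny inner product X^T Y, one of the two dominant kernels named above: rows are processed in fixed-size tiles so that only one tile would need to be resident in GPU memory at a time. The tile size and array shapes are illustrative, and the actual implementation in the paper uses OpenMP/OpenACC offload from compiled code rather than NumPy.

```python
# NumPy sketch of the tiling idea for the tall-skinny inner product X^T Y.
# Rows are processed in fixed-size tiles so that only one tile of X and Y
# would need to be resident on the GPU at a time; tile size and shapes are
# illustrative assumptions, not the paper's configuration.
import numpy as np

def tiled_inner_product(X, Y, tile_rows=100_000):
    """Compute X.T @ Y by accumulating partial products over row tiles."""
    assert X.shape[0] == Y.shape[0]
    acc = np.zeros((X.shape[1], Y.shape[1]), dtype=X.dtype)
    for start in range(0, X.shape[0], tile_rows):
        stop = start + tile_rows
        acc += X[start:stop].T @ Y[start:stop]   # per-tile partial product
    return acc

# Tiny check against the untiled product
rng = np.random.default_rng(0)
X = rng.standard_normal((250_000, 8))
Y = rng.standard_normal((250_000, 8))
assert np.allclose(tiled_inner_product(X, Y), X.T @ Y)
```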

    Parallel Implementation of the Trace Minimization Method for the Symmetric Generalized Eigenvalue Problem

    This work describes a parallel implementation of the Trace Minimization method proposed by Sameh and Wisniewski. The implementation includes several techniques proposed in a later article by Sameh, such as multiple shifts, preconditioning, and adaptive stopping of the inner linear solves, which accelerate the method and make it more robust. A Davidson-type variant has also been considered. Finally, the sequential and parallel performance of the different methods is analyzed.
    Romero Alcalde, E. (2008). Implementación Paralela del Método de Minimización de la Traza para el Problema de Valores Propios Generalizado Simétrico. http://hdl.handle.net/10251/12260
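    For reference, the constrained optimization problem that trace minimization addresses, stated here in standard form rather than taken from the thesis:

```latex
% Trace-minimization formulation of the symmetric generalized eigenvalue
% problem A x = \lambda B x (A symmetric, B symmetric positive definite):
% the minimum equals the sum of the p smallest eigenvalues, and any minimizer
% X spans the corresponding eigenvectors.
\[
  \min_{\substack{X \in \mathbb{R}^{n \times p} \\ X^{T} B X = I_{p}}}
  \operatorname{trace}\!\left(X^{T} A X\right)
  \;=\; \sum_{i=1}^{p} \lambda_{i},
  \qquad \lambda_{1} \le \lambda_{2} \le \cdots \le \lambda_{n}.
\]
```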