21 research outputs found

    Application of the Jacobi Davidson method for spectral low-rank preconditioning in computational electromagnetics problems

    We consider the numerical solution of linear systems arising from computational electromagnetics applications. For large-scale problems the solution is usually obtained iteratively with a Krylov subspace method. It is well known that for ill-conditioned problems the convergence of these methods can be very slow, or it may even be impossible to obtain a satisfactory solution. To improve the convergence a preconditioner can be used, but in some cases additional strategies are needed. In this work we study the application of spectral low-rank updates (SLRU) to a previously computed sparse approximate inverse preconditioner. The updates are based on the computation of a small subset of the eigenpairs closest to the origin, so the performance of the SLRU technique depends on the method available to compute the eigenpairs of interest. The SLRU method was first used with the implicitly restarted Arnoldi (IRA) method implemented in ARPACK. In this work we investigate the use of a Jacobi–Davidson method, in particular its JDQR variant. The results of the numerical experiments show that applying the JDQR method to obtain the spectral low-rank updates can be quite competitive with the IRA method.

    Mas Marí, J.; Cerdán Soriano, J.M.; Malla Martínez, N.; Marín Mateos-Aparicio, J. (2015). Application of the Jacobi Davidson method for spectral low-rank preconditioning in computational electromagnetics problems. Journal of the Spanish Society of Applied Mathematics 67:39–50. doi:10.1007/s40324-014-0025-6
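    The low-rank update itself is easy to sketch. The following is a minimal illustration, assuming a 1-D Laplacian as a stand-in for the electromagnetics matrices and the identity as the base preconditioner; the eigenpairs closest to the origin are computed here with shift-invert Lanczos (SciPy's `eigsh`) rather than with IRAM or JDQR as in the paper:

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Hypothetical stand-in for the electromagnetics matrices: an
    # ill-conditioned SPD 1-D Laplacian.
    n = 200
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

    # Eigenpairs closest to the origin, here via shift-invert Lanczos;
    # the paper computes them with IRAM (ARPACK) or JDQR instead.
    k = 5
    lam, U = spla.eigsh(A, k=k, sigma=0.0)

    # Spectral low-rank update of a base preconditioner (identity here):
    # P = I + U (Lambda^{-1} - I) U^T shifts the k smallest eigenvalues
    # of the preconditioned matrix to 1.
    def apply_P(x):
        return x + U @ ((1.0 / lam - 1.0) * (U.T @ x))

    P = spla.LinearOperator((n, n), matvec=apply_P)

    # Count CG iterations with and without the update.
    it = {"plain": 0, "slru": 0}
    def counter(key):
        def f(_xk):
            it[key] += 1
        return f

    b = np.ones(n)
    spla.cg(A, b, callback=counter("plain"))
    spla.cg(A, b, M=P, callback=counter("slru"))
    print(it)
    ```

    The updated operator leaves the bulk of the spectrum untouched and only repairs the few eigenvalues that stall convergence, which is why the quality of the eigensolver (IRAM vs. JDQR) is the deciding factor in practice.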

    Spectral preconditioners for the efficient numerical solution of a continuous branched transport model

    We consider the efficient solution of sequences of linear systems arising in the numerical solution of a branched transport model whose long-time solution, for specific parameter settings, is equivalent to the solution of the Monge–Kantorovich equations of optimal transport. Galerkin finite element discretization combined with explicit Euler time stepping yields a linear system to be solved at each time step, characterized by a large, sparse, very ill-conditioned symmetric positive definite (SPD) matrix. Extreme cases even prevent the convergence of the Preconditioned Conjugate Gradient (PCG) method with standard preconditioners such as an Incomplete Cholesky (IC) factorization, which cannot always be computed. We investigate several preconditioning strategies that incorporate partial approximated spectral information. We present numerical evidence that the proposed techniques are efficient in reducing the condition number of the preconditioned systems, thus decreasing the number of PCG iterations and the overall CPU time.
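    The claimed condition-number reduction is easy to verify on a toy problem. This is a small self-contained experiment with a synthetic SPD matrix (not the actual branched transport matrices), augmenting an identity base preconditioner with the lowest eigenpairs:

    ```python
    import numpy as np

    # Synthetic SPD matrix with a widely spread spectrum (hypothetical
    # data; the real matrices come from the branched transport
    # discretization).
    rng = np.random.default_rng(0)
    n = 100
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    lam = np.logspace(-8, 0, n)
    A = (Q * lam) @ Q.T            # A = Q diag(lam) Q^T

    # Augment a base (identity) preconditioner with the k lowest
    # eigenpairs: P = I + U_k (Lambda_k^{-1} - I) U_k^T.
    k = 10
    w, V = np.linalg.eigh(A)
    Uk, lk = V[:, :k], w[:k]
    P = np.eye(n) + Uk @ np.diag(1.0 / lk - 1.0) @ Uk.T

    # The k smallest eigenvalues of P A are moved to 1, so the
    # condition number drops from lam_max/lam_1 to lam_max/lam_{k+1}.
    kappa_before = np.linalg.cond(A)
    kappa_after = np.linalg.cond(P @ A)
    print(f"cond: {kappa_before:.1e} -> {kappa_after:.1e}")
    ```

    In the paper's setting the spectral information is of course only approximate and must be computed cheaply, but the mechanism is the same.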

    Parallel Matrix-free polynomial preconditioners with application to flow simulations in discrete fracture networks

    We develop a robust matrix-free, communication-avoiding, parallel, high-degree polynomial preconditioner for the Conjugate Gradient method for large and sparse symmetric positive definite linear systems. We discuss the selection of a scaling parameter aimed at avoiding unwanted clustering of eigenvalues of the preconditioned matrices at the extrema of the spectrum. We use this preconditioning framework to solve a 3×3 block system arising in the simulation of fluid flow in large-size discrete fracture networks. We apply our polynomial preconditioner to a suitable Schur complement related to this system, which cannot be explicitly computed because of its size and density. Numerical results confirm the excellent properties of the proposed preconditioner up to very high polynomial degrees. The parallel implementation achieves satisfactory scalability by taking advantage of the reduced number of scalar products, and hence of global communications.
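    A minimal sketch of the idea, assuming a Neumann-series polynomial (a simple choice, not necessarily the one used in the paper) and a 2-D Laplacian instead of the DFN Schur complement, which is never formed explicitly:

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Hypothetical SPD test problem: 2-D Laplacian.
    n = 50
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    A = sp.kronsum(T, T).tocsr()
    N = A.shape[0]

    # Matrix-free Neumann-series polynomial preconditioner
    #   M = omega * sum_{i=0}^{m} (I - omega*A)^i  ~  A^{-1},
    # applied with m matrix-vector products and no stored matrix.
    # The scaling omega = 1/lambda_max keeps the spectrum of
    # I - omega*A inside [0, 1); it plays the role of the scaling
    # parameter the paper tunes to avoid clustering at the ends
    # of the spectrum.
    lmax = spla.eigsh(A, k=1, which="LA", return_eigenvectors=False)[0]
    omega, m = 1.0 / lmax, 8

    def apply_M(x):
        t = x.copy()          # t holds (I - omega*A)^i x
        s = x.copy()          # s accumulates the partial sums
        for _ in range(m):
            t = t - omega * (A @ t)
            s = s + t
        return omega * s

    M = spla.LinearOperator((N, N), matvec=apply_M)

    it = {"plain": 0, "poly": 0}
    def counter(key):
        def f(_xk):
            it[key] += 1
        return f

    b = np.ones(N)
    spla.cg(A, b, callback=counter("plain"))
    spla.cg(A, b, M=M, callback=counter("poly"))
    print(it)
    ```

    Each preconditioned CG iteration now costs m extra matvecs but no extra scalar products, which is exactly the trade that pays off in a communication-bound parallel setting.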

    A new preconditioner update strategy for the solution of sequences of linear systems in structural mechanics: application to saddle point problems in elasticity

    Many applications in structural mechanics require the numerical solution of sequences of linear systems, typically issued from a finite element discretization of the governing equations on fine meshes. The method of Lagrange multipliers is often used to take mechanical constraints into account. The resulting matrices then exhibit a saddle point structure, and the iterative solution of such preconditioned linear systems is considered challenging. A popular strategy is to combine preconditioning and deflation to yield an efficient method. We propose an alternative that is applicable to the general case and not only to matrices with a saddle point structure. In this approach, we update an existing algebraic or application-based preconditioner using specific available information, exploiting the knowledge of an approximate invariant subspace or of matrix-vector products. The resulting preconditioner has the form of a limited-memory quasi-Newton matrix and requires a small number of linearly independent vectors. Numerical experiments performed on three large-scale applications in elasticity highlight the relevance of the new approach. We show that the proposed method outperforms the deflation method when considering sequences of linear systems with varying matrices.
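    A minimal sketch of a limited-memory preconditioner update of this type, assuming a synthetic tridiagonal sequence and eigenvectors of the first matrix in place of the subspace information saved from earlier solves (so this is an idealized illustration, not the authors' implementation):

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Two nearby SPD systems standing in for a sequence from a finite
    # element code (synthetic data, not the elasticity matrices).
    n = 300
    A1 = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    A2 = (A1 + 0.001 * sp.eye(n)).tocsc()

    # A few vectors spanning an approximate invariant subspace; in the
    # paper they come from information gathered while solving earlier
    # systems, here simply eigenvectors of A1.
    k = 10
    _, S = spla.eigsh(A1, k=k, sigma=0.0)

    # Limited-memory quasi-Newton-type preconditioner with first-level
    # preconditioner H = I:
    #   P = (I - S W S^T A)(I - A S W S^T) + S W S^T,  W = (S^T A S)^{-1}
    AS = A2 @ S
    W = np.linalg.inv(S.T @ AS)

    def apply_P(x):
        y = x - AS @ (W @ (S.T @ x))      # (I - A S W S^T) x
        z = y - S @ (W @ (AS.T @ y))      # (I - S W S^T A) y
        return z + S @ (W @ (S.T @ x))    # + S W S^T x

    P = spla.LinearOperator((n, n), matvec=apply_P)

    it = {"plain": 0, "lmp": 0}
    def counter(key):
        def f(_xk):
            it[key] += 1
        return f

    b = np.ones(n)
    spla.cg(A2, b, callback=counter("plain"))
    spla.cg(A2, b, M=P, callback=counter("lmp"))
    print(it)
    ```

    The update needs only the k stored vectors and a k×k factorization, which is what makes it attractive when the matrix changes from one system in the sequence to the next.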

    Multilinear algebra for analyzing data with multiple linkages.


    Deflation and augmentation techniques in Krylov linear solvers

    Preliminary version of the book chapter "Deflation and augmentation techniques in Krylov linear solvers" published in Developments in Parallel, Distributed, Grid and Cloud Computing for Engineering, ed. Topping, B.H.V. and Iványi, P., Saxe-Coburg Publications, Kippen, Stirlingshire, United Kingdom, ISBN 978-1-874672-62-3, pp. 249–275, 2013.

    In this paper we present deflation and augmentation techniques that have been designed to accelerate the convergence of Krylov subspace methods for the solution of linear systems of equations. We review numerical approaches both for linear systems with a non-Hermitian coefficient matrix, mainly within the Arnoldi framework, and for Hermitian positive definite problems with the conjugate gradient method.