
    Low-rank updates of balanced incomplete factorization preconditioners

    Let Ax = b be a large and sparse system of linear equations, where A is a nonsingular matrix. An approximate solution is frequently obtained by applying preconditioned iterations. Consider the matrix B = A + PQ^T, where P, Q ∈ R^{n×k} are full-rank matrices. In this work, we study the problem of updating a previously computed preconditioner for A in order to solve the updated linear system Bx = b by preconditioned iterations. In particular, we propose a method for updating a Balanced Incomplete Factorization preconditioner. The strategy is based on the computation of an approximate Inverse Sherman-Morrison decomposition for an equivalent augmented linear system. Approximation properties of the preconditioned matrix and the computational cost of the algorithm are analyzed. Moreover, numerical experiments with different types of problems show that the proposed method helps accelerate convergence.
    This work was supported by the Spanish Ministerio de Economía y Competitividad under grant MTM2014-58159-P. Cerdán Soriano, J. M.; Marín Mateos-Aparicio, J.; Mas Marí, J. (2017). Low-rank updates of balanced incomplete factorization preconditioners. Numerical Algorithms 74(2), 337-370. https://doi.org/10.1007/s11075-016-0151-6
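    As a rough illustration of the kind of low-rank preconditioner update discussed above, the sketch below reuses a factorized preconditioner of A for solves with B = A + PQ^T via the classical Sherman-Morrison-Woodbury identity. This is a generic sketch built on SciPy's ILU, not the Balanced Incomplete Factorization update proposed in the paper; the helper name and problem sizes are invented for illustration.

```python
# Generic sketch (not the paper's BIF update): reuse a preconditioner M ~ A^{-1}
# for B = A + P Q^T via the Sherman-Morrison-Woodbury identity
#   B^{-1} = A^{-1} - A^{-1} P (I + Q^T A^{-1} P)^{-1} Q^T A^{-1},
# with every exact solve with A replaced by an application of M.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def woodbury_update(M_solve, P, Q):
    """Return a LinearOperator approximating (A + P Q^T)^{-1}, given M_solve(r) ~ A^{-1} r."""
    n, k = P.shape
    MP = np.column_stack([M_solve(P[:, j]) for j in range(k)])  # ~ A^{-1} P
    S = np.eye(k) + Q.T @ MP                                    # small k-by-k capacitance matrix
    def apply(r):
        y = M_solve(r)
        return y - MP @ np.linalg.solve(S, Q.T @ y)
    return spla.LinearOperator((n, n), matvec=apply)

# Toy usage: an ILU of A serves as the base preconditioner, updated for B = A + P Q^T.
n, k = 1000, 5
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
rng = np.random.default_rng(0)
P, Q = rng.standard_normal((n, k)), rng.standard_normal((n, k))
B = spla.LinearOperator((n, n), matvec=lambda v: A @ v + P @ (Q.T @ v))
M = woodbury_update(spla.spilu(A).solve, P, Q)
x, info = spla.gmres(B, np.ones(n), M=M)
```

    Each application of the updated preconditioner costs one application of the base preconditioner plus O(nk + k^3) extra work for the low-rank correction.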

    Preconditioning issues in the numerical solution of nonlinear equations and nonlinear least squares

    Second-order methods for optimization call for the solution of sequences of linear systems. In this survey we discuss several issues related to the preconditioning of such sequences. Covered topics include both techniques for building updates of factorized preconditioners and quasi-Newton approaches. Sequences of unsymmetric linear systems arising in Newton-Krylov methods will be considered, as well as symmetric positive definite sequences arising in the solution of nonlinear least-squares problems by truncated Gauss-Newton methods.
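    For orientation (a generic statement, not quoted from the survey), the sequences in question typically arise as follows: a Newton-Krylov method for F(x) = 0 must solve a new Jacobian system at every outer iteration, and a truncated Gauss-Newton method for nonlinear least squares solves an analogous symmetric positive (semi)definite system, so a preconditioner built for one member of the sequence is a natural candidate for reuse or low-cost updating at the next.

```latex
% Newton-Krylov: one unsymmetric system per outer iteration
J(x_k)\, s_k = -F(x_k), \qquad x_{k+1} = x_k + s_k .
% Truncated Gauss-Newton for \min_x \tfrac12 \|r(x)\|_2^2:
% one symmetric positive (semi)definite system per outer iteration
J_r(x_k)^T J_r(x_k)\, s_k = -J_r(x_k)^T r(x_k) .
```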

    A new preconditioner update strategy for the solution of sequences of linear systems in structural mechanics: application to saddle point problems in elasticity

    Many applications in structural mechanics require the numerical solution of sequences of linear systems typically arising from a finite element discretization of the governing equations on fine meshes. The method of Lagrange multipliers is often used to take mechanical constraints into account. The resulting matrices then exhibit a saddle point structure, and the iterative solution of such preconditioned linear systems is considered challenging. A popular strategy is to combine preconditioning and deflation to yield an efficient method. We propose an alternative that is applicable to the general case and not only to matrices with a saddle point structure. In this approach, we update an existing algebraic or application-based preconditioner using specific available information, exploiting knowledge of an approximate invariant subspace or of matrix-vector products. The resulting preconditioner has the form of a limited-memory quasi-Newton matrix and requires a small number of linearly independent vectors. Numerical experiments performed on three large-scale applications in elasticity highlight the relevance of the new approach. We show that the proposed method outperforms the deflation method when considering sequences of linear systems with varying matrices.
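    One well-known family of limited-memory preconditioners of the kind alluded to above is sketched here for orientation (it is not necessarily the authors' exact construction). Given a first-level preconditioner M, the system matrix A, and a small set of k linearly independent vectors collected in S ∈ R^{n×k} (for instance approximate invariant-subspace or Ritz vectors), one forms

```latex
H = \bigl(I - S (S^T A S)^{-1} S^T A\bigr)\, M\, \bigl(I - A S (S^T A S)^{-1} S^T\bigr)
    + S (S^T A S)^{-1} S^T ,
```

    which satisfies H A S = S, i.e. it inverts A exactly on the chosen subspace, while each application only adds a few matrix-vector products and a small k-by-k solve to the cost of applying M.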

    A conjugate gradient algorithm for the astrometric core solution of Gaia

    The ESA space astrometry mission Gaia, planned to be launched in 2013, has been designed to make angular measurements on a global scale with micro-arcsecond accuracy. A key component of the data processing for Gaia is the astrometric core solution, which must implement an efficient and accurate numerical algorithm to solve the resulting, extremely large least-squares problem. The Astrometric Global Iterative Solution (AGIS) is a framework that allows a range of different iterative solution schemes suitable for a scanning astrometric satellite to be implemented. In order to find a computationally efficient and numerically accurate iteration scheme for the astrometric solution, compatible with the AGIS framework, we study an adaptation of the classical conjugate gradient (CG) algorithm and compare it to the so-called simple iteration (SI) scheme that was previously known to converge for this problem, although very slowly. The different schemes are implemented within a software test bed for AGIS known as AGISLab, which allows scaled astrometric core solutions to be defined, simulated and studied. After successful testing in AGISLab, the CG scheme has also been implemented in AGIS. The two algorithms CG and SI eventually converge to identical solutions, to within the numerical noise (of the order of 0.00001 micro-arcsec). These solutions are independent of the starting values (initial star catalogue), and we conclude that they are equivalent to a rigorous least-squares estimation of the astrometric parameters. The CG scheme converges up to a factor of four faster than SI in the tested cases, and in particular spatially correlated truncation errors are damped out much more efficiently with the CG scheme. Comment: 24 pages, 16 figures. Accepted for publication in Astronomy & Astrophysics
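    As a toy illustration of the comparison being made (unrelated to the actual AGIS code or Gaia data; sizes and names are invented), the sketch below solves a small least-squares problem through its normal equations with both a simple Richardson-type fixed-point iteration and the conjugate gradient method.

```python
# Toy comparison of simple iteration (SI) vs. conjugate gradient (CG)
# on the normal equations A^T A x = A^T b of a least-squares problem.
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
m, n = 2000, 200
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
N = A.T @ A                      # symmetric positive definite
rhs = A.T @ b

# Simple iteration: x_{k+1} = x_k + omega * (rhs - N x_k), with omega < 2 / lambda_max(N).
omega = 1.0 / np.linalg.norm(N, 2)
x_si = np.zeros(n)
for _ in range(500):
    x_si += omega * (rhs - N @ x_si)

# Conjugate gradient on the same SPD system.
x_cg, info = cg(N, rhs, maxiter=500)

x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x_si - x_ref), np.linalg.norm(x_cg - x_ref))
```

    On a small, well-conditioned problem like this both schemes converge; the point made in the abstract is that on the very large Gaia system CG damps slowly decaying, spatially correlated error components far more effectively than SI.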

    Composing Scalable Nonlinear Algebraic Solvers

    The most efficient linear solvers use composable algorithmic components, with the most common model being the combination of a Krylov accelerator and one or more preconditioners. A similar set of concepts may be used for nonlinear algebraic systems, where nonlinear composition of different nonlinear solvers may significantly improve the time to solution. We describe the basic concepts of nonlinear composition and preconditioning and present a number of solvers applicable to nonlinear partial differential equations. We have developed a software framework in order to easily explore the possible combinations of solvers. We show that the performance gains from using composed solvers can be substantial compared with standard Newton-Krylov methods. Comment: 29 pages, 14 figures, 13 tables
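    A minimal sketch of the idea of composing nonlinear solvers (a toy construction under invented names, not the framework described in the abstract): for F(x) = Ax + x^3 - b, each outer iteration below applies an inexpensive Picard fixed-point sweep followed by a Newton step, i.e. a multiplicative composition of the two solvers.

```python
# Toy multiplicative composition of two nonlinear solvers for F(x) = A x + x**3 - b.
import numpy as np

n = 20
A = 8.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # diagonally dominant, SPD
b = np.ones(n)

def F(x):
    return A @ x + x**3 - b

def picard_step(x):
    # Inner solver: one Picard (fixed-point) sweep, lagging the nonlinear term.
    return np.linalg.solve(A, b - x**3)

def newton_step(x):
    # Outer solver: one Newton step with the exact Jacobian A + diag(3 x^2).
    J = A + np.diag(3.0 * x**2)
    return x - np.linalg.solve(J, F(x))

x = np.zeros(n)
for k in range(20):
    x = newton_step(picard_step(x))   # multiplicative composition: Newton after Picard
    if np.linalg.norm(F(x)) < 1e-12:
        break
print(k, np.linalg.norm(F(x)))
```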

    New updates of incomplete LU factorizations and applications to large nonlinear systems

    In this paper, we address the problem of preconditioning sequences of large sparse nonsymmetric systems of linear equations and present two new strategies to construct approximate updates of factorized preconditioners. Both updates are based on the availability of an incomplete LU (ILU) factorization for one matrix of the sequence and differ in the approximation of the so-called ideal updates. The first strategy is an approximate diagonal update of the ILU factorization; the second relies on banded approximations of the factors in the ideal update. The efficiency and reliability of the proposed preconditioners are shown in the solution of nonlinear systems of equations by preconditioned inexact Newton-Krylov methods. Matrix-free implementations of the updating strategy are provided, and numerical experiments are carried out on application problems.
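    To make the notion of an "ideal update" concrete (a standard way of writing it, not necessarily the authors' exact formulation): if A ≈ LU is the available incomplete factorization and B = A + E is the next matrix in the sequence, then

```latex
LU + E \;=\; L\,(U + L^{-1}E) \;=\; (L + E\,U^{-1})\,U ,
```

    so the exact, ideal update would require the generally dense matrices L^{-1}E or EU^{-1}; practical strategies instead replace them with cheap structured approximations, for instance keeping only a diagonal or a banded part.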

    A matrix-free preconditioner for sparse symmetric positive definite systems and least-squares problems


    Constraint-Preconditioned Krylov Solvers for Regularized Saddle-Point Systems

    We consider the iterative solution of regularized saddle-point systems. When the leading block is symmetric and positive semi-definite on an appropriate subspace, Dollar, Gould, Schilders, and Wathen (2006) describe how to apply the conjugate gradient (CG) method coupled with a constraint preconditioner, a choice that has proved to be effective in optimization applications. We investigate the design of constraint-preconditioned variants of other Krylov methods for regularized systems by focusing on the underlying basis-generation process. We build upon principles laid out by Gould, Orban, and Rees (2014) to provide general guidelines that allow us to specialize any Krylov method to regularized saddle-point systems. In particular, we obtain constraint-preconditioned variants of Lanczos- and Arnoldi-based methods, including the Lanczos version of CG, MINRES, SYMMLQ, GMRES(m) and DQGMRES. We also provide MATLAB implementations in the hope that they are useful as a basis for the development of more sophisticated software. Finally, we illustrate the numerical behavior of constraint-preconditioned Krylov solvers using symmetric and nonsymmetric systems arising from constrained optimization. Comment: Accepted for publication in the SIAM Journal on Scientific Computing
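    For orientation (standard notation in this line of work, not quoted from the paper), the regularized saddle-point systems and constraint preconditioners in question have the block form

```latex
\begin{bmatrix} A & B^T \\ B & -C \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix}
=
\begin{bmatrix} b \\ d \end{bmatrix},
\qquad
P \;=\;
\begin{bmatrix} G & B^T \\ B & -C \end{bmatrix},
```

    where C is a symmetric positive (semi)definite regularization block and G is an inexpensive approximation of the leading block A; the preconditioner reproduces the constraint blocks B and -C exactly, which is the structure the specialized Krylov variants exploit.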
    • …