13 research outputs found

    Robust Dropping Criteria for F-norm Minimization Based Sparse Approximate Inverse Preconditioning

    Full text link
    Dropping tolerance criteria play a central role in Sparse Approximate Inverse preconditioning. Such criteria have, however, received little attention and have been treated heuristically in the following manner: if the size of an entry is below some empirically small positive quantity, then it is set to zero. The meaning of "small" is vague and has not been considered rigorously. It has not been clear how dropping tolerances affect the quality and effectiveness of a preconditioner M. In this paper, we focus on the adaptive Power Sparse Approximate Inverse algorithm and establish a mathematical theory of robust selection criteria for dropping tolerances. Using the theory, we derive an adaptive dropping criterion that is used to drop entries of small magnitude dynamically during the setup process of M. The proposed criterion enables us to make M as sparse as possible while keeping it of comparable quality to the potentially denser matrix obtained without dropping. As a byproduct, the theory applies to static F-norm minimization based preconditioning procedures, and a similar dropping criterion is given that can be used to sparsify a matrix after it has been computed by a static sparse approximate inverse procedure. In contrast to the adaptive procedure, dropping in the static procedure does not reduce the setup time of the matrix but makes the application of the sparser M within Krylov iterations cheaper. Reported numerical experiments confirm the theory and illustrate the robustness and effectiveness of the dropping criteria. Comment: 27 pages, 2 figures.
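    The abstract does not reproduce the criterion itself, but the basic mechanism it refines is thresholded dropping of small entries of M. A minimal sketch of such dropping in Python/SciPy, using a simple column-relative tolerance rather than the paper's adaptive criterion (function name and tolerance are illustrative), might look as follows:

    ```python
    import numpy as np
    from scipy.sparse import csc_matrix

    def drop_small_entries(M, tol=1e-2):
        """Zero out entries of a sparse approximate inverse M whose magnitude is
        below tol times the largest magnitude in their column (a simple
        column-relative rule chosen for illustration, not the paper's criterion)."""
        M = csc_matrix(M, copy=True)
        for j in range(M.shape[1]):
            start, end = M.indptr[j], M.indptr[j + 1]
            col = M.data[start:end]          # view into the stored entries of column j
            if col.size:
                col[np.abs(col) < tol * np.abs(col).max()] = 0.0
        M.eliminate_zeros()                  # physically remove the dropped entries
        return M
    ```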

    Factored approximate inverse preconditioners with dynamic sparsity patterns

    Get PDF
    We propose two sparsity pattern selection algorithms for factored approximate inverse preconditioners for solving general sparse linear systems. The sparsity pattern is adaptively updated during the construction phase using combined information from the inverse and the original triangular factors of the matrix. To determine the sparsity pattern, our first algorithm uses the norm of the inverse factors multiplied by the largest absolute value of the original factors, and the second employs the norm of the inverse factors divided by the norm of the original factors. Experimental results show that these algorithms improve the robustness of the preconditioners on general sparse matrices.
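    As a rough illustration of how a selection test of this kind might be applied to a single candidate entry, the sketch below uses the ratio of factor norms from the second criterion; the names, threshold, and exact rule are placeholders, not the algorithms of the paper:

    ```python
    import numpy as np

    def keep_entry(candidate_value, inv_factor_col, orig_factor_col, tau=0.1):
        """Keep a candidate fill-in entry if it is significant relative to
        norm(inverse-factor column) / norm(original-factor column).
        All names and the threshold tau are hypothetical placeholders."""
        denom = np.linalg.norm(orig_factor_col)
        scale = np.linalg.norm(inv_factor_col) / denom if denom else np.inf
        return abs(candidate_value) >= tau * scale
    ```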

    Low-rank updates of balanced incomplete factorization preconditioners

    Full text link
    Let Ax = b be a large and sparse system of linear equations, where A is a nonsingular matrix. An approximate solution is frequently obtained by applying preconditioned iterations. Consider the matrix B = A + PQ^T, where P, Q ∈ R^(n×k) are full-rank matrices. In this work, we study the problem of updating a previously computed preconditioner for A in order to solve the updated linear system Bx = b by preconditioned iterations. In particular, we propose a method for updating a Balanced Incomplete Factorization preconditioner. The strategy is based on the computation of an approximate Inverse Sherman-Morrison decomposition for an equivalent augmented linear system. Approximation properties of the preconditioned matrix and the computational cost of the algorithm are analysed. Moreover, numerical experiments with different types of problems show that the proposed method helps to accelerate convergence. This work was supported by the Spanish Ministerio de Economía y Competitividad under grant MTM2014-58159-P.
    Cerdán Soriano, J.M.; Marín Mateos-Aparicio, J.; Mas Marí, J. (2017). Low-rank updates of balanced incomplete factorization preconditioners. Numerical Algorithms 74(2), 337-370. https://doi.org/10.1007/s11075-016-0151-6
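    For context, the identity underlying such low-rank updates is the Sherman-Morrison-Woodbury formula. The sketch below applies it with an existing approximation M ≈ A^{-1} in place of A^{-1}; this is the generic identity only, not the Balanced Incomplete Factorization update proposed in the paper, and the names are illustrative:

    ```python
    import numpy as np

    def make_updated_preconditioner(apply_M, P, Q):
        """Given apply_M(x) ~ A^{-1} x for the original matrix A, return a
        function x -> (A + P Q^T)^{-1} x built via the Woodbury identity:
        (A + P Q^T)^{-1} = A^{-1} - A^{-1} P (I + Q^T A^{-1} P)^{-1} Q^T A^{-1}."""
        MP = np.column_stack([apply_M(P[:, i]) for i in range(P.shape[1])])  # M P
        S = np.eye(P.shape[1]) + Q.T @ MP                                    # k x k capacitance matrix

        def apply_updated(x):
            Mx = apply_M(x)
            return Mx - MP @ np.linalg.solve(S, Q.T @ Mx)

        return apply_updated
    ```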

    Symmetric Permutations for I-matrices to Delay and Avoid Small Pivots During Factorization

    Full text link

    Multilinear algebra for analyzing data with multiple linkages.

    Full text link

    VBARMS: A variable block algebraic recursive multilevel solver for sparse linear systems

    Get PDF
    Sparse matrices arising from the solution of systems of partial differential equations often exhibit a perfect block structure, meaning that the nonzero blocks in the sparsity pattern are fully dense (and typically small), e.g., when several unknown quantities are associated with the same grid point. However, similar block orderings can sometimes also be found in general unstructured matrices by ordering rows and columns with a similar sparsity pattern consecutively. Some zero entries of the reordered matrix can also be treated as nonzeros to enlarge the blocks and improve performance. In general, the reordering results in linear systems with blocks of variable size. Our recently developed parallel package pVBARMS (parallel variable block algebraic recursive multilevel solver) for distributed-memory computers takes advantage of these frequently occurring structures in the design of its multilevel incomplete LU factorization preconditioner, achieving increased throughput during the computation and improved reliability on realistic applications. The method automatically detects any existing block structure in the matrix, without prior knowledge of the underlying problem from the user, and exploits it to maximize computational efficiency. We present a performance comparison of pVBARMS and other popular solvers on a set of general linear systems arising from different application fields. We also report on the numerical and parallel scalability of the pVBARMS package for solving the turbulent, Reynolds-averaged Navier-Stokes (RANS) equations.
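    A simple way to detect such block structure, in the spirit of (but not identical to) the automatic detection in pVBARMS, is to group rows whose sparsity patterns are nearly the same, e.g. by cosine similarity of their 0/1 pattern vectors. A small illustrative sketch, assuming a similarity threshold tau and small problem sizes (the overlap matrix is formed densely):

    ```python
    import numpy as np
    from scipy.sparse import csr_matrix

    def group_similar_rows(A, tau=0.8):
        """Group rows whose 0/1 sparsity patterns have cosine similarity >= tau.
        Dense n-by-n overlap matrix: intended for small illustrative problems only."""
        A = csr_matrix(A)
        pattern = csr_matrix((np.ones_like(A.data), A.indices, A.indptr), shape=A.shape)
        nnz_row = np.diff(pattern.indptr)             # nonzeros per row
        overlap = (pattern @ pattern.T).toarray()     # shared column indices per row pair
        n = A.shape[0]
        assigned = np.full(n, -1)
        blocks = []
        for i in range(n):
            if assigned[i] >= 0:
                continue
            members = [i]
            assigned[i] = len(blocks)
            for j in range(i + 1, n):
                denom = np.sqrt(nnz_row[i] * nnz_row[j])
                if assigned[j] < 0 and denom and overlap[i, j] / denom >= tau:
                    members.append(j)
                    assigned[j] = len(blocks)
            blocks.append(members)
        return blocks
    ```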

    A class of linear solvers based on multilevel and supernodal factorization

    Get PDF
    The solution of large and sparse linear systems is a critical component of modern science and engineering simulations. Iterative methods, namely the class of modern Krylov subspace methods, are often used to solve large-scale linear systems. To improve the robustness and the convergence rate of iterative methods, preconditioning techniques are regarded as crucial components of the linear system solution. In this thesis, a class of algebraic multilevel solvers is presented for preconditioning general linear systems of equations arising from computational science and engineering applications. They can produce sparse patterns and save memory costs by applying recursive combinatorial algorithms. Robustness is improved by combining the factorization with recently developed overlapping and compression strategies and by using efficient local solvers. We have demonstrated the good performance of the proposed strategies with numerical experiments on realistic matrix problems, also in comparison with some of the most popular algebraic preconditioners in use today.

    Techniques to accelerate boundary element contributions in elasticity

    Get PDF
    The problem of rapid re-analysis of small problems in elasticity is investigated. The aim is to enable updated stress contours to be displayed in real time as a design geometry is dynamically modified. The focus of this work is small to medium-sized problems; as a result it cannot be assumed that the solution phase dominates, and so the evaluation of boundary integrals is considered as well as the equation solution. Two strategies are employed to accelerate the boundary element integrals: the use of Look-Up Tables (LUTs) containing precomputed integrals, and the use of approximate analytical expressions derived from surface fits. These may be used in the matrix assembly and internal point calculations. LUTs are derived for both flat and circular arc elements and for both the displacement and stress boundary integral equations. Details are provided on suitable LUT refinements, and the approach is benchmarked against conventional Gauss-Legendre quadrature. The surface fit approach is presented as an alternative to LUTs that does not incur their considerable memory cost; this approach has been limited to flat elements. The equation solution is cast in a re-solution framework, in which we use a GMRES iterative solver. Convergence is greatly accelerated by using an approximate but complete LU preconditioner updated periodically using multi-threading. The choice of update period is investigated with reference to the spread of eigenvalues of the preconditioned system. The resulting system achieves the aim of providing real-time update of contours for small to medium-sized problems on a PC. This development is expected to allow a qualitative change in the way engineers use computer-aided engineering tools, in which design ideas may be assessed immediately as a change is made.
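    The re-solution idea can be illustrated with a short sketch: a sequence of slightly modified systems is solved with GMRES, reusing a complete LU factorization of an earlier matrix as the preconditioner and refreshing it only periodically. The multi-threaded background update described in the abstract is omitted, and all names below are illustrative:

    ```python
    import numpy as np
    from scipy.sparse import csc_matrix
    from scipy.sparse.linalg import splu, gmres, LinearOperator

    def resolve_sequence(matrices, rhs_list, period=5):
        """Solve a sequence of related systems, refreshing the LU preconditioner
        only every `period` systems (names and the period are illustrative)."""
        solutions, lu = [], None
        for k, (A, b) in enumerate(zip(matrices, rhs_list)):
            A = csc_matrix(A)
            if lu is None or k % period == 0:
                lu = splu(A)                              # complete LU of the current matrix
            M = LinearOperator(A.shape, matvec=lu.solve)  # preconditioner: apply the LU solve
            x, info = gmres(A, b, M=M)
            solutions.append(x)
        return solutions
    ```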