4 research outputs found

    Balanced incomplete factorization preconditioner with pivoting

    [EN] In this work we study pivoting strategies for the preconditioner presented in Bru et al. (SIAM J Sci Comput 30(5):2302-2318, 2008), which computes the LU factorization of a matrix A. This preconditioner is based on the Inverse Sherman-Morrison (ISM) decomposition (Preconditioning sparse nonsymmetric linear systems with the Sherman-Morrison formula, Bru et al., SIAM J Sci Comput 25(2):701-715, 2003), which, using recursion formulas derived from the Sherman-Morrison formula, obtains the direct and inverse LU factors of a matrix. We present a modification of the ISM decomposition that allows for pivoting, and hence the computation of preconditioners for any nonsingular matrix. While the ISM algorithm computes only a new pair of vectors at each step, the new pivoting algorithm at the k-th step also modifies all the remaining vectors from k+1 to n; it can thus be seen as a right-looking version of the ISM decomposition. Numerical experiments with ill-conditioned and highly indefinite matrices arising from different applications show the robustness of the new algorithm, which solves problems that cannot be solved otherwise.

    Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. The work was supported by Conselleria de Innovacion, Universidades, Ciencia y Sociedad Digital, Generalitat Valenciana (CIAICO/2021/162).

    Marín Mateos-Aparicio, J.; Mas Marí, J. (2023). Balanced incomplete factorization preconditioner with pivoting. Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A. Matemáticas, 117(1). https://doi.org/10.1007/s13398-022-01334-1
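    As background for the recursion mentioned in the abstract, the identity below is the classical Sherman-Morrison formula for a rank-one update, written as a minimal LaTeX sketch; the splitting A = A_0 + sum_k x_k y_k^T and the particular update vectors used by the ISM decomposition are those of Bru et al. and are only assumed here, not reproduced.

        % Sherman-Morrison identity for a nonsingular A_{k-1} and a rank-one update
        \[
          \bigl(A_{k-1} + x_k y_k^{T}\bigr)^{-1}
            = A_{k-1}^{-1}
            - \frac{A_{k-1}^{-1} x_k \, y_k^{T} A_{k-1}^{-1}}
                   {1 + y_k^{T} A_{k-1}^{-1} x_k},
          \qquad 1 + y_k^{T} A_{k-1}^{-1} x_k \neq 0 .
        \]
        % Applied recursively to A = A_0 + \sum_{k=1}^{n} x_k y_k^{T}, these
        % formulas build A^{-1} (and, in the ISM framework, the direct and
        % inverse LU factors) one rank-one update at a time.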

    Multilinear algebra for analyzing data with multiple linkages.


    Preconditioning for Sparse Linear Systems at the Dawn of the 21st Century: History, Current Developments, and Future Perspectives

    Iterative methods are currently the solvers of choice for large sparse linear systems of equations. However, it is well known that the key factor for accelerating, or even enabling, convergence is the preconditioner. Research on preconditioning techniques has characterized the last two decades. Nowadays, there are a number of options to consider when choosing the most appropriate preconditioner for the specific problem at hand. The present work provides an overview of the most popular algorithms available today, emphasizing their respective merits and limitations. The overview is restricted to algebraic preconditioners, that is, general-purpose algorithms that require knowledge of the system matrix only, independently of the specific problem it arises from. Along with the traditional distinction between incomplete factorizations and approximate inverses, the most recent developments are considered, including the scalable multigrid and parallel approaches that represent the current frontier of research. A separate section devoted to saddle-point problems, which arise in many different applications, closes the paper.
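    The survey's notion of an algebraic preconditioner can be made concrete with a small, self-contained sketch: an incomplete LU factorization built from the system matrix alone and supplied to a Krylov solver. The example below uses SciPy's spilu, LinearOperator, and gmres on a 1-D Poisson-like placeholder matrix; the matrix, drop tolerance, and fill factor are illustrative assumptions, not taken from the paper.

        # Algebraic preconditioning sketch: an incomplete LU factorization of A,
        # built with no knowledge of the underlying problem, accelerates GMRES.
        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 1000
        # Placeholder test matrix (1-D Poisson stencil); any sparse matrix works.
        A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
        b = np.ones(n)

        # Incomplete LU factors of A; the drop tolerance and fill factor control
        # the trade-off between preconditioner quality and memory use.
        ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
        M = spla.LinearOperator(A.shape, matvec=ilu.solve)  # apply the preconditioner via ILU solves

        # Preconditioned GMRES: info == 0 signals convergence.
        x, info = spla.gmres(A, b, M=M, maxiter=200)
        print("converged" if info == 0 else f"gmres returned info = {info}")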

    Use of multithreaded parallelism in the preconditioning and iterative solution of sparse linear systems

    The efficient solution of large sparse systems of linear equations is one of the problems of modern linear algebra that arises most frequently in scientific and engineering applications. The relentless demand for greater accuracy and realism in simulations requires increasingly elaborate three-dimensional computational models, which translates into larger and more complex systems and longer simulation times. Solving these systems in a reasonable time requires algorithms with a high degree of efficiency and algorithmic scalability, that is, solvers whose computational and memory demands grow only moderately with the system size; parallel algorithms and software capable of extracting the concurrency inherent in these methods; and parallel computer architectures with sufficient computational resources. Along these lines, the work carried out in this thesis addresses the analysis, development, and implementation of parallel algorithms capable of identifying, extracting, and efficiently exploiting the task parallelism available in the algebraic multilevel solvers of the ILUPACK numerical library. The thesis demonstrates experimentally, for the large sparse linear systems arising from several two- and three-dimensional PDEs, that the degree of task parallelism present in ILUPACK's numerical methods is sufficient for the efficient execution of parallel implementations of these methods on shared-memory multiprocessors with a moderate number of processors.
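    The task parallelism described above can be illustrated, in spirit only, with a toy sketch: independent tasks of a small dependency tree (as produced, for example, by nested dissection in multilevel methods) are processed concurrently on a shared-memory machine once their children are done. The tree, the per-task work, and the use of Python's concurrent.futures are illustrative assumptions; this is not ILUPACK code and does not reflect its actual data structures.

        # Toy sketch of task-level parallelism over a dependency tree on shared
        # memory (not ILUPACK code).  Leaves run first and in parallel; a parent
        # becomes ready only when all of its children have finished.
        from concurrent.futures import ThreadPoolExecutor

        # parent -> children for a tiny 3-level binary task tree.
        tree = {0: [1, 2], 1: [3, 4], 2: [5, 6], 3: [], 4: [], 5: [], 6: []}

        def process(task):
            # Placeholder for the real work, e.g. factoring a diagonal block and
            # accumulating its contribution to the parent's interface block.
            return f"task {task} done"

        def run_tree(tree, workers=4):
            remaining, results = dict(tree), {}
            with ThreadPoolExecutor(max_workers=workers) as pool:
                while remaining:
                    # Ready tasks: all children already processed.
                    ready = [t for t, ch in remaining.items()
                             if all(c in results for c in ch)]
                    for t, r in zip(ready, pool.map(process, ready)):
                        results[t] = r
                    for t in ready:
                        del remaining[t]
            return results

        print(run_tree(tree))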