63 research outputs found

    A Survey on Intelligent Iterative Methods for Solving Sparse Linear Algebraic Equations

    Efficiently solving sparse linear algebraic equations is an important research topic in numerical simulation. Commonly used approaches include direct methods and iterative methods. Compared with direct methods, iterative methods have lower computational complexity and memory consumption, and are thus often used to solve large-scale sparse linear systems. However, there are numerous iterative methods, parameters, and components that need to be carefully chosen, and an inappropriate combination may lead to an inefficient solution process in practice. With the development of deep learning, intelligent iterative methods have become popular in recent years; they can intelligently select a sufficiently good combination and optimize the parameters and components according to the properties of the input matrix. This survey reviews these intelligent iterative methods. For clarity, we divide the discussion into three aspects: a method aspect, a component aspect, and a parameter aspect. Moreover, we summarize the existing work and propose potential research directions that may deserve deeper investigation.
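    The combinatorial choice the survey refers to can be seen even with off-the-shelf tools. The following minimal Python/SciPy sketch (not from the survey; the test matrix and the preconditioner set are illustrative assumptions) runs one Krylov method under several preconditioners and reports the iteration counts that an intelligent method would try to optimize.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Illustrative SPD test problem: 2-D Laplacian on a 32x32 grid.
    n = 32
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsr()
    b = np.ones(A.shape[0])

    # A small slice of the choice space: one method (CG), three preconditioners.
    ilu = spla.spilu(A.tocsc(), drop_tol=1e-4, fill_factor=10)
    d = A.diagonal()
    preconditioners = {
        "none":   None,
        "jacobi": spla.LinearOperator(A.shape, lambda x: x / d),
        "ilu":    spla.LinearOperator(A.shape, ilu.solve),
    }

    for name, M in preconditioners.items():
        iters = [0]
        x, info = spla.cg(A, b, M=M,
                          callback=lambda xk: iters.__setitem__(0, iters[0] + 1))
        print(f"CG + {name:<6}: converged={info == 0}, iterations={iters[0]}")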

    Iterative solution of linear systems with improved arithmetic and result verification [online]


    VBARMS: A variable block algebraic recursive multilevel solver for sparse linear systems

    Sparse matrices arising from the solution of systems of partial differential equations often exhibit a perfect block structure, meaning that the nonzero blocks in the sparsity pattern are fully dense (and typically small), e.g., when several unknown quantities are associated with the same grid point. Similar block orderings can sometimes also be found in general unstructured matrices by ordering rows and columns with similar sparsity patterns consecutively. Some zero entries of the reordered matrix can also be treated as nonzeros to enlarge the blocks and improve performance. In general, the reordering results in linear systems with blocks of variable size. Our recently developed parallel package pVBARMS (parallel variable block algebraic recursive multilevel solver) for distributed-memory computers takes advantage of these frequently occurring structures in the design of its multilevel incomplete LU factorization preconditioner, achieving increased throughput and improved reliability on realistic applications. The method automatically detects any existing block structure in the matrix, without requiring prior knowledge of the underlying problem from the user, and exploits it to maximize computational efficiency. We present a performance comparison of pVBARMS and other popular solvers on a set of general linear systems arising from different application fields. We also report on the numerical and parallel scalability of the pVBARMS package for solving the turbulent, Reynolds-averaged Navier-Stokes (RANS) equations.
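    As a concrete illustration of the block detection described above, the sketch below groups rows of a CSR matrix whose sparsity patterns are identical. This is a simplified stand-in for pVBARMS's detection, which also merges rows with merely similar patterns; the function name and example matrix are assumptions for illustration.

    import numpy as np
    import scipy.sparse as sp

    def find_blocks(A_csr):
        """Return groups of row indices sharing an identical sparsity pattern."""
        groups = {}
        for i in range(A_csr.shape[0]):
            start, end = A_csr.indptr[i], A_csr.indptr[i + 1]
            key = tuple(A_csr.indices[start:end])  # the row's column pattern
            groups.setdefault(key, []).append(i)
        return list(groups.values())

    # Example: two unknowns per grid point give fully dense 2x2 blocks.
    A = sp.csr_matrix(np.kron(np.eye(3), np.ones((2, 2))))
    print(find_blocks(A))   # -> [[0, 1], [2, 3], [4, 5]]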

    On Multiscale Algorithms for Selected Applications in Molecular Mechanics


    A Recommendation System for Preconditioned Iterative Solvers

    Solving linear systems of equations is an integral part of most scientific simulations. In recent years, there has been considerable interest in large-scale scientific simulation of complex physical processes. Iterative solvers are usually preferred for solving linear systems of such magnitude due to their lower computational requirements. Currently, computational scientists have access to a multitude of iterative solver options available as "plug-and-play" components in various problem-solving environments. Choosing the right solver configuration from the available choices is critical for ensuring convergence and achieving good performance, especially for large, complex matrices. However, identifying the "best" preconditioned iterative solver and parameters is challenging even for an expert, due to issues such as the lack of a unified theoretical model, the complexity of the solver configuration space, and multiple selection criteria. It is therefore desirable to have principled, practitioner-centric strategies for identifying solver configurations for solving large linear systems. This dissertation presents a general practitioner-centric framework for (a) problem-independent retrospective analysis and (b) problem-specific predictive modeling of performance data. Our retrospective performance analysis methodology introduces new metrics, such as the area under the performance-profile curve and a conditional variance-based fine-tuning score, that facilitate robust comparative performance evaluation as well as parameter sensitivity analysis. We present results using this analysis approach on a number of popular preconditioned iterative solvers available in packages such as PETSc, Trilinos, Hypre, ILUPACK, and WSMP. The predictive modeling of performance data is an integral part of our multi-stage approach to solver recommendation. The key novelty of our approach lies in its modular, learning-based formulation, which comprises three subproblems: (a) solvability modeling, (b) performance modeling, and (c) performance optimization. This modularity provides the flexibility to effectively target challenges such as software failure and multi-objective optimization. Our choice of a "solver trial" instance space, represented in terms of the characteristics of the corresponding linear system, the solver configuration, and their interactions, leads to a scalable and elegant formulation. Empirical evaluation of our approach on performance datasets associated with fairly large groups of solver configurations demonstrates that one can obtain high-quality recommendations that are close to the ideal choices.
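    A hedged sketch of this modular formulation follows: a classifier for solvability modeling, a regressor for performance modeling, and recommendation as a simple optimization over candidates predicted to converge. The feature layout and synthetic data are illustrative assumptions, not the dissertation's actual datasets or models.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

    rng = np.random.default_rng(0)
    # Each "solver trial": [matrix size, nnz density, diag. dominance,
    #                       preconditioner id, fill level]  (assumed features)
    X = rng.random((500, 5))
    converged = X[:, 2] + 0.3 * X[:, 3] > 0.6          # toy solvability label
    solve_time = X[:, 0] * (2.0 - X[:, 4]) + 0.1 * rng.random(500)

    solvability = RandomForestClassifier().fit(X, converged)
    performance = RandomForestRegressor().fit(X[converged], solve_time[converged])

    # Recommendation: among configs predicted to converge, pick the fastest.
    candidates = rng.random((20, 5))
    ok = solvability.predict(candidates).astype(bool)
    if ok.any():
        best = candidates[ok][np.argmin(performance.predict(candidates[ok]))]
        print("recommended configuration features:", best)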

    Deflation-based preconditioners for stochastic models of flow in porous media

    Numerical analysis is a powerful mathematical tool that focuses on finding approximate solutions to mathematical problems where analytical methods fail to produce exact solutions. Many numerical methods have been developed and enhanced over the years for this purpose, across many classes, with some methods proven to be well suited to certain equations. The key in numerical analysis is, then, choosing the right method or combination of methods for the problem at hand, with the least cost and the highest accuracy possible (while maintaining efficiency). In this thesis, we consider the approximate solution of a class of two-dimensional differential equations with random coefficients. Using a combination of Krylov methods, preconditioners, and multigrid ideas, we aim to implement an algorithm that offers low cost and fast convergence for approximating solutions to these problems. In particular, we propose a "training" phase in the development of a preconditioner, in which the first few linear systems in a sequence of similar problems are used to drive adaptation of the preconditioning strategy for subsequent problems. Results show that our algorithms effectively decrease the cost of solving the model problem relative to a standard AMG-preconditioned CG method.
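    The "training" idea can be sketched as follows (a minimal stand-in, assuming solution recycling as the deflation space; the thesis's actual strategy is AMG-based and more elaborate): solutions of the first few systems are orthonormalized into a basis W, and each subsequent solve starts CG from the Galerkin correction x0 = W (W^T A W)^{-1} W^T b.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def deflated_cg(A, b, W):
        """CG started from a Galerkin correction over the subspace span(W)."""
        AW = A @ W
        coarse = W.T @ AW                       # small (k x k) Galerkin matrix
        x0 = W @ np.linalg.solve(coarse, W.T @ b)
        return spla.cg(A, b, x0=x0)

    # Sequence of similar SPD systems: a Laplacian plus small random shifts.
    n = 200
    L = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()
    rng = np.random.default_rng(1)

    # "Training" phase: solve the first few systems, keep their solutions.
    # (Direct solves here for brevity; in practice these would be iterative.)
    sols = []
    for _ in range(3):
        A = (L + 0.01 * sp.diags(rng.random(n))).tocsc()
        sols.append(spla.spsolve(A, rng.random(n)))
    W = np.linalg.qr(np.array(sols).T)[0]       # orthonormal deflation basis

    # Subsequent solves in the sequence reuse W.
    A = L + 0.01 * sp.diags(rng.random(n))
    x, info = deflated_cg(A, rng.random(n), W)
    print("converged:", info == 0)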