    A Survey on Intelligent Iterative Methods for Solving Sparse Linear Algebraic Equations

    Efficiently solving sparse linear algebraic equations is an important topic in numerical simulation. Commonly used approaches fall into direct methods and iterative methods. Compared with direct methods, iterative methods have lower computational complexity and memory consumption, and are therefore often used to solve large-scale sparse linear equations. However, there are numerous iterative methods, and their parameters and components must be chosen carefully; an inappropriate combination may lead to an inefficient solution process in practice. With the development of deep learning, intelligent iterative methods have become popular in recent years: they can select a sufficiently good combination and optimize the parameters and components according to the properties of the input matrix. This survey reviews these intelligent iterative methods. For clarity, we divide the discussion into three aspects: methods, components, and parameters. Moreover, we summarize the existing work and propose potential research directions that deserve deeper investigation.
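    To make the choice of method, component, and parameter concrete, here is a minimal sketch (ours, not from the survey) that solves the same sparse system with two combinations in SciPy: plain Conjugate Gradient versus GMRES with an incomplete-LU preconditioner. The 2-D Poisson matrix, the restart length, and the ILU drop tolerance are illustrative assumptions, not recommendations from the paper.

        # Minimal sketch: two method/component/parameter combinations for one
        # sparse system. The matrix is a hypothetical 5-point Poisson operator.
        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 64
        T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
        A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsr()
        b = np.ones(A.shape[0])

        # Combination 1: Conjugate Gradient with no preconditioner.
        x1, info1 = spla.cg(A, b)

        # Combination 2: GMRES with an ILU preconditioner; the drop tolerance
        # and restart length are themselves parameters that must be chosen.
        ilu = spla.spilu(A.tocsc(), drop_tol=1e-4)
        M = spla.LinearOperator(A.shape, ilu.solve)
        x2, info2 = spla.gmres(A, b, M=M, restart=30)

        print(info1, info2)  # 0 means the solver reported convergence

    Which combination wins depends on the matrix: CG requires symmetry and positive definiteness, while the preconditioned GMRES run trades setup cost (the incomplete factorization) for fewer iterations. It is exactly this matrix-dependent trade-off that the intelligent methods surveyed here try to automate.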

    Learning Relaxation for Multigrid

    During the last decade, Neural Networks (NNs) have proved to be extremely effective tools in many fields of engineering, including autonomous vehicles, medical diagnosis, and search engines, and even in art creation. Indeed, NNs often decisively outperform traditional algorithms. One area that has only recently begun attracting significant interest is the use of NNs for designing numerical solvers, particularly for discretized partial differential equations. Several recent papers have considered employing NNs to develop multigrid methods, which are a leading computational tool for solving discretized partial differential equations and other sparse-matrix problems. We extend these ideas, focusing on so-called relaxation operators (also called smoothers), an important component of the multigrid algorithm that has not yet received much attention in this context. We explore an approach for using NNs to learn relaxation parameters for an ensemble of diffusion operators with random coefficients, for Jacobi-type smoothers and for 4-color Gauss-Seidel smoothers. The latter yield exceptionally efficient and easy-to-parallelize Successive Over-Relaxation (SOR) smoothers. Moreover, this work demonstrates that learning relaxation parameters on relatively small grids, using a two-grid method and Gelfand's formula as a loss function, can be implemented easily. These methods efficiently produce nearly optimal parameters, thereby significantly improving the convergence rate of multigrid algorithms on large grids. Comment: This research was carried out under the supervision of Prof. Irad Yavneh and Prof. Ron Kimmel.
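    As a rough illustration of the quantity this loss targets, the following sketch (ours, under the assumption of a standard 1-D Poisson model problem with linear interpolation and full weighting) builds the two-grid error-propagation operator for a weighted Jacobi smoother and estimates its convergence factor with Gelfand's formula, rho(E) = lim_k ||E^k||^(1/k). The paper learns such relaxation weights with a neural network; here they are simply scanned by hand.

        # Two-grid convergence factor of weighted Jacobi on 1-D Poisson,
        # estimated via Gelfand's formula. Dense matrices, for simplicity.
        import numpy as np

        def poisson1d(n):
            # Tridiagonal 1-D Poisson matrix with n interior points.
            return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

        def interpolation(nc):
            # Linear interpolation from nc coarse points to 2*nc + 1 fine points.
            P = np.zeros((2 * nc + 1, nc))
            for j in range(nc):
                P[2 * j, j], P[2 * j + 1, j], P[2 * j + 2, j] = 0.5, 1.0, 0.5
            return P

        def two_grid_operator(n, w, nu=1):
            # E = S^nu (I - P Ac^{-1} R A) S^nu with weighted-Jacobi smoother S.
            A = poisson1d(n)
            P = interpolation((n - 1) // 2)
            R = 0.5 * P.T                      # full-weighting restriction
            Ac = R @ A @ P                     # Galerkin coarse-grid operator
            S = np.eye(n) - w * np.diag(1.0 / np.diag(A)) @ A
            CGC = np.eye(n) - P @ np.linalg.solve(Ac, R @ A)
            Snu = np.linalg.matrix_power(S, nu)
            return Snu @ CGC @ Snu

        def gelfand_factor(E, k=40):
            # Gelfand's formula: rho(E) = lim_{k->inf} ||E^k||^(1/k).
            return np.linalg.norm(np.linalg.matrix_power(E, k), 2) ** (1.0 / k)

        for w in (0.5, 2.0 / 3.0, 0.9):
            print(f"w = {w:.3f}: factor ~ {gelfand_factor(two_grid_operator(63, w)):.4f}")

    Because the two-grid operator on a small grid is cheap to form and Gelfand's estimate is differentiable in the relaxation weight, it makes a natural training loss; the learned parameters then transfer to much larger grids, which is the transfer the abstract reports.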