6 research outputs found

    Distributed-memory Hierarchical Interpolative Factorization

    The hierarchical interpolative factorization (HIF) offers an efficient way to solve or precondition elliptic partial differential equations. By exploiting locality and low-rank properties of the operators, the HIF achieves quasi-linear complexity for factorizing the discrete positive definite elliptic operator and linear complexity for solving the associated linear system. In this paper, the distributed-memory HIF (DHIF) is introduced as a parallel and distributed-memory implementation of the HIF. The DHIF organizes the processes in a hierarchical structure and keeps the communication as local as possible. The computational complexity is $O\left(\frac{N\log N}{P}\right)$ for constructing and $O\left(\frac{N}{P}\right)$ for applying the DHIF, where $N$ is the size of the problem and $P$ is the number of processes. The communication complexity is $O\left(\sqrt{P}\log^3 P\right)\alpha + O\left(\frac{N^{2/3}}{\sqrt{P}}\right)\beta$, where $\alpha$ is the latency and $\beta$ is the inverse bandwidth. Extensive numerical examples are performed on the NERSC Edison system with up to 8192 processes. The numerical results agree with the complexity analysis and demonstrate the efficiency and scalability of the DHIF.
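
    A minimal sketch (not from the paper) that simply evaluates the stated cost model in Python; the function name and the sample values of $N$, $P$, $\alpha$, $\beta$ are illustrative assumptions:

        import math

        def dhif_cost_model(N, P, alpha, beta):
            """Evaluate the DHIF complexity estimates from the abstract.
            N: problem size, P: number of processes,
            alpha: latency, beta: inverse bandwidth."""
            factor_flops = N * math.log(N) / P   # O(N log N / P) to construct
            solve_flops = N / P                  # O(N / P) to apply
            comm_time = (alpha * math.sqrt(P) * math.log(P) ** 3
                         + beta * N ** (2.0 / 3.0) / math.sqrt(P))
            return factor_flops, solve_flops, comm_time

        # Example: predicted costs for N = 256**3 unknowns on P = 8192 processes.
        print(dhif_cost_model(256**3, 8192, alpha=1e-6, beta=1e-9))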

    "Compress and eliminate" solver for symmetric positive definite sparse matrices

    We propose a new approximate factorization for solving linear systems with symmetric positive definite sparse matrices. In a nutshell, the algorithm applies block Gaussian elimination hierarchically and additionally compresses the fill-in. Systems that admit efficient compression of the fill-in mostly arise from discretizations of partial differential equations. We show that the resulting factorization can be used as an efficient preconditioner and compare the proposed approach with state-of-the-art direct and iterative solvers.
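
    A minimal dense-NumPy sketch of one "eliminate, then compress the fill-in" step, under the assumption that compression is done by a truncated SVD; all names and sizes are illustrative, not the paper's implementation:

        import numpy as np

        def eliminate_and_compress(A, k, rank):
            """Eliminate the leading k-by-k block of an SPD matrix A and
            compress the resulting fill-in (the Schur complement update)
            to the given rank. Dense stand-in for the sparse algorithm."""
            A11, A21, A22 = A[:k, :k], A[k:, :k], A[k:, k:]
            # Block Gaussian elimination: Schur complement of A11.
            W = np.linalg.solve(A11, A21.T)   # A11^{-1} A12
            fill_in = A21 @ W                 # dense update A21 A11^{-1} A12
            # Compress the fill-in with a truncated SVD.
            U, s, Vt = np.linalg.svd(fill_in, full_matrices=False)
            fill_lr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
            return A22 - fill_lr              # approximate Schur complement

        rng = np.random.default_rng(0)
        B = rng.standard_normal((64, 64))
        A = B @ B.T + 64 * np.eye(64)         # SPD test matrix
        S = eliminate_and_compress(A, k=16, rank=8)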

    Variational training of neural network approximations of solution maps for physical models

    A novel solve-training framework is proposed to train neural networks to represent low-dimensional solution maps of physical models. The solve-training framework uses the neural network as the ansatz of the solution map and trains the network variationally via loss functions derived from the underlying physical models. The framework avoids the expensive data preparation of the traditional supervised training procedure, which requires labels for the input data, and still achieves an effective representation of the solution map adapted to the input data distribution. The efficiency of the solve-training framework is demonstrated by obtaining solution maps for linear and nonlinear elliptic equations, and maps from potentials to ground states of linear and nonlinear Schrödinger equations.
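
    A minimal sketch of label-free variational training for a linear model $Au = f$, assuming a small PyTorch network as the solution-map ansatz; everything here (network, sizes, optimizer) is an illustrative assumption, not the paper's setup:

        import torch

        n = 32
        B = torch.randn(n, n)
        A = B @ B.T + n * torch.eye(n)        # fixed SPD operator

        # Neural-network ansatz for the solution map f -> u.
        net = torch.nn.Sequential(
            torch.nn.Linear(n, 128), torch.nn.Tanh(), torch.nn.Linear(128, n))
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)

        for step in range(500):
            f = torch.randn(256, n)           # inputs from the data distribution
            u = net(f)                        # candidate solutions, no labels needed
            # Variational loss from the model itself: residual ||A u - f||^2,
            # so no precomputed solutions (labels) are required.
            loss = ((u @ A.T - f) ** 2).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()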

    A distributed-memory hierarchical solver for general sparse linear systems

    We present a parallel hierarchical solver for general sparse linear systems on distributed-memory machines. For large-scale problems, this fully algebraic algorithm is faster and more memory-efficient than sparse direct solvers because it exploits the low-rank structure of fill-in blocks. Depending on the accuracy of low-rank approximations, the hierarchical solver can be used either as a direct solver or as a preconditioner. The parallel algorithm is based on data decomposition and requires only local communication for updating boundary data on every processor. Moreover, the computation-to-communication ratio of the parallel algorithm is approximately the volume-to-surface-area ratio of the subdomain owned by every processor. We present various numerical results to demonstrate the versatility and scalability of the parallel algorithm.
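
    A back-of-the-envelope illustration of the computation-to-communication claim for a cubic subdomain of side m; purely illustrative arithmetic, not code from the paper:

        def volume_to_surface_ratio(m):
            """Cubic subdomain with m grid points per side:
            volume ~ m**3 unknowns updated locally,
            surface ~ 6*m**2 boundary values exchanged with neighbors."""
            return m**3 / (6 * m**2)   # grows linearly in m

        # Doubling the subdomain side doubles the ratio, so larger local
        # problems amortize the boundary communication better.
        print([volume_to_surface_ratio(m) for m in (16, 32, 64)])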

    Recursively Preconditioned Hierarchical Interpolative Factorization for Elliptic Partial Differential Equations

    The hierarchical interpolative factorization for elliptic partial differential equations is a fast algorithm for approximate sparse matrix inversion in linear or quasilinear time. Its accuracy can degrade, however, when applied to strongly ill-conditioned problems. Here, we propose a simple modification that can significantly improve the accuracy at no additional asymptotic cost: applying a block Jacobi preconditioner before each level of skeletonization. This dramatically limits the impact of the underlying system conditioning and enables the construction of robust and highly efficient preconditioners even at quite modest compression tolerances. Numerical examples demonstrate the performance of the new approach.
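
    A minimal sketch of a block Jacobi step applied before a compression level, under the assumption that it is a symmetric two-sided scaling by the inverse Cholesky factors of the diagonal blocks; dense NumPy, illustrative only:

        import numpy as np
        from scipy.linalg import cholesky, solve_triangular

        def block_jacobi_scale(A, b):
            """Two-sided block Jacobi scaling: returns L^{-1} A L^{-T}, where
            L is the block-diagonal Cholesky factor of A's diagonal b-by-b
            blocks. The scaled matrix has identity diagonal blocks and is a
            better-conditioned input for the next skeletonization level."""
            M = A.copy()
            n = A.shape[0]
            for i in range(0, n, b):
                j = min(i + b, n)
                L = cholesky(M[i:j, i:j], lower=True)       # D_i = L L^T
                M[i:j, :] = solve_triangular(L, M[i:j, :], lower=True)
                M[:, i:j] = solve_triangular(L, M[:, i:j].T, lower=True).T
            return M

        X = np.random.default_rng(1).standard_normal((48, 48))
        A = X @ X.T + 48 * np.eye(48)
        M = block_jacobi_scale(A, b=12)   # diagonal blocks of M are now identity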

    Second Order Accurate Hierarchical Approximate Factorization of Sparse SPD Matrices

    We describe a second-order accurate approach to sparsifying the off-diagonal blocks in hierarchical approximate factorizations of sparse symmetric positive definite matrices. The norm of the error made by the new approach depends quadratically, not linearly, on the error in the low-rank approximation of the given block. The analysis of the resulting two-level preconditioner shows that the preconditioner is second-order accurate as well. We incorporate the new approach into the recent Sparsified Nested Dissection algorithm [SIAM J. Matrix Anal. Appl., 41 (2020), pp. 715-746] and test it on a wide range of problems. The new approach halves the number of Conjugate Gradient iterations needed for convergence, with almost the same factorization complexity, improving the total runtime of the algorithm. Our approach can be incorporated into other rank-structured methods for solving sparse linear systems.
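
    For contrast, a sketch of the standard first-order step the paper improves on: replacing an off-diagonal block with a truncated SVD perturbs the matrix by exactly the first discarded singular value, i.e. linearly in the truncation error. The paper's second-order construction is not reproduced here; names and sizes are illustrative:

        import numpy as np

        def sparsify_offdiag(A, k, rank):
            """Replace the off-diagonal block A12 (and A21 = A12^T) of a
            symmetric matrix by a rank-`rank` truncated SVD. The 2-norm of
            the perturbation equals the first discarded singular value."""
            A12 = A[:k, k:]
            U, s, Vt = np.linalg.svd(A12, full_matrices=False)
            A12_lr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
            A_approx = A.copy()
            A_approx[:k, k:] = A12_lr
            A_approx[k:, :k] = A12_lr.T
            return A_approx, s[rank]   # approximation and dropped singular value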