
    Butterfly Factorization

    The paper introduces the butterfly factorization as a data-sparse approximation for matrices that satisfy a complementary low-rank property. The factorization can be constructed efficiently if either fast algorithms for applying the matrix and its adjoint are available or the entries of the matrix can be sampled individually. For an N × N matrix, the resulting factorization is a product of O(log N) sparse matrices, each with O(N) non-zero entries. Hence, it can be applied rapidly in O(N log N) operations. Numerical results are provided to demonstrate the effectiveness of the butterfly factorization and its construction algorithms.
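
    To make the complementary low-rank property concrete, here is a minimal numerical check (our own sketch in Python, not the paper's code) on the DFT matrix, a standard example: whenever the row-block size times the column-block size equals N, the block is numerically low-rank, and diagonal scalings move any such block to the top-left corner without changing its rank.

        import numpy as np

        # Sketch: empirically verify the complementary low-rank property on the
        # N x N DFT matrix. Any block whose dimensions multiply to N equals a
        # row/column diagonal scaling of a corner block, so checking corner
        # blocks suffices.
        N = 1024
        F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)

        def numerical_rank(B, tol=1e-10):
            s = np.linalg.svd(B, compute_uv=False)
            return int(np.sum(s > tol * s[0]))

        for rows in [1024, 128, 32, 8, 1]:
            cols = N // rows            # complementary splitting: rows * cols == N
            B = F[:rows, :cols]         # representative block at this level
            print(f"{rows:5d} x {cols:5d} block: numerical rank {numerical_rank(B)}")

    The ranks stay small and roughly level-independent, which is the structure the O(log N) sparse butterfly factors exploit.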

    H-matrix accelerated second moment analysis for potentials with rough correlation

    We consider the efficient solution of partial differential equations for strongly elliptic operators with constant coefficients and stochastic Dirichlet data by the boundary integral equation method. The computation of the solution's two-point correlation is well understood if the two-point correlation of the Dirichlet data is known and sufficiently smooth. Unfortunately, the problem becomes much more involved in the case of rough data. We will show that the concept of H-matrix arithmetic provides a powerful tool to cope with this problem. By employing a parametric surface representation, we end up with an H-matrix arithmetic based on balanced cluster trees. This considerably simplifies the implementation and improves the performance of the H-matrix arithmetic. Numerical experiments are provided to validate and quantify the presented methods and algorithms.
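
    The compression underlying the H-matrix arithmetic is that kernel blocks coupling well-separated clusters are numerically low-rank. A minimal sketch (our own illustration with an assumed 1/|x - y| kernel, not the paper's boundary integral setup):

        import numpy as np

        # Sketch: singular values of an admissible (well-separated) kernel block
        # decay rapidly, so the block admits an accurate low-rank factorization.
        rng = np.random.default_rng(0)
        X = rng.random((400, 3))            # cluster near the origin
        Y = rng.random((400, 3)) + 4.0      # well-separated cluster
        B = 1.0 / np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)

        s = np.linalg.svd(B, compute_uv=False)
        print("leading singular values:", s[:8] / s[0])
        print("rank at 1e-8 tolerance :", int(np.sum(s > 1e-8 * s[0])))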

    Hierarchical interpolative factorization for elliptic operators: integral equations

    This paper introduces the hierarchical interpolative factorization for integral equations (HIF-IE) associated with elliptic problems in two and three dimensions. This factorization takes the form of an approximate generalized LU decomposition that permits the efficient application of the discretized operator and its inverse. HIF-IE is based on the recursive skeletonization algorithm but incorporates a novel combination of two key features: (1) a matrix factorization framework for sparsifying structured dense matrices and (2) a recursive dimensional reduction strategy to decrease the cost. Thus, higher-dimensional problems are effectively mapped to one dimension, and we conjecture that constructing, applying, and inverting the factorization all have linear or quasilinear complexity. Numerical experiments support this claim and further demonstrate the performance of our algorithm as a generalized fast multipole method, direct solver, and preconditioner. HIF-IE is compatible with geometric adaptivity and can handle both boundary and volume problems. MATLAB codes are freely available. (Comment: 39 pages, 14 figures, 13 tables; to appear, Comm. Pure Appl. Math.)
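
    The skeletonization step at the core of HIF-IE rests on the interpolative decomposition (ID), which re-expresses a numerically low-rank block through a few of its own columns. A hedged sketch using SciPy's ID routines on an assumed 2D Laplace kernel block between separated clusters (not the authors' MATLAB codes):

        import numpy as np
        import scipy.linalg.interpolative as sli

        # Sketch: compress a far-field kernel block by an interpolative
        # decomposition; the selected columns are the "skeletons" that
        # recursive skeletonization keeps.
        rng = np.random.default_rng(1)
        X = rng.random((300, 2))            # sources
        Y = rng.random((300, 2)) + 3.0      # well-separated targets
        A = np.log(np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2))

        k, idx, proj = sli.interp_decomp(A, 1e-10)   # rank k meets the tolerance
        A_id = sli.reconstruct_matrix_from_id(A[:, idx[:k]], idx, proj)
        print("ID rank:", k)
        print("relative error:", np.linalg.norm(A - A_id) / np.linalg.norm(A))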

    Supporting GENP with Random Multipliers

    We prove that standard Gaussian random multipliers are expected to stabilize numerically both Gaussian elimination with no pivoting (GENP) and block Gaussian elimination. Our tests show similar results when we apply circulant random multipliers instead of Gaussian ones. (Comment: 14 pages.)
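
    A quick numerical illustration of the claim (our own toy GENP in Python, not the paper's tests): plain GENP breaks down when a leading principal block is singular, while GENP applied after premultiplication by a Gaussian random matrix succeeds with high probability.

        import numpy as np

        def genp(A):
            """LU factorization with no pivoting: A = L @ U."""
            A = A.astype(float).copy()
            n = A.shape[0]
            L = np.eye(n)
            for j in range(n - 1):
                L[j+1:, j] = A[j+1:, j] / A[j, j]      # fails if A[j, j] == 0
                A[j+1:, j:] -= np.outer(L[j+1:, j], A[j, j:])
            return L, np.triu(A)

        rng = np.random.default_rng(2)
        n = 200
        A = rng.standard_normal((n, n))
        A[0, 0] = 0.0                    # singular leading block: plain GENP divides by zero
        b = rng.standard_normal(n)

        G = rng.standard_normal((n, n))  # Gaussian random multiplier
        L, U = genp(G @ A)               # GENP on the preconditioned system G A x = G b
        x = np.linalg.solve(U, np.linalg.solve(L, G @ b))
        print("relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))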

    Hierarchical interpolative factorization for elliptic operators: differential equations

    This paper introduces the hierarchical interpolative factorization for elliptic partial differential equations (HIF-DE) in two (2D) and three dimensions (3D). This factorization takes the form of an approximate generalized LU/LDL decomposition that facilitates the efficient inversion of the discretized operator. HIF-DE is based on the multifrontal method but uses skeletonization on the separator fronts to sparsify the dense frontal matrices and thus reduce the cost. We conjecture that this strategy yields linear complexity in 2D and quasilinear complexity in 3D. Estimated linear complexity in 3D can be achieved by skeletonizing the compressed fronts themselves, which amounts geometrically to a recursive dimensional reduction scheme. Numerical experiments support our claims and further demonstrate the performance of our algorithm as a fast direct solver and preconditioner. MATLAB codes are freely available. (Comment: 37 pages, 13 figures, 12 tables; to appear, Comm. Pure Appl. Math. arXiv admin note: substantial text overlap with arXiv:1307.266)
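
    HIF-DE itself is too involved for a short sketch, but the baseline it accelerates is easy to show (our own illustration using SciPy's SuperLU wrapper, not the authors' codes): a conventional sparse direct factorization of the five-point 2D Laplacian, whose dense separator fronts are exactly what the skeletonization compresses.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import splu

        # Sketch: sparse direct solve of the 2D Laplacian on an m x m grid.
        m = 64
        I = sp.identity(m)
        T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(m, m))
        A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()   # n = m^2 unknowns

        lu = splu(A)                                  # factor once ...
        b = np.ones(m * m)
        x = lu.solve(b)                               # ... then solve fast
        print("relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))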

    Computing the eigenvalues of symmetric H²-matrices by slicing the spectrum

    The computation of eigenvalues of large-scale matrices arising from finite element discretizations has gained significant interest in the last decade. Here we present a new algorithm based on slicing the spectrum that takes advantage of the rank structure of resolvent matrices in order to compute m eigenvalues of the generalized symmetric eigenvalue problem in O(n m log^α n) operations, where α > 0 is a small constant.
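
    The slicing idea is easy to demonstrate with dense linear algebra standing in for the paper's H²-arithmetic (a sketch under that substitution): by Sylvester's law of inertia, the number of eigenvalues of A below a shift σ can be read off an LDL^T factorization of A - σI, and bisection on σ then isolates any targeted eigenvalue.

        import numpy as np
        from scipy.linalg import ldl, eigh

        def count_below(A, sigma):
            """Number of eigenvalues of A below sigma, via LDL^T inertia."""
            _, D, _ = ldl(A - sigma * np.eye(A.shape[0]))
            return int(np.sum(np.linalg.eigvalsh(D) < 0))  # D has 1x1/2x2 blocks

        def kth_eigenvalue(A, k, lo, hi, tol=1e-10):
            """k-th smallest eigenvalue (1-based) located by bisection."""
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if count_below(A, mid) >= k:
                    hi = mid             # at least k eigenvalues lie below mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)

        rng = np.random.default_rng(3)
        M = rng.standard_normal((60, 60))
        A = (M + M.T) / 2                # dense symmetric test matrix
        k = 5
        print("sliced eigenvalue:", kth_eigenvalue(A, k, -30.0, 30.0))
        print("reference (eigh) :", eigh(A, eigvals_only=True)[k - 1])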

    Preconditioning For Matrix Computation

    Preconditioning is a classical subject of the numerical solution of linear systems of equations. The goal is to turn a linear system into another one that is easier to solve. The two central subjects of numerical matrix computation are LIN-SOLVE, that is, the solution of linear systems of equations, and EIGEN-SOLVE, that is, the approximation of the eigenvalues and eigenvectors of a matrix. We focus on the former subject of LIN-SOLVE and show an application to EIGEN-SOLVE. We achieve our goal by applying randomized additive and multiplicative preconditioning. We facilitate the numerical solution by decreasing the condition number of the coefficient matrix of the linear system, which enables reliable numerical solution of LIN-SOLVE. After the introduction in Chapter 1, we recall definitions and auxiliary results in Chapter 2. Then in Chapter 3 we precondition the linear systems of equations solved at every iteration of the Inverse Power Method applied to EIGEN-SOLVE. These systems are ill conditioned, that is, they have large condition numbers, and we decrease them by applying randomized additive preconditioning. This is our first subject. Our second subject is randomized multiplicative preconditioning for LIN-SOLVE. In this way we support the application of GENP, that is, Gaussian elimination with no pivoting, and of block Gaussian elimination. We prove that the proposed preconditioning methods are efficient when we apply Gaussian random matrices as preconditioners. We confirm these results with extensive numerical tests. The tests also show that, on average, the same methods work as efficiently when we use random structured preconditioners instead, in particular circulant ones, but we show both formally and experimentally that these preconditioners fail in the case of LIN-SOLVE for the unitary matrix of the discrete Fourier transform, for which Gaussian preconditioners work efficiently.
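
    A minimal sketch of the randomized additive preconditioning idea (our own toy setup, not the thesis code): when A is ill conditioned because of a few tiny singular values, adding a scaled Gaussian term UV^T whose rank matches the numerical nullity is expected to produce a well-conditioned matrix; a solution of the original system can then be recovered via the Sherman-Morrison-Woodbury formula.

        import numpy as np

        # Sketch: A has numerical nullity r (exactly r tiny singular values);
        # a rank-r Gaussian additive term restores a moderate condition number.
        rng = np.random.default_rng(4)
        n, r = 200, 3
        Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
        Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
        s = np.ones(n)
        s[-r:] = 1e-14                   # tiny tail: cond(A) ~ 1e14
        A = Q1 @ np.diag(s) @ Q2.T

        U = rng.standard_normal((n, r))
        V = rng.standard_normal((n, r))
        E = U @ V.T
        E *= np.linalg.norm(A, 2) / np.linalg.norm(E, 2)  # scale ||E|| to ||A||
        print("cond(A)    : %.2e" % np.linalg.cond(A))
        print("cond(A + E): %.2e" % np.linalg.cond(A + E))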