    Hierarchical matrix arithmetic with accumulated updates

    Hierarchical matrices can be used to construct efficient preconditioners for partial differential and integral equations by taking advantage of low-rank structures in triangular factorizations and inverses of the corresponding stiffness matrices. The setup phase of these preconditioners relies heavily on low-rank updates, which are responsible for a large part of the algorithm's total run-time, particularly for matrices resulting from three-dimensional problems. This article presents a new algorithm that significantly reduces the number of low-rank updates and can reduce the setup time by 50 percent or more.
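
    The mechanism behind accumulated updates can be illustrated in isolation. The following is a minimal numpy sketch with dense factors (hypothetical sizes, not the paper's $\mathcal{H}$-matrix algorithm): instead of recompressing after every low-rank update, several updates are collected into one factorization and truncated once via QR and SVD.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100
    # collect eight rank-1 updates without truncating in between
    us = [rng.standard_normal((n, 1)) for _ in range(8)]
    vs = [rng.standard_normal((n, 1)) for _ in range(8)]
    U, V = np.hstack(us), np.hstack(vs)

    # one recompression of the accumulated update U @ V.T via QR + SVD
    QU, RU = np.linalg.qr(U)
    QV, RV = np.linalg.qr(V)
    W, s, Zt = np.linalg.svd(RU @ RV.T)
    k = int(np.sum(s > 1e-12 * s[0]))   # adaptive rank for accuracy 1e-12
    Uc = QU @ W[:, :k] * s[:k]          # columns scaled by singular values
    Vc = QV @ Zt[:k].T                  # Uc @ Vc.T approximates U @ V.T
    ```

    A single truncation of the stacked factors costs roughly as much as one truncation per update would, which is why postponing the compression saves a large share of the run-time.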

    Adaptive compression of large vectors

    Numerical algorithms for elliptic partial differential equations frequently employ error estimators and adaptive mesh refinement strategies in order to reduce the computational cost. We can extend these techniques to general vectors by splitting the vectors into a hierarchically organized partition of subsets and using appropriate bases to represent the corresponding parts of the vectors. This leads to the concept of \emph{hierarchical vectors}. A hierarchical vector with $m$ subsets and bases of rank $k$ requires $mk$ units of storage, and typical operations like the evaluation of norms and inner products or linear updates can be carried out in $\mathcal{O}(mk^2)$ operations. Using an auxiliary basis, the product of a hierarchical vector and an $\mathcal{H}^2$-matrix can also be computed in $\mathcal{O}(mk^2)$ operations, and if the result admits an approximation with $\widetilde m$ subsets in the original basis, this approximation can be obtained in $\mathcal{O}((m+\widetilde m)k^2)$ operations. Since it is possible to compute the corresponding approximation error exactly, sophisticated error control strategies can be used to ensure optimal compression. Possible applications of hierarchical vectors include the approximation of eigenvectors and the solution of time-dependent problems with moving local irregularities.
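
    As a toy illustration of the storage scheme and the coefficient-level inner product (a hypothetical flat partition with orthonormal bases; the paper uses a hierarchically organized partition and general bases):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    m, p, k = 16, 32, 4                # m subsets of size p, rank-k bases
    # one orthonormal basis per subset; a vector is stored as m coefficient
    # vectors of length k instead of m*p entries
    B = [np.linalg.qr(rng.standard_normal((p, k)))[0] for _ in range(m)]
    c = [rng.standard_normal(k) for _ in range(m)]
    d = [rng.standard_normal(k) for _ in range(m)]

    # inner product from coefficients only: <x, y> = sum_i c_i^T (B_i^T B_i) d_i,
    # O(m k^2) in general; with orthonormal bases B_i^T B_i = I
    ip = sum(ci @ di for ci, di in zip(c, d))

    # dense reference vectors for comparison
    x = np.concatenate([Bi @ ci for Bi, ci in zip(B, c)])
    y = np.concatenate([Bi @ di for Bi, di in zip(B, d)])
    ```

    The coefficient representation needs $mk$ units of storage instead of $mp$, and the inner product never touches the full-length vectors.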

    Complexity estimates for triangular hierarchical matrix algorithms

    Triangular factorizations are an important tool for solving integral equations and partial differential equations with hierarchical matrices ($\mathcal{H}$-matrices). Experiments show that using an $\mathcal{H}$-matrix LR factorization to solve a system of linear equations is superior to direct inversion with respect to both accuracy and efficiency, but so far theoretical estimates quantifying these advantages have been missing. Due to a lack of symmetry in $\mathcal{H}$-matrix algorithms, we cannot hope to prove that the LR factorization takes one third of the operations of the inversion or the matrix multiplication, as in standard linear algebra. We can, however, prove that the LR factorization together with two other operations of similar complexity, namely the inversion and multiplication of triangular matrices, requires no more operations than the matrix multiplication. We complete the estimates by proving an improved upper bound for the complexity of the matrix multiplication, designed for recently introduced variants of classical $\mathcal{H}$-matrices.
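
    In hypothetical notation (writing $W_{\mathrm{op}}$ for the number of arithmetic operations of an operation on the given block structure; the paper's own symbols may differ), the central estimate can be summarized as

    ```latex
    W_{\mathrm{LR}} + W_{\mathrm{InvTri}} + W_{\mathrm{MulTri}} \;\le\; W_{\mathrm{Mul}},
    ```

    i.e., the three triangular operations taken together are bounded by one matrix multiplication, rather than each one separately being a fixed fraction of it as in the dense case.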

    Hybrid matrix compression for high-frequency problems

    Boundary element methods for the Helmholtz equation lead to large dense matrices that can only be handled if efficient compression techniques are used. Directional compression techniques can reach good compression rates even for high-frequency problems. Currently, there are two approaches to directional compression: analytic methods approximate the kernel function, while algebraic methods approximate submatrices. Analytic methods are quite fast and proven to be robust, while algebraic methods yield significantly better compression rates. We present a hybrid method that combines the speed and reliability of analytic methods with the good compression rates of algebraic methods.

    Approximation of integral operators by Green quadrature and nested cross approximation

    We present a fast algorithm that constructs a data-sparse approximation of matrices arising in the context of integral equation methods for elliptic partial differential equations. The new algorithm uses Green's representation formula in combination with quadrature to obtain a first approximation of the kernel function and then applies nested cross approximation to obtain a more efficient representation. The resulting $\mathcal{H}^2$-matrix representation requires $\mathcal{O}(nk)$ units of storage for an $n\times n$ matrix, where $k$ depends on the prescribed accuracy.
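
    The algebraic half of such constructions, cross approximation, can be sketched in isolation. Below is a textbook partially pivoted adaptive cross approximation (ACA) applied to a single admissible block of the kernel $1/|x-y|$; the nested variant of the paper additionally shares bases across a cluster tree, which this toy code does not do.

    ```python
    import numpy as np

    def aca(row, col, nr, nc, eps=1e-8, kmax=40):
        """Partially pivoted cross approximation: build A ~ U @ V.T from
        row/column oracles without forming all of A. Textbook sketch."""
        U, V = np.zeros((nr, 0)), np.zeros((nc, 0))
        used, i = {0}, 0
        for _ in range(kmax):
            r = row(i) - U[i] @ V.T              # residual of pivot row i
            j = int(np.argmax(np.abs(r)))
            if abs(r[j]) < eps:                  # residual small: stop
                break
            c = (col(j) - U @ V[j]) / r[j]       # scaled residual column j
            U = np.hstack([U, c[:, None]])
            V = np.hstack([V, r[:, None]])
            cand = np.abs(c); cand[list(used)] = -1.0
            i = int(np.argmax(cand)); used.add(i)  # next pivot row
        return U, V

    # single admissible block of the kernel 1/|x - y| on separated clusters
    x = np.linspace(0.0, 1.0, 60)
    y = np.linspace(3.0, 4.0, 60)
    A = 1.0 / np.abs(x[:, None] - y[None, :])
    U, V = aca(lambda i: A[i].copy(), lambda j: A[:, j].copy(), 60, 60)
    ```

    Each step touches only one row and one column of the block, so the rank-$k$ approximation is built from $\mathcal{O}(k(nr+nc))$ kernel evaluations.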

    Efficient arithmetic operations for rank-structured matrices based on hierarchical low-rank updates

    Many matrices appearing in numerical methods for partial differential equations and integral equations are rank-structured, i.e., they contain submatrices that can be approximated by matrices of low rank. A relatively general class of rank-structured matrices is the class of $\mathcal{H}^2$-matrices: they can reach the optimal order of complexity, but are still general enough for a large number of practical applications. We consider algorithms for performing algebraic operations with $\mathcal{H}^2$-matrices, i.e., for approximating the matrix product, inverse, or factorizations in almost linear complexity. The new approach is based on local low-rank updates that can be performed in linear complexity. These updates can be combined with a recursive procedure to approximate the product of two $\mathcal{H}^2$-matrices, and these products can be used to approximate the matrix inverse and the LR or Cholesky factorization. Numerical experiments indicate that the new method leads to preconditioners that require $\mathcal{O}(n)$ units of storage, can be evaluated in $\mathcal{O}(n)$ operations, and take $\mathcal{O}(n \log n)$ operations to set up.
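
    How factorizations reduce to low-rank updates can already be seen in a dense two-block model. The sketch below (hypothetical toy setup with dense diagonal blocks, not the recursive $\mathcal{H}^2$ algorithm) performs one step of a block Cholesky factorization with a rank-$k$ off-diagonal coupling: the Schur complement is exactly a rank-$k$ update of the remaining diagonal block.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, k = 40, 3

    def spd(n):                          # random SPD diagonal block
        M = rng.standard_normal((n, n))
        return M @ M.T + n * np.eye(n)

    A11, A22 = spd(n), spd(n)
    U, V = rng.standard_normal((n, k)), rng.standard_normal((n, k))
    # model matrix: [[A11, U V^T], [V U^T, A22]] with rank-k coupling

    L11 = np.linalg.cholesky(A11)
    X = np.linalg.solve(L11, U)          # L11^{-1} U: only k solves needed
    L21 = V @ X.T                        # off-diagonal factor stays rank k
    S = A22 - V @ (X.T @ X) @ V.T        # Schur complement: a rank-k update

    # dense reference for the Schur complement
    S_ref = A22 - (V @ U.T) @ np.linalg.solve(A11, U @ V.T)
    ```

    Because the elimination only ever adds low-rank terms to untouched blocks, the whole factorization can be organized as a sequence of cheap local low-rank updates.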

    GCA-$\mathcal{H}^2$ matrix compression for electrostatic simulations

    We consider a compression method for boundary element matrices arising in the context of the computation of electrostatic fields. Green cross approximation combines an analytic approximation of the kernel function based on Green's representation formula and quadrature with an algebraic cross approximation scheme in order to obtain both the robustness of analytic methods and the efficiency of algebraic ones. One particularly attractive property of the new method is that it is well-suited for acceleration via general-purpose graphics processors (GPUs).

    Approximation of boundary element matrices using GPGPUs and nested cross approximation

    The efficiency of boundary element methods depends crucially on the time required for setting up the stiffness matrix. The far-field part of the matrix can be approximated by compression schemes like the fast multipole method or $\mathcal{H}$-matrix techniques. The near-field part is typically approximated by special quadrature rules like the Sauter-Schwab technique that can handle the singular integrals appearing in the diagonal and near-diagonal matrix elements. Since computing one element of the matrix requires only a small amount of data but a fairly large number of operations, we propose to use general-purpose graphics processing units (GPGPUs) to handle vectorizable portions of the computation: near-field computations are ideally suited for vectorization and can therefore be handled very well by GPGPUs. Modern far-field compression schemes can be split into a small adaptive portion that exhibits divergent control flows, and should therefore be handled by the CPU, and a vectorizable portion that can again be sent to GPGPUs. We propose a hybrid algorithm that splits the computation into tasks for CPUs and GPGPUs. The method presented in this article reduces the setup time of boundary integral operators by a factor of 19 to 30 for both the Laplace and the Helmholtz equation in 3D when two consumer GPGPUs are used instead of a quad-core CPU.

    Approximation of the high-frequency Helmholtz kernel by nested directional interpolation

    We present and analyze an approximation scheme for a class of highly oscillatory kernel functions, taking the 2D and 3D Helmholtz kernels as examples. The scheme is based on polynomial interpolation combined with suitable pre- and postmultiplication by plane waves. It is shown to converge exponentially in the polynomial degree and supports multilevel approximation techniques. Our convergence analysis may be employed to establish exponential convergence of certain classes of fast methods for discretizations of the Helmholtz integral operator that feature polylogarithmic-linear complexity.
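
    The effect of the plane-wave pre- and postmultiplication can be demonstrated numerically. In the toy 2D setup below (hypothetical wave number, geometry, and degree, not the paper's experiments), directly interpolating the Helmholtz kernel $e^{i\kappa|x-y|}/|x-y|$ over a source interval fails because the kernel oscillates, while interpolating the kernel after the plane wave $e^{i\kappa\, c\cdot(x-y)}$ in the target direction $c$ has been factored out succeeds, since the remainder is smooth.

    ```python
    import numpy as np

    kappa = 50.0                             # wave number (assumed)
    x = np.array([10.0, 0.3])                # far-field target point
    c = x / np.linalg.norm(x)                # direction of x from the source box

    def kernel(y1):                          # Helmholtz kernel on the segment y=(y1,0)
        d = x[None, :] - np.stack([y1, np.zeros_like(y1)], axis=1)
        r = np.linalg.norm(d, axis=1)
        return np.exp(1j * kappa * r) / r

    def smooth_part(y1):                     # kernel with the plane wave removed
        d = x[None, :] - np.stack([y1, np.zeros_like(y1)], axis=1)
        r = np.linalg.norm(d, axis=1)
        return np.exp(1j * kappa * (r - d @ c)) / r

    def bary(xs, fs, t):                     # barycentric Chebyshev interpolation
        w = (-1.0) ** np.arange(len(xs)); w[0] *= 0.5; w[-1] *= 0.5
        q = w / (t[:, None] - xs[None, :])
        return (q @ fs) / q.sum(axis=1)

    m = 12
    xs = np.cos(np.pi * np.arange(m) / (m - 1))   # Chebyshev points on [-1, 1]
    t = np.linspace(-0.99, 0.99, 400)             # evaluation grid (off the nodes)

    plain = bary(xs, kernel(xs), t)
    d_t = x[None, :] - np.stack([t, np.zeros_like(t)], axis=1)
    directional = np.exp(1j * kappa * (d_t @ c)) * bary(xs, smooth_part(xs), t)

    err_plain = np.max(np.abs(plain - kernel(t)))
    err_dir = np.max(np.abs(directional - kernel(t)))
    ```

    With $\kappa = 50$ the kernel runs through roughly sixteen oscillation periods over the source interval, so twelve interpolation points are hopeless for the plain interpolant, while the directionally modulated interpolant is accurate to many digits.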

    An analysis of a butterfly algorithm

    Butterfly algorithms are an effective multilevel technique to compress discretizations of integral operators with highly oscillatory kernel functions. The particular version of the butterfly algorithm considered here realizes the transfer between levels by Chebyshev interpolation. We present a refinement of the analysis that improves the stability estimates underlying the error bounds.
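
    The level transfer that the analysis concerns can be sketched on one interval split (a minimal 1D model with an assumed smooth test function, not the full butterfly factorization): an $m$-point Chebyshev interpolant on a parent interval is re-expanded on a child half-interval by evaluating it at the child's Chebyshev nodes, and the stability question is how much this re-interpolation amplifies errors.

    ```python
    import numpy as np

    def cheb1(m, a, b):
        """Chebyshev points of the first kind on [a, b] with barycentric weights."""
        th = (2 * np.arange(m) + 1) * np.pi / (2 * m)
        xs = 0.5 * (a + b) + 0.5 * (b - a) * np.cos(th)
        return xs, (-1.0) ** np.arange(m) * np.sin(th)

    def bary(xs, w, fs, t):                 # barycentric interpolation at t
        q = w / (t[:, None] - xs[None, :])
        return (q @ fs) / q.sum(axis=1)

    f = lambda t: np.cos(4.0 * t) * np.exp(t)    # smooth test function
    m = 12
    xp, wp = cheb1(m, -1.0, 1.0)                 # parent nodes
    xc, wc = cheb1(m, -1.0, 0.0)                 # child (left half) nodes

    # transfer: evaluate the parent interpolant at the child nodes
    fc = bary(xp, wp, f(xp), xc)

    # accuracy of the transferred child expansion on the child interval
    t = np.linspace(-0.99, -0.01, 200)
    err = np.max(np.abs(bary(xc, wc, fc, t) - f(t)))
    ```

    The transferred expansion inherits the parent's interpolation error multiplied by a stability constant of the re-interpolation operator; sharpening the bound on that constant is exactly what tightens the overall error estimate.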