19 research outputs found

    Graph coarsening: From scientific computing to machine learning

    The general method of graph coarsening or graph reduction has been a remarkably useful and ubiquitous tool in scientific computing, and it is now starting to have a similar impact in machine learning. The goal of this paper is to take a broad look at coarsening techniques that have been successfully deployed in scientific computing and to see how similar principles are finding their way into more recent applications related to machine learning. In scientific computing, coarsening plays a central role in algebraic multigrid methods as well as in the related class of multilevel incomplete LU factorizations. In machine learning, graph coarsening goes under various names, e.g., graph downsampling or graph reduction. Its goal in most cases is to replace some original graph by one with fewer nodes whose structure and characteristics are similar to those of the original graph. As will be seen, a common strategy in these methods is to rely on spectral properties to define the coarse graph.
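
    The closing remark about spectral properties invites a small illustration. The following is a minimal sketch in Python (NumPy/SciPy) of one common instantiation of the idea: cluster nodes on a few low Laplacian eigenvectors and form the coarse graph by aggregation. The function and its names are ours, chosen for illustration; this is not a specific algorithm from the survey.

        import numpy as np
        from scipy.cluster.vq import kmeans2

        def spectral_coarsen(A, n_coarse):
            """Coarsen a small undirected graph via its Laplacian spectrum.

            A: dense symmetric adjacency matrix (n x n)
            n_coarse: number of nodes in the coarse graph
            Returns the coarse adjacency A_c = P^T A P and the node labels.
            """
            n = A.shape[0]
            L = np.diag(A.sum(axis=1)) - A      # combinatorial graph Laplacian
            _, vecs = np.linalg.eigh(L)         # eigenvalues in ascending order
            k = min(n_coarse, n - 1)
            coords = vecs[:, 1:k + 1]           # skip the constant nullspace vector
            _, labels = kmeans2(coords, n_coarse, minit='++', seed=0)
            P = np.zeros((n, n_coarse))         # piecewise-constant aggregation matrix
            P[np.arange(n), labels] = 1.0
            A_c = P.T @ A @ P                   # Galerkin-style coarse operator
            np.fill_diagonal(A_c, 0.0)          # drop self-loops created by aggregation
            return A_c, labels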

    Composable code generation for high order, compatible finite element methods

    It has been widely recognised in the HPC communities across the world that exploiting modern computer architectures, including exascale machines, to their full extent requires software communities to adapt their algorithms. Computational methods with a high ratio of floating point operations to bandwidth are favourable. For solving partial differential equations, which can model many physical problems, high order finite element methods can calculate approximations with high efficiency when a good solver is employed. Matrix-free algorithms solve the corresponding equations with a high arithmetic intensity. Vectorisation speeds up the operations by executing one instruction on multiple data elements. Another recent development for solving partial differential equations are compatible (mimetic) finite element methods. In particular with application to geophysical flows, compatible discretisations exhibit desired numerical properties required for accurate approximations. Among others, this has been recognised by the UK Met Office, whose new dynamical core for weather and climate forecasting is built on a compatible discretisation. Hybridisation has proven to be an efficient solver for the corresponding equation systems, because it removes some inter-elemental coupling and localises expensive operations. This thesis combines the recent advances on vectorised, matrix-free, high order finite element methods in the HPC community on the one hand and hybridised, compatible discretisations in the geophysical community on the other. In previous work, a code generation framework was developed to support the localised linear algebra required for hybridisation. First, the framework is adapted to support vectorisation; it is then extended so that the equations can be solved fully matrix-free. Promising performance results complete the thesis.
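
    To make the matrix-free idea concrete, here is a toy sketch in Python (NumPy), assuming 1D piecewise-linear elements on a uniform mesh. It only illustrates the gather/compute/scatter pattern that avoids assembling the global matrix; it is unrelated to the code generation framework the thesis extends.

        import numpy as np

        def stiffness_apply_matrix_free(u, n_elems, h):
            """Apply the 1D P1 stiffness operator to u without assembling it.

            Works element by element: gather local dofs, apply the small dense
            element matrix, scatter-add the result. All flops happen in small
            dense kernels, which is what gives matrix-free methods their high
            arithmetic intensity.
            """
            K_e = (1.0 / h) * np.array([[1.0, -1.0],
                                        [-1.0, 1.0]])   # element stiffness matrix
            out = np.zeros_like(u)
            for e in range(n_elems):
                dofs = [e, e + 1]               # local-to-global map on a 1D mesh
                out[dofs] += K_e @ u[dofs]
            return out

        # Sanity check against the explicitly assembled operator.
        n, h = 8, 1.0 / 8
        u = np.random.rand(n + 1)
        A = np.zeros((n + 1, n + 1))
        for e in range(n):
            A[np.ix_([e, e + 1], [e, e + 1])] += (1.0 / h) * np.array([[1.0, -1.0],
                                                                       [-1.0, 1.0]])
        assert np.allclose(A @ u, stiffness_apply_matrix_free(u, n, h))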

    Adaptive Coarse Spaces for FETI-DP and BDDC Methods

    Iterative substructuring methods are well suited for the parallel iterative solution of elliptic partial differential equations. These methods are based on subdividing the computational domain into smaller nonoverlapping subdomains and solving smaller problems on these subdomains. The solutions are then joined to a global solution in an iterative process. In the case of a scalar diffusion equation or the equations of linear elasticity with a diffusion coefficient or Young's modulus, respectively, that is constant on each subdomain, the numerical scalability of iterative substructuring methods can be proven. However, the convergence rate deteriorates significantly if the coefficient in the underlying partial differential equation (PDE) has a high contrast across and along the interface of the substructures. Even sophisticated scalings often do not lead to a good convergence rate. One possibility to enhance the convergence rate is to choose appropriate primal constraints. In the present work, three different adaptive approaches to compute suitable primal constraints are discussed. First, we discuss an adaptive approach introduced by Dohrmann and Pechstein that draws on the operator P_D, which is an important ingredient in the analysis of iterative substructuring methods like the dual-primal Finite Element Tearing and Interconnecting (FETI-DP) method and the closely related Balancing Domain Decomposition by Constraints (BDDC) method. We also discuss variations of the method of Dohrmann and Pechstein introduced by Klawonn, Radtke, and Rheinbach. Secondly, we describe an adaptive method introduced by Mandel and Sousedík which is also based on the P_D-operator. Recently, a proof for the condition number bound in this method was provided by Klawonn, Radtke, and Rheinbach. Thirdly, we discuss an adaptive approach introduced by Klawonn, Radtke, and Rheinbach that enforces a Poincaré- or Korn-like inequality and an extension theorem. In all approaches, generalized eigenvalue problems are used to compute a coarse space that leads to an upper bound of the condition number which is independent of the jumps in the coefficient and depends only on an a priori prescribed tolerance. Proofs and numerical tests for all approaches are given in two dimensions. Finally, all approaches are compared.
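
    All three approaches share a common computational core: solve local generalized eigenvalue problems on the interface and turn the eigenvectors beyond a prescribed tolerance into primal constraints. The following is a minimal sketch in Python (SciPy), where S_local and M_local are placeholders for the method-specific interface operators, which differ between the three approaches; the selection convention (above or below the tolerance) also varies between formulations.

        import numpy as np
        from scipy.linalg import eigh

        def adaptive_primal_constraints(S_local, M_local, tol):
            """Select adaptive primal constraints from a local generalized EVP.

            Solves S_local v = lambda * M_local v (M_local assumed symmetric
            positive definite) and returns the eigenvectors whose eigenvalues
            exceed tol. Enforcing these 'bad' modes as primal constraints is
            what yields a condition number bound that depends on the tolerance
            but not on the coefficient jumps.
            """
            vals, vecs = eigh(S_local, M_local)   # dense generalized eigensolver
            bad = vals > tol                      # modes the coarse space must fix
            return vecs[:, bad], vals[bad]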

    Fast iterative solvers for Cahn-Hilliard problems

    Otto-von-Guericke-Universität Magdeburg, Faculty of Mathematics, Dissertation, 2016, by M. Sc. Jessica Bosch. Bibliography: pages [247]-25