
    Refined Isogeometric Analysis for fluid mechanics and electromagnetism

    Starting from a highly continuous isogeometric analysis discretization, we introduce hyperplanes that partition the domain into subdomains and reduce the continuity of the discretization spaces at these hyperplanes. As the continuity is reduced, the number of degrees of freedom in the system grows. The resulting discretization spaces are finer than standard maximal-continuity IGA spaces. Despite the increase in the number of degrees of freedom, these finer spaces deliver simulation results faster with direct solvers than both traditional finite element analysis and isogeometric analysis on meshes with a fixed number of elements. In this work, we analyze the impact of continuity reduction on the number of Floating Point Operations (FLOPs) and the computational time required to solve fluid flow and electromagnetic problems on structured meshes with uniform polynomial orders. Theoretical estimates show that for sufficiently large grids, an optimal continuity reduction decreases the computational cost by a factor of O(p^2). Numerical results confirm these theoretical estimates. In a 2D mesh with one million elements and polynomial order equal to five, the discretization including an optimal continuity pattern allows us to solve the vector electric field, the scalar magnetic field, and the fluid flow problems an order of magnitude faster than when using a highly continuous IGA discretization. 3D numerical results exhibit more moderate savings due to the limited mesh sizes considered in this work.
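    The continuity-reduction mechanism can be illustrated with ordinary B-spline knot vectors: raising the multiplicity of an interior knot to the polynomial degree p drops the continuity there from C^{p-1} to C^0 and adds p-1 degrees of freedom. A minimal NumPy sketch (the knot vector and degree are illustrative choices, not the paper's setup):

    ```python
    import numpy as np

    def n_basis(knots, p):
        # Number of B-spline basis functions (degrees of freedom)
        # spanned by an open knot vector of degree p.
        return len(knots) - p - 1

    p = 3  # polynomial degree

    # Open knot vector on [0, 1] with 8 elements and maximal C^{p-1}
    # continuity at every interior knot.
    interior = np.linspace(0.0, 1.0, 9)[1:-1]
    smooth = np.concatenate([[0.0] * (p + 1), interior, [1.0] * (p + 1)])

    # rIGA-style separator: raise the multiplicity of the knot at 0.5
    # to p, dropping the continuity there from C^{p-1} to C^0.
    sep = np.sort(np.concatenate([smooth, [0.5] * (p - 1)]))

    print(n_basis(smooth, p))  # 11 DOFs with maximal continuity
    print(n_basis(sep, p))     # 13 DOFs after the C^0 separator (p-1 more)
    ```

    Each C^0 separator therefore enlarges the approximation space slightly while decoupling the subdomains it bounds, which is what the direct solver exploits.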

    Refined isogeometric analysis for generalized Hermitian eigenproblems

    We use refined isogeometric analysis (rIGA) to solve generalized Hermitian eigenproblems (Ku = λMu). rIGA conserves the desirable properties of maximum-continuity isogeometric analysis (IGA) while reducing the solution cost by adding zero-continuity basis functions, which decrease the matrix connectivity. As a result, rIGA enriches the approximation space and reduces the interconnection between degrees of freedom. We compare the computational costs of rIGA versus those of IGA when employing a Lanczos eigensolver with a shift-and-invert spectral transformation. When all eigenpairs within a given interval [λ_s,λ_e] are of interest, we select several shifts σ_k ∈ [λ_s,λ_e] using a spectrum-slicing technique. For each shift σ_k, the factorization cost of the spectral transformation matrix K − σ_k M controls the total computational cost of the eigensolution. Several multiplications of the operator matrix (K − σ_k M)^−1 M by vectors follow this factorization. Let p be the polynomial degree of the basis functions and assume that IGA has maximum continuity of p−1. When using rIGA, we introduce C^0 separators at certain element interfaces to minimize the factorization cost. For this setup, our theoretical estimates predict computational savings of up to O(p^2) for computing a fixed number of eigenpairs in the asymptotic regime, that is, for large problem sizes. Yet, our numerical tests show that for moderate-size eigenproblems, the total observed computational cost reduction is O(p). In addition, rIGA improves the accuracy of each of the first N_0 eigenpairs, where N_0 is the total number of modes of the original maximum-continuity IGA discretization.
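    The shift-and-invert step can be sketched with SciPy's ARPACK-based eigsh (an implicitly restarted Lanczos iteration) on a small stand-in generalized eigenproblem; the matrices and the shift below are illustrative, not the paper's rIGA systems:

    ```python
    import numpy as np
    from scipy.sparse import diags, identity
    from scipy.sparse.linalg import eigsh

    n = 200
    # 1D Laplacian stiffness K and an identity mass matrix M: small
    # stand-ins for the IGA/rIGA matrices considered in the paper.
    K = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
    M = identity(n, format='csc')

    # Shift-and-invert: ARPACK factorizes K - sigma*M once, then runs a
    # Lanczos iteration with the transformed operator, so it converges to
    # the eigenvalues of K u = lambda M u closest to the shift sigma.
    sigma = 0.05
    vals, vecs = eigsh(K, k=6, M=M, sigma=sigma, which='LM')
    print(np.sort(vals))  # the six eigenvalues nearest 0.05
    ```

    The factorization of K − σM dominates the cost here as well, which is why reducing its fill-in via C^0 separators pays off.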

    Goal-oriented self-adaptive hp finite element simulation of 3D DC borehole resistivity simulations

    In this paper, we present a goal-oriented self-adaptive hp Finite Element Method (hp-FEM) with shared data structures and a parallel multi-frontal direct solver. The algorithm automatically generates (without any user interaction) a sequence of meshes delivering exponential convergence of a prescribed quantity of interest with respect to the number of degrees of freedom. The sequence of meshes is generated from a given initial mesh by performing h (breaking elements into smaller elements), p (adjusting polynomial orders of approximation), or hp (both) refinements on the finite elements. The new parallel implementation utilizes a computational mesh shared between multiple processors. All computational algorithms, including the automatic goal-oriented hp adaptivity and the solver, work fully in parallel. We describe the parallel self-adaptive hp-FEM algorithm with a shared computational domain, as well as its efficiency measurements. We apply the described methodology to the three-dimensional simulation of direct-current borehole resistivity measurements through casing in the presence of invasion. © 2011 Published by Elsevier Ltd.
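    The adaptive loop can be caricatured in a few lines: compute element-wise error indicators, refine where the indicator is largest, repeat. The toy below does pure h-refinement of a 1D interpolation problem with a point singularity; it is a schematic only and omits the p-refinement, goal-orientation, and parallelism of the actual algorithm:

    ```python
    import numpy as np

    # Toy h-adaptive loop: interpolate f(x) = sqrt(x), which has a
    # point singularity at x = 0, bisecting only the element with the
    # largest local (midpoint interpolation) error indicator.
    f = np.sqrt
    nodes = np.linspace(0.0, 1.0, 5)  # 4 initial elements

    for step in range(12):
        mids = 0.5 * (nodes[:-1] + nodes[1:])
        # Element-wise indicator: error of the linear interpolant at
        # the element midpoint.
        eta = np.abs(f(mids) - 0.5 * (f(nodes[:-1]) + f(nodes[1:])))
        worst = np.argmax(eta)
        nodes = np.sort(np.append(nodes, mids[worst]))

    h = np.diff(nodes)
    # Elements cluster near the singularity at x = 0:
    print(h.min(), h.max(), len(nodes) - 1)
    ```

    The real algorithm drives the indicators by a goal functional (via a dual problem) and chooses between h, p, and hp refinement per element, but the solve-estimate-mark-refine structure is the same.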

    Direct solvers performance on h-adapted grids

    We analyse the performance of direct solvers when applied to a system of linear equations arising from an h-adapted, C^0 finite element space. Theoretical estimates are derived for typical h-refinement patterns arising as a result of a point, edge, or face singularity, as well as boundary layers. They are based on elimination trees constructed specifically for the considered grids. The theoretical estimates are compared with experiments performed with MUMPS using the nested-dissection algorithm from the METIS library to construct the elimination tree. The numerical experiments show the same performance for the cases where our trees are identical with those constructed by the nested-dissection algorithm, and worse performance for some cases where our trees differ. We also present numerical experiments for cases with mixed singularities, for which it is unknown how to construct optimal elimination trees. In all analysed cases, the use of h-adaptive grids significantly reduces the cost of the direct solver algorithm per unknown as compared to uniform grids. The theoretical estimates predict, and the experimental data confirm, that the computational complexity is linear for various refinement patterns. In most cases, the cost of the direct solver per unknown is lower when employing anisotropic refinements as opposed to isotropic ones.
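    The dependence of direct-solver cost on the elimination ordering can be observed directly with any sparse LU code. The sketch below uses SciPy's SuperLU on a 2D Laplacian stand-in (not the paper's MUMPS/METIS setup) and compares the fill-in of the factors under a fill-reducing ordering versus the natural ordering:

    ```python
    import numpy as np
    from scipy.sparse import diags, identity, kron
    from scipy.sparse.linalg import splu

    # 2D Laplacian on an n x n grid: a stand-in for the C^0 finite
    # element systems analysed in the paper.
    n = 40
    T = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
    I = identity(n, format='csc')
    A = (kron(T, I) + kron(I, T)).tocsc()

    # The column ordering determines the elimination tree, which in
    # turn controls the fill-in of the factors and hence the FLOPs.
    lu_ordered = splu(A, permc_spec='COLAMD')   # fill-reducing ordering
    lu_natural = splu(A, permc_spec='NATURAL')  # plain lexicographic order

    fill_ordered = lu_ordered.L.nnz + lu_ordered.U.nnz
    fill_natural = lu_natural.L.nnz + lu_natural.U.nnz
    print(fill_ordered, fill_natural)  # the ordered factors are sparser
    ```

    On adapted grids, the point of the paper is that a tree built to match the refinement pattern can beat such general-purpose orderings.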

    Refined Isogeometric Analysis for a preconditioned conjugate gradient solver

    Starting from a highly continuous Isogeometric Analysis (IGA) discretization, refined Isogeometric Analysis (rIGA) introduces C^0 hyperplanes that act as separators for the direct LU factorization solver. As a result, the total computational cost required to solve the corresponding system of equations using a direct LU factorization solver is dramatically reduced, by up to a factor of 55 (Garcia et al., 2017). At the same time, rIGA enriches the IGA spaces, thus improving the best approximation error. In this work, we extend the complexity analysis of rIGA to the case of iterative solvers. We build an iterative solver as follows: we first construct the Schur complements using a direct solver over small subdomains (macro-elements). We then assemble those Schur complements into a global skeleton system. Subsequently, we solve this system iteratively using Conjugate Gradients (CG) with an incomplete LU (ILU) preconditioner. For a 2D Poisson model problem with a structured mesh and a uniform polynomial degree of approximation, rIGA achieves moderate savings with respect to IGA in terms of the number of Floating Point Operations (FLOPs) and computational time (in seconds) required to solve the resulting system of linear equations. For instance, for a mesh with four million elements and polynomial degree p=3, the iterative solver is approximately 2.6 times faster (in time) when applied to the rIGA system than to the IGA one. These savings occur because the skeleton rIGA system contains fewer non-zero entries than the IGA one. The opposite situation occurs for 3D problems, and as a result, 3D rIGA discretizations provide no gains with respect to their IGA counterparts when considering iterative solvers.
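    The ILU-preconditioned CG stage can be sketched with SciPy on a small 2D Poisson stand-in (the macro-element Schur-complement assembly is omitted; the matrix size and drop tolerance are illustrative):

    ```python
    import numpy as np
    from scipy.sparse import diags, identity, kron
    from scipy.sparse.linalg import LinearOperator, cg, spilu

    # 2D Poisson system: a stand-in for the assembled skeleton system.
    n = 50
    T = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
    I = identity(n, format='csc')
    A = (kron(T, I) + kron(I, T)).tocsc()
    b = np.ones(A.shape[0])

    # Incomplete LU factorization, wrapped as a CG preconditioner.
    ilu = spilu(A, drop_tol=1e-4)
    M = LinearOperator(A.shape, matvec=ilu.solve)

    x, info = cg(A, b, M=M)
    print(info)  # 0 signals successful convergence
    ```

    The sparser the system handed to ILU/CG, the cheaper each iteration, which is the mechanism behind the 2D savings reported above.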

    Exploiting the Kronecker product structure of φ-functions in exponential integrators

    Exponential time integrators are well-established methods for the time discretization of semilinear systems of ordinary differential equations. These methods use φ-functions, which are matrix functions related to the exponential. This work introduces an algorithm to speed up the computation of the action of φ-functions on vectors for two-dimensional (2D) matrices expressed as a Kronecker sum. To this end, we present an auxiliary exponential-related matrix function that we express using Kronecker products of one-dimensional matrices. We exploit state-of-the-art implementations of φ-functions to compute this auxiliary function's action and then recover the original φ-function action by solving a Sylvester equation system. Our approach allows us to save memory and solve exponential integrators of 2D+time problems in a fraction of the time traditional methods need. We analyze the method's performance considering different linear operators and the nonlinear 2D+time Allen–Cahn equation.
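    The Kronecker-sum structure being exploited is easiest to see for the plain matrix exponential (the φ_0 case): for L = A ⊕ B, the two Kronecker terms commute, so exp(L) factorizes into the exponentials of the 1D operators and its action on vec(U) costs two small dense products instead of one large one. A minimal NumPy check of the identity (random small matrices; the paper's algorithm additionally handles higher φ-functions via a Sylvester solve):

    ```python
    import numpy as np
    from scipy.linalg import expm

    # For a Kronecker sum L = kron(A, I) + kron(I, B), the two terms
    # commute, so exp(L) = kron(exp(A), exp(B)).  With column-major
    # vec(U), exp(L) @ vec(U) = vec(exp(B) @ U @ exp(A).T): the 2D
    # action reduces to two small 1D matrix exponentials.
    rng = np.random.default_rng(0)
    n = 6
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n))
    U = rng.standard_normal((n, n))

    L = np.kron(A, np.eye(n)) + np.kron(np.eye(n), B)

    direct = expm(L) @ U.flatten('F')                  # large 2D operator
    factored = (expm(B) @ U @ expm(A).T).flatten('F')  # two 1D operators
    print(np.allclose(direct, factored))  # True
    ```

    For an n x n grid, the factored form works with n x n matrices instead of the n^2 x n^2 Kronecker-sum operator, which is the source of the memory and time savings claimed above.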

    Variational Formulations for Explicit Runge-Kutta Methods

    Variational space-time formulations for partial di fferential equations have been of great interest in the last decades, among other things, because they allow to develop mesh-adaptive algorithms. Since it is known that implicit time marching schemes have variational structure, they are often employed for adaptivity. Previously, Galerkin formulations of explicit methods were introduced for ordinary di fferential equations employing speci fic inexact quadrature rules. In this work, we prove that the explicit Runge-Kutta methods can be expressed as discontinuous-in-time Petrov-Galerkin methods for the linear di ffusion equation. We systematically build trial and test functions that, after exact integration in time, lead to one, two, and general stage explicit Runge-Kutta methods. This approach enables us to reproduce the existing time-domain (goal-oriented) adaptive algorithms using explicit methods in time