    Petrov-Galerkin formulation equivalent to the residual minimization method for finding an optimal test function

    Numerical solutions of Partial Differential Equations with the Finite Element Method have multiple applications in science and engineering. Several challenging problems require special stabilization methods to deliver accurate results of the numerical simulations. The advection-dominated diffusion problem, employed to model pollution propagation in the atmosphere, is one example. Unstable numerical methods generate unphysical oscillations that make no physical sense. Obtaining accurate and stable numerical simulations is difficult, and the choice of stabilization method depends on the parameters of the partial differential equation, requiring the deep knowledge of an expert in numerical analysis. We propose a method to construct and train an artificial expert in stabilizing numerical simulations based on partial differential equations. We create a neural-network-driven artificial intelligence that makes decisions about the method of stabilizing computer simulations. It automatically stabilizes difficult numerical simulations at a linear computational cost by generating the optimal test functions. These test functions can be utilized for building an unconditionally stable system of linear equations. The optimal test functions proposed by the artificial intelligence do not depend on the right-hand side, and thus they may be utilized in a large class of PDE-based simulations with different forcings and boundary conditions. We test our method on a model one-dimensional advection-dominated diffusion problem.
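    The instability this abstract targets is easy to reproduce. The following minimal sketch (a classical illustration, not the paper's neural-network approach; all parameter values are illustrative) contrasts a central, Galerkin-like discretization of a 1D advection-dominated diffusion problem with a stabilized upwind one.

```python
import numpy as np

def solve_advection_diffusion(n, eps, upwind=False):
    # Solve -eps*u'' + u' = 0 on (0, 1), u(0)=0, u(1)=1,
    # on a uniform mesh with n interior nodes.
    h = 1.0 / (n + 1)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        # diffusion stencil: -eps*(u_{i-1} - 2 u_i + u_{i+1}) / h^2
        diag, lower, upper = 2*eps/h**2, -eps/h**2, -eps/h**2
        if upwind:
            # first-order upwind convection: stable (M-matrix)
            diag += 1.0/h
            lower += -1.0/h
        else:
            # central convection: Galerkin-like, oscillates when
            # the cell Peclet number h/(2*eps) exceeds 1
            lower += -1.0/(2*h)
            upper += 1.0/(2*h)
        A[i, i] = diag
        if i > 0:
            A[i, i-1] = lower
        if i < n - 1:
            A[i, i+1] = upper
        else:
            b[i] -= upper * 1.0  # boundary condition u(1) = 1
    return np.linalg.solve(A, b)

u_central = solve_advection_diffusion(20, 1e-3)
u_upwind  = solve_advection_diffusion(20, 1e-3, upwind=True)
print(np.any(u_central < -1e-8))   # True: unphysical undershoots
print(np.all((u_upwind > -1e-12) & (u_upwind < 1 + 1e-12)))  # True
```

    The exact solution is monotone between 0 and 1, so the negative values of the central scheme are precisely the unphysical oscillations the abstract describes; choosing better test functions plays the role that upwinding plays in this toy version.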

    Automatic stabilization of finite-element simulations using neural networks and hierarchical matrices

    Petrov–Galerkin formulations with optimal test functions allow for the stabilization of finite element simulations. In particular, given a discrete trial space, the optimal test space induces a numerical scheme delivering the best approximation in terms of a problem-dependent energy norm. This ideal approach has two shortcomings: first, we need to explicitly know the set of optimal test functions; and second, the optimal test functions may have large supports, inducing expensive dense linear systems. A concise proposal on how to overcome these shortcomings has been raised during the last decade by the Discontinuous Petrov–Galerkin (DPG) methodology. However, DPG also has some limitations and difficulties: the method requires ultraweak variational formulations, obtained through a hybridization process, which are not trivial to implement at the discrete level. Our motivation is to offer a simpler alternative for the case of parametric PDEs, which can be used with any variational formulation. Indeed, parametric families of PDEs are an example where it is worth investing some (offline) computational effort to obtain stabilized linear systems that can be solved efficiently in an online stage, for a given range of parameters. Therefore, as a remedy for the first shortcoming, we explicitly compute (offline) a function mapping any PDE parameter to the matrix of coefficients of optimal test functions (in some basis expansion) associated with that PDE parameter. Next, as a remedy for the second shortcoming, we use low-rank approximation to hierarchically compress the (non-square) matrix of coefficients of optimal test functions. In order to accelerate this process, we train a neural network to learn a critical bottleneck of the compression algorithm (for a given set of PDE parameters). 
When solving online the resulting (compressed) Petrov–Galerkin formulation, we employ a GMRES iterative solver with inexpensive matrix–vector multiplications thanks to the low-rank features of the compressed matrix. We perform experiments showing that the full online procedure is as fast as an (unstable) Galerkin approach. We illustrate our findings by means of 2D–3D Eriksson–Johnson problems, together with the 2D Helmholtz equation.
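    The payoff of the low-rank compression shows up in the GMRES matrix–vector product. The sketch below (hypothetical sizes and random factors, standing in for the compressed matrix of optimal-test-function coefficients) wraps a rank-r factorization in a SciPy LinearOperator so each iteration costs O(r(n+m)) instead of O(nm).

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n, r = 200, 5                       # illustrative sizes; rank r << n
U = rng.standard_normal((n, r))     # hypothetical low-rank factors
V = rng.standard_normal((r, n))
A = np.eye(n) + U @ V               # identity shift keeps the system nonsingular

# Low-rank-aware matvec: never forms the dense n-by-n matrix.
def matvec(x):
    return x + U @ (V @ x)

op = LinearOperator((n, n), matvec=matvec, dtype=np.float64)
b = rng.standard_normal(n)
x, info = gmres(op, b)              # GMRES only ever calls matvec

residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
print(info == 0 and residual < 1e-3)
```

    Because A is a rank-5 perturbation of the identity, GMRES converges in a handful of iterations; the same LinearOperator pattern applies when the factors come from hierarchical compression rather than random data.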

    Alternating directions parallel hybrid memory iGRM direct solver for non-stationary simulations

    The three-dimensional isogeometric analysis (IGA-FEM) is a modern method for simulation. The idea is to utilize B-spline or NURBS basis functions both for the description of the computational domain and for the engineering computations. Refined isogeometric analysis (rIGA) employs a mixture of patches of elements with B-spline basis functions and C^0 separators between them. It enables a reduction of the computational cost of direct solvers. Both IGA and rIGA come with a challenging sparse matrix structure that is expensive to generate. In this paper, we show a hybrid parallelization method to reduce the computational cost of the integration phase using hybrid-memory parallel machines. The two-level parallelization includes the partitioning of the computational mesh into sub-domains on the first level (MPI) and loop parallelization on the second level (OpenMP). We show that hybrid parallelization of the integration reduces the contribution of this phase significantly. Thus, alternative algorithms for fast isogeometric integration are not necessary.
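    The two-level scheme can be mimicked in a single process: in the sketch below (an illustration, not the paper's MPI/OpenMP implementation), mesh sub-domains stand in for MPI ranks and a thread pool stands in for the OpenMP element loop, with each worker integrating its own batch of elements by Gauss quadrature.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def integrate_subdomain(elems, f):
    # 2-point Gauss quadrature on each 1D element [a, b]
    # (exact for polynomials up to degree 3)
    g = 1.0 / np.sqrt(3.0)
    total = 0.0
    for a, b in elems:
        mid, half = 0.5 * (a + b), 0.5 * (b - a)
        total += half * (f(mid - half * g) + f(mid + half * g))
    return total

f = lambda x: x**2
nodes = np.linspace(0.0, 1.0, 1001)
elements = list(zip(nodes[:-1], nodes[1:]))

# Level 1: partition the mesh into 4 "ranks" (MPI in the paper);
# Level 2: each partition's element loop runs on its own worker
# (OpenMP in the paper).
subdomains = [elements[i::4] for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    parts = list(pool.map(lambda e: integrate_subdomain(e, f), subdomains))
total = sum(parts)          # the "MPI reduction" step
print(abs(total - 1.0/3.0) < 1e-10)   # True: integral of x^2 on [0,1]
```

    The structure carries over directly: in IGA the per-element work is the quadrature of B-spline products into local matrices, which is exactly the expensive phase the hybrid parallelization targets.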

    Explicit-in-Time Variational Formulations for Goal-Oriented Adaptivity

    Goal-Oriented Adaptivity (GOA) is a powerful tool to accurately approximate physically relevant features of the solution of Partial Differential Equations (PDEs). It delivers optimal grids to solve challenging engineering problems. In time-dependent problems, GOA requires representing the error in the Quantity of Interest (QoI) as an integral over the whole space-time domain in order to reduce it via adaptive refinements. A full space-time variational formulation of the problem allows the aforementioned error representation. Thus, variational space-time formulations for PDEs have been of great interest in the last decades, among other things, because they allow the development of mesh-adaptive algorithms. Since it is known that implicit time-marching schemes have variational structure, they are often employed for GOA in time-domain problems, whereas explicit-in-time methods were introduced for Ordinary Differential Equations (ODEs). In this dissertation, we prove that the explicit Runge-Kutta (RK) methods can be expressed as discontinuous-in-time Petrov-Galerkin (dPG) methods for the linear advection-diffusion equation. We systematically build trial and test functions that, after exact integration in time, lead to one-, two-, and general-stage explicit RK methods. This approach enables us to reproduce the existing time-domain goal-oriented adaptive algorithms using explicit methods in time. Here, we employ the lowest-order dPG formulation that we propose to recover the Forward Euler method, and we derive an appropriate error representation. Then, we propose an explicit-in-time goal-oriented adaptive algorithm that performs local refinements in space. In terms of time-domain adaptivity, we impose the Courant-Friedrichs-Lewy (CFL) condition to ensure the stability of the method. We provide some numerical results in one dimension (1D)+time for the diffusion and advection-diffusion equations to show the performance of the proposed algorithm. 
On the other hand, time-domain adaptive algorithms involve solving a dual problem that runs backwards in time. This process is, in general, computationally expensive in terms of memory storage. In this work, we define a pseudo-dual problem that runs forwards in time. We also describe a forward-in-time adaptive algorithm that works for some specific problems. Although it is not possible to define a general dual problem running forwards in time that provides information about future states, we provide numerical evidence via one-dimensional problems in space to illustrate the efficiency of our algorithm as well as its limitations. As a complementary method, we propose a hybrid algorithm that employs the classical backward-in-time dual problem once and then performs the adaptive process forwards in time. We also generalize a novel error representation for goal-oriented adaptivity using (unconventional) pseudo-dual problems in the context of frequency-domain wave-propagation problems to the time-dependent wave equation. We show via 1D+time numerical results that the upper bounds for the new error representation are sharper than the classical ones. Therefore, this new error representation can be used to design more efficient goal-oriented adaptive methodologies. Finally, as classical Galerkin methods may lead to instabilities in advection-dominated diffusion problems and, therefore, inappropriate refinements, we propose a novel stabilized discretization method, which we call Isogeometric Residual Minimization (iGRM) with direction splitting. This method combines the benefits resulting from Isogeometric Analysis (IGA), residual minimization, and Alternating Direction Implicit (ADI) methods. We employ second-order ADI time-integrator schemes, B-spline basis functions in space and, at each time step, we solve a stabilized mixed method based on residual minimization. 
We show that the resulting system of linear equations has a Kronecker product structure, which results in a linear computational cost of the direct solver, even using implicit time-integration schemes together with the stabilized mixed formulation. We test our method on 2D and 3D+time advection-diffusion problems. The derivation of a time-domain goal-oriented strategy based on iGRM will be considered in future works.
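    The linear-cost claim rests on a standard identity for Kronecker systems: (Ax ⊗ Ay) vec(X) = vec(Ay X Axᵀ), so one large solve splits into two small directional solves. The sketch below (random, diagonally shifted stand-ins for the directional factor matrices) verifies the split against a dense reference solve.

```python
import numpy as np

rng = np.random.default_rng(1)
nx, ny = 40, 30
# Illustrative well-conditioned stand-ins for the 1D factor matrices
# arising from direction splitting.
Ax = rng.standard_normal((nx, nx)) + nx * np.eye(nx)
Ay = rng.standard_normal((ny, ny)) + ny * np.eye(ny)
f = rng.standard_normal(nx * ny)

# Kronecker solve: (Ax (x) Ay) vec(X) = vec(F)  <=>  Ay X Ax^T = F.
# Two small solves of sizes ny and nx replace one (nx*ny)-sized solve.
F = f.reshape((ny, nx), order='F')       # column-major unvec
X = np.linalg.solve(Ay, F)               # directional solve 1: Ay X1 = F
X = np.linalg.solve(Ax, X.T).T           # directional solve 2: X Ax^T = X1
u = X.flatten(order='F')

u_dense = np.linalg.solve(np.kron(Ax, Ay), f)   # cubic-cost reference
print(np.allclose(u, u_dense))                  # True
```

    The directional solves involve only banded 1D matrices in the actual iGRM setting, which is where the overall linear cost per time step comes from.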

    Robust Variational Physics-Informed Neural Networks

    We introduce a Robust version of the Variational Physics-Informed Neural Networks (RVPINNs) to approximate the solution of Partial Differential Equations (PDEs). We start from a weak Petrov-Galerkin formulation of the problem, select a discrete test space, and define a quadratic loss functional as in VPINNs. Whereas in VPINNs the loss depends upon the selected basis functions of a given test space, herein we minimize a loss based on the residual in the discrete dual norm, which is independent of the choice of test basis functions. We demonstrate that this loss is a reliable and efficient estimator of the true error in the energy norm. The proposed loss function requires computation of the Gram matrix inverse, similar to what occurs in traditional residual minimization methods. To validate our theoretical findings, we test the performance and robustness of our algorithm on several advection-dominated diffusion problems in one spatial dimension. We conclude that RVPINNs is a robust method.
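    The basis-independence of the dual-norm loss can be checked numerically. In the sketch below (a small random symmetric positive-definite stand-in for the Gram matrix, not an assembled PDE system), the loss rᵀG⁻¹r is evaluated before and after an arbitrary change of test basis, which transforms the residual moments as r → Pᵀr and the Gram matrix as G → PᵀGP.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
# Hypothetical Gram matrix of a test basis in the test-space inner
# product (symmetric positive definite), and residual moments r_i.
M = rng.standard_normal((n, n))
G = M @ M.T + n * np.eye(n)
r = rng.standard_normal(n)

def dual_norm_sq(r, G):
    # RVPINN-style loss: squared residual in the discrete dual norm,
    # ||r||^2 = r^T G^{-1} r (computed via a solve, not an inverse)
    return r @ np.linalg.solve(G, r)

# Change of test basis phi' = phi P: r -> P^T r, G -> P^T G P.
P = rng.standard_normal((n, n)) + n * np.eye(n)   # invertible
loss_old = dual_norm_sq(r, G)
loss_new = dual_norm_sq(P.T @ r, P.T @ G @ P)
print(np.isclose(loss_old, loss_new))   # True: loss is basis-independent
```

    Algebraically, (Pᵀr)ᵀ(PᵀGP)⁻¹(Pᵀr) = rᵀG⁻¹r, which is exactly the independence from the choice of test basis functions that distinguishes RVPINNs from plain VPINNs.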