
    Petrov-Galerkin formulation equivalent to the residual minimization method for finding an optimal test function

    Numerical solutions of Partial Differential Equations with the Finite Element Method have multiple applications in science and engineering. Several challenging problems require special stabilization methods to deliver accurate numerical results. The advection-dominated diffusion problem, employed for example to model pollution propagation in the atmosphere, is one of them. Unstable numerical methods generate unphysical oscillations that make no physical sense. Obtaining accurate and stable numerical simulations is difficult, and the choice of stabilization method depends on the parameters of the partial differential equation, so it requires the deep knowledge of an expert in numerical analysis. We propose a method to construct and train an artificial expert that stabilizes numerical simulations based on partial differential equations. We create a neural-network-driven artificial intelligence that decides how to stabilize a computer simulation. It automatically stabilizes difficult numerical simulations at a linear computational cost by generating the optimal test functions. These test functions can be utilized to build an unconditionally stable system of linear equations. The optimal test functions proposed by the artificial intelligence do not depend on the right-hand side, and thus they may be utilized in a large class of PDE-based simulations with different forcing terms and boundary conditions. We test our method on a model one-dimensional advection-dominated diffusion problem.
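
    For orientation, here is the standard optimal test function construction behind this equivalence (a generic statement from the residual-minimization literature, not a formula quoted from the paper). Given a bilinear form b(·,·), a discrete trial space U_h with basis {e_j}, and a test space V with inner product (·,·)_V, residual minimization and the equivalent Petrov-Galerkin scheme read

    u_h = \arg\min_{w_h \in U_h} \; \| B w_h - l \|_{V'}, \qquad \langle B w, v \rangle := b(w, v),

    b(u_h, T e_j) = l(T e_j) \quad \text{for all } j, \qquad \text{where } (T e_j, v)_V = b(e_j, v) \;\; \forall v \in V.

    Each optimal test function T e_j is obtained from one auxiliary solve in V that involves only b, which is consistent with the remark above that the optimal test functions are independent of the right-hand side.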

    Automatic stabilization of finite-element simulations using neural networks and hierarchical matrices

    Petrov–Galerkin formulations with optimal test functions allow for the stabilization of finite element simulations. In particular, given a discrete trial space, the optimal test space induces a numerical scheme delivering the best approximation in terms of a problem-dependent energy norm. This ideal approach has two shortcomings: first, we need to explicitly know the set of optimal test functions; and second, the optimal test functions may have large supports inducing expensive dense linear systems. A concise proposal on how to overcome these shortcomings has been raised during the last decade by the Discontinuous Petrov–Galerkin (DPG) methodology. However, DPG also has some limitations and difficulties: the method requires ultraweak variational formulations, obtained through a hybridization process, which is not trivial to implement at the discrete level. Our motivation is to offer a simpler alternative for the case of parametric PDEs, which can be used with any variational formulation. Indeed, parametric families of PDEs are an example where it is worth investing some (offline) computational effort to obtain stabilized linear systems that can be solved efficiently in an online stage, for a given range of parameters. Therefore, as a remedy for the first shortcoming, we explicitly compute (offline) a function mapping any PDE parameter to the matrix of coefficients of optimal test functions (in some basis expansion) associated with that PDE parameter. Next, as a remedy for the second shortcoming, we use low-rank approximation to hierarchically compress the (non-square) matrix of coefficients of optimal test functions. In order to accelerate this process, we train a neural network to learn a critical bottleneck of the compression algorithm (for a given set of PDE parameters). When solving online the resulting (compressed) Petrov–Galerkin formulation, we employ a GMRES iterative solver with inexpensive matrix–vector multiplications thanks to the low-rank features of the compressed matrix. We perform experiments showing that the full online procedure is as fast as an (unstable) Galerkin approach. We illustrate our findings by means of 2D and 3D Eriksson–Johnson problems, together with the 2D Helmholtz equation.
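
    The online step relies only on matrix–vector products with the compressed operator, which is what makes GMRES inexpensive here. The following minimal sketch illustrates that idea in isolation (the diagonal-plus-low-rank structure, sizes, and rank are assumptions for the example; this is not the authors' hierarchical-matrix code):

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    # Assumed compressed operator: a well-conditioned diagonal part plus a rank-k correction,
    # stored only through its factors so one product costs O(n*k) instead of O(n^2).
    n, k = 2000, 15
    rng = np.random.default_rng(0)
    d = 2.0 + rng.random(n)                         # diagonal part
    U = rng.standard_normal((n, k)) / np.sqrt(n)    # low-rank factors, scaled to keep the
    V = rng.standard_normal((k, n)) / np.sqrt(n)    # correction moderate
    b = rng.standard_normal(n)

    def matvec(x):
        # Apply (diag(d) + U V) x without ever assembling the dense n-by-n matrix.
        return d * x + U @ (V @ x)

    A = LinearOperator((n, n), matvec=matvec)
    x, info = gmres(A, b)
    print(info, np.linalg.norm(matvec(x) - b))      # info == 0 means GMRES converged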

    Explicit-in-Time Variational Formulations for Goal-Oriented Adaptivity

    Goal-Oriented Adaptivity (GOA) is a powerful tool to accurately approximate physically relevant features of the solution of Partial Differential Equations (PDEs). It delivers optimal grids to solve challenging engineering problems. In time-dependent problems, GOA requires representing the error in the Quantity of Interest (QoI) as an integral over the whole space-time domain in order to reduce it via adaptive refinements. A full space-time variational formulation of the problem allows the aforementioned error representation. Thus, variational space-time formulations for PDEs have been of great interest in the last decades, among other things, because they allow the development of mesh-adaptive algorithms. Since implicit time-marching schemes are known to have a variational structure, they are often employed for GOA in time-domain problems, whereas explicit-in-time methods were originally introduced for Ordinary Differential Equations (ODEs). In this dissertation, we prove that explicit Runge-Kutta (RK) methods can be expressed as discontinuous-in-time Petrov-Galerkin (dPG) methods for the linear advection-diffusion equation. We systematically build trial and test functions that, after exact integration in time, lead to one-, two-, and general-stage explicit RK methods. This approach enables us to reproduce the existing time-domain goal-oriented adaptive algorithms using explicit methods in time. Here, we employ the lowest-order dPG formulation that we propose to recover the Forward Euler method, and we derive an appropriate error representation. Then, we propose an explicit-in-time goal-oriented adaptive algorithm that performs local refinements in space. In terms of time-domain adaptivity, we impose the Courant-Friedrichs-Lewy (CFL) condition to ensure the stability of the method. We provide some numerical results in 1D+time for the diffusion and advection-diffusion equations to show the performance of the proposed algorithm. On the other hand, time-domain adaptive algorithms involve solving a dual problem that runs backwards in time. This process is, in general, computationally expensive in terms of memory storage. In this work, we define a pseudo-dual problem that runs forwards in time. We also describe a forward-in-time adaptive algorithm that works for some specific problems. Although it is not possible to define a general dual problem running forwards in time that provides information about future states, we provide numerical evidence via one-dimensional problems in space to illustrate the efficiency of our algorithm as well as its limitations. As a complementary method, we propose a hybrid algorithm that employs the classical backward-in-time dual problem once and then performs the adaptive process forwards in time. We also generalize a novel error representation for goal-oriented adaptivity based on (unconventional) pseudo-dual problems, originally developed in the context of frequency-domain wave-propagation problems, to the time-dependent wave equation. We show via 1D+time numerical results that the upper bounds for the new error representation are sharper than the classical ones. Therefore, this new error representation can be used to design more efficient goal-oriented adaptive methodologies. Finally, as classical Galerkin methods may lead to instabilities in advection-dominated diffusion problems and, therefore, inappropriate refinements, we propose a novel stabilized discretization method, which we call Isogeometric Residual Minimization (iGRM) with direction splitting.
    This method combines the benefits resulting from Isogeometric Analysis (IGA), residual minimization, and Alternating Direction Implicit (ADI) methods. We employ second-order ADI time-integrator schemes, B-spline basis functions in space, and, at each time step, we solve a stabilized mixed method based on residual minimization. We show that the resulting system of linear equations has a Kronecker product structure, which results in a linear computational cost of the direct solver, even when using implicit time integration schemes together with the stabilized mixed formulation. We test our method on 2D+time and 3D+time advection-diffusion problems. The derivation of a time-domain goal-oriented strategy based on iGRM will be considered in future work.
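
    The Kronecker-product claim at the end admits a compact generic illustration (a sketch with assumed 1D factors, not the iGRM implementation): a system (Ax ⊗ Ay) vec(X) = vec(F) is solved factor by factor, so only the small 1D matrices are ever factorized, which is what yields the linear cost mentioned above.

    import numpy as np

    def factor1d(n):
        # Assumed 1D factor: identity (mass-like) plus a scaled 1D Laplacian (diffusion-like), SPD.
        lap = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        return np.eye(n) + 0.1 * lap

    nx, ny = 40, 50
    Ax, Ay = factor1d(nx), factor1d(ny)
    F = np.random.default_rng(0).standard_normal((ny, nx))

    # (Ax kron Ay) vec(X) = vec(F)  <=>  Ay X Ax^T = F  (column-major vec),
    # so two small 1D solves replace one (nx*ny) x (nx*ny) solve.
    X = np.linalg.solve(Ay, F)          # left solve with Ay
    X = np.linalg.solve(Ax, X.T).T      # right solve, i.e. multiplication by inv(Ax^T)

    # Verification against the assembled Kronecker system (feasible only at small sizes).
    vec = lambda M: M.reshape(-1, order="F")
    print(np.linalg.norm(np.kron(Ax, Ay) @ vec(X) - vec(F)))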

    Space-time least-squares isogeometric method and efficient solver for parabolic problems

    In this paper, we propose a space-time least-squares isogeometric method to solve parabolic evolution problems, well suited for high-degree smooth splines in the space-time domain. We focus on the linear solver and its computational efficiency: thanks to the proposed formulation and to the tensor-product construction of space-time splines, we can design a preconditioner whose application requires the solution of a Sylvester-like equation, which is performed efficiently by the fast diagonalization method. The preconditioner is robust with respect to the spline degree and mesh size. The computational time required for its application, in a serial execution, is almost proportional to the number of degrees of freedom and independent of the polynomial degree. The proposed approach is also well suited for parallelization.
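
    The fast diagonalization step mentioned above can be shown in a few lines for a generic Sylvester equation A X + X B = C with symmetric factors (the 1D factors and sizes here are assumptions for illustration, not the paper's preconditioner blocks): only the small 1D matrices are eigendecomposed, and the core of the solve is an entrywise division.

    import numpy as np
    from scipy.linalg import solve_sylvester

    def lap1d(n):
        # Assumed symmetric positive definite 1D factor (stand-in for a univariate IGA matrix).
        return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

    def fast_diagonalization(A, B, C):
        # Solve A X + X B = C for symmetric A, B by diagonalizing only the 1D factors.
        la, Ua = np.linalg.eigh(A)
        lb, Ub = np.linalg.eigh(B)
        Ct = Ua.T @ C @ Ub                                 # transform the right-hand side
        Y = Ct / (la[:, None] + lb[None, :])               # divide by sums of eigenvalues
        return Ua @ Y @ Ub.T                               # transform back

    n, m = 60, 80
    A, B = lap1d(n), lap1d(m)
    C = np.random.default_rng(1).standard_normal((n, m))
    X = fast_diagonalization(A, B, C)
    print(np.linalg.norm(X - solve_sylvester(A, B, C)))    # agreement with the reference solver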

    Computational methods in cardiovascular mechanics

    The introduction of computational models in cardiovascular sciences has progressively brought new and unique tools for the investigation of physiopathology. Together with the dramatic improvement of imaging and measuring devices on one side, and of computational architectures on the other, mathematical and numerical models have provided a new, clearly noninvasive approach for understanding not only basic mechanisms but also patient-specific conditions, and for supporting the design and development of new therapeutic options. The term in silico is nowadays commonly accepted for indicating this new source of knowledge, added to traditional in vitro and in vivo investigations. The advantages of in silico methodologies are basically the low cost in terms of infrastructures and facilities, the reduced invasiveness and, in general, the intrinsic predictive capabilities based on the use of mathematical models. The disadvantages are generally identified in the distance between the real cases and their virtual counterparts required by the conceptual modeling, which can be detrimental to the reliability of numerical simulations.