    Physical Activation Functions (PAFs): An Approach for More Efficient Induction of Physics into Physics-Informed Neural Networks (PINNs)

    In recent years, Physics-Informed Neural Networks (PINNs) have evolved to fill the gap between Deep Learning (DL) methods and analytical or numerical approaches in scientific computing. However, training PINNs and optimally embedding physical models in them remain challenging. Here, we introduce the concept of Physical Activation Functions (PAFs). Instead of using general activation functions (AFs) such as ReLU, tanh, and sigmoid for all neurons, one can use AFs whose mathematical expression is inherited from the physical laws of the phenomenon under investigation. The formula of a PAF may be inspired by terms in the analytical solution of the problem; we show that PAFs can be derived from any mathematical expression related to the phenomenon, such as the initial or boundary conditions of the PDE system. We validate the advantages of PAFs on several PDEs, including the harmonic oscillator, the Burgers equation, the advection-convection equation, and heterogeneous diffusion equations. The main advantage of PAFs is a more efficient constraining and coupling of PINNs with the physical phenomenon under investigation and its underlying mathematical model. This added constraint significantly improved PINN predictions on test data outside the training distribution. Furthermore, applying PAFs reduced the size of the PINNs by up to 75% in different cases, and in some cases the loss terms were reduced by 1 to 2 orders of magnitude, which is noteworthy for improving PINN training. The number of iterations required to find the optimum was also significantly reduced. We conclude that using PAFs helps generate PINNs with less complexity and much greater validity over longer prediction ranges.

    Comment: 26 pages, 9 figures
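    The core idea of the abstract, swapping a generic activation for one inspired by the physics, can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the network is untrained with random weights, and the sinusoidal activation `paf_sin` is a hypothetical PAF chosen to match the oscillatory terms in a harmonic-oscillator solution.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def paf_sin(x, omega=1.0):
        # Hypothetical physical activation function: a sinusoid, mirroring the
        # sin/cos terms in the analytical solution of a harmonic oscillator.
        return np.sin(omega * x)

    def mlp_forward(x, weights, activation):
        # Forward pass of a tiny fully connected network; the only change
        # between a "generic" and a "physical" network is the activation.
        h = x
        for W, b in weights[:-1]:
            h = activation(h @ W + b)
        W, b = weights[-1]
        return h @ W + b  # linear output layer

    # Random weights for a 1-16-1 network (illustration only, not trained).
    weights = [
        (rng.normal(size=(1, 16)), np.zeros(16)),
        (rng.normal(size=(16, 1)), np.zeros(1)),
    ]

    x = np.linspace(0.0, 2 * np.pi, 5).reshape(-1, 1)
    y_paf = mlp_forward(x, weights, paf_sin)   # physics-inspired activation
    y_tanh = mlp_forward(x, weights, np.tanh)  # generic activation
    print(y_paf.shape, y_tanh.shape)
    ```

    In an actual PINN, both variants would be trained against the PDE residual loss; the abstract's claim is that the physics-matched activation needs fewer neurons and iterations to reach the same accuracy.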

    Model Order Reduction for Nonlinear and Time-Dependent Parametric Optimal Flow Control Problems

    The goal of this thesis is to provide an overview of the latest advances in reduced order methods for parametric optimal control governed by partial differential equations. Historically, parametric optimal control problems have been a powerful and elegant mathematical framework for filling the gap between collected data and model equations, making numerical simulations more reliable and accurate for forecasting purposes. For this reason, parametric optimal control problems are widespread in many research and industrial fields. However, their computational complexity limits their practical applicability, above all in a parametric, nonlinear, and time-dependent setting. Moreover, in a forecasting context, many simulations are required to gain a comprehensive understanding of very complex systems, and they must be completed in a short amount of time. In this context, reduced order methods are an asset for tackling this issue. We therefore employ space-time reduced techniques to deal with a wide range of equations. We propose a space-time proper orthogonal decomposition for nonlinear (and linear) time-dependent (and steady) problems, and a space-time Greedy algorithm with a new error estimate for parabolic governing equations. First, we validate the proposed techniques through many examples, from more academic ones to a test case of interest in coastal management exploiting the Shallow Water Equations model. Then, we focus on the great potential of optimal control techniques in several advanced applications. As a first example, we show some deterministic and stochastic environmental applications, adapting the reduced model to the latter case to achieve even faster numerical simulations. Another application concerns the role of optimal control in steering bifurcating phenomena arising in nonlinear governing equations. Finally, we propose a neural network-based paradigm to deal with the optimality system for parametric prediction.
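    The space-time proper orthogonal decomposition mentioned in the abstract can be sketched with a truncated SVD of a snapshot matrix. The snippet below is a minimal illustration under assumed dimensions: each column stacks the full space-time trajectory of the solution for one parameter value, and the synthetic data, the truncation rank `r`, and the matrix sizes are all hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic space-time snapshot matrix: each column is the flattened
    # space-time solution for one parameter sample (illustration only).
    n_space, n_time, n_params = 40, 25, 12
    snapshots = rng.normal(size=(n_space * n_time, n_params))

    # Space-time POD: SVD of the snapshot matrix, truncated to r modes.
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    r = 4
    basis = U[:, :r]           # reduced space-time basis (orthonormal columns)

    # Project a full-order solution onto the reduced basis and lift it back:
    # the reduced model evolves only the r coefficients instead of the
    # full space-time field.
    u_full = snapshots[:, 0]
    coeffs = basis.T @ u_full  # r reduced coordinates
    u_approx = basis @ coeffs  # reconstruction in the full space-time grid

    print(basis.shape, coeffs.shape)
    ```

    In a real reduced-order model, `snapshots` would come from full-order PDE solves at sampled parameters, and the optimality system would then be projected onto `basis` so that each new parameter query costs only an `r`-dimensional solve.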