11 research outputs found

    Constrained Deep Learning-based Model Predictive Control with Improved Constraint Satisfaction

    Machine learning techniques can help reduce the computational cost of model predictive control (MPC). In this paper, a constrained deep neural network design is proposed to learn and construct MPC policies for nonlinear input-affine dynamic systems. Constrained training of the neural networks helps enforce the MPC constraints effectively. We show the asymptotic stability of the learned policies. Additionally, different data sampling strategies are compared in terms of the generalization error of the learned policy. Furthermore, probabilistic feasibility and optimality guarantees are provided for the learned control policy. The proposed algorithm is implemented experimentally on a rotary inverted pendulum, and its control performance is demonstrated and compared with exact MPC and a conventionally trained learning-based MPC. The results show that the proposed algorithm improves constraint satisfaction while preserving the computational efficiency of the learned policy.
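    The core idea of enforcing control constraints through the network design itself can be illustrated with a minimal sketch. This is not the paper's architecture; the network size, weights, and bound below are illustrative. The output layer is squashed so the input constraint holds for any weights, trained or not.

```python
import math

# Illustrative constrained-policy sketch: a one-hidden-layer network
# whose output is squashed by tanh and rescaled, so the control
# constraint |u| <= U_MAX holds by construction for ANY weights.
U_MAX = 2.0

def policy(x, w1, w2):
    """Tiny MLP policy; tanh output scaled to the input bound."""
    h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x))) for row in w1]
    u_raw = sum(wi * hi for wi, hi in zip(w2, h))
    return U_MAX * math.tanh(u_raw)   # hard input-constraint satisfaction

# Even with arbitrary (untrained) weights the bound cannot be violated.
w1 = [[0.5, -1.2], [2.0, 0.3]]
w2 = [10.0, -7.0]                     # deliberately large output weights
u = policy([5.0, -3.0], w1, w2)
assert abs(u) <= U_MAX
```

    A projection or clipping layer plays the same role for box constraints; the point is that constraint satisfaction becomes a structural property rather than something training must learn.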

    Physics-Informed Neural Networks for Minimising Worst-Case Violations in DC Optimal Power Flow

    Physics-informed neural networks exploit existing models of the underlying physical systems to generate results of higher accuracy with fewer data. Such approaches can drastically reduce computation time and produce good estimates of computationally intensive processes in power systems, such as dynamic security assessment or optimal power flow. Combined with the extraction of worst-case guarantees on the neural network's performance, such networks can be applied in safety-critical applications in power systems and build a high level of trust among power system operators. This paper takes the first step and applies, for the first time to our knowledge, Physics-Informed Neural Networks with Worst-Case Guarantees to the DC Optimal Power Flow problem. We look for guarantees related to (i) the maximum constraint violation, (ii) the maximum distance between predicted and optimal decision variables, and (iii) the maximum sub-optimality over the entire input domain. On a range of PGLib-OPF networks, we demonstrate how physics-informed neural networks can be supplied with worst-case guarantees and how they can lead to reduced worst-case violations compared with conventional neural networks. Comment: The code to reproduce all simulation results is available online at https://github.com/RahulNellikkath/Physics-Informed-Neural-Network-for-DC-OP
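    The training signal behind such a physics-informed network can be sketched as a loss that penalizes physical constraint violations alongside the usual prediction error. This is a hedged sketch, not the paper's formulation: the function name, the penalty weight, and the two constraints shown (power balance and generator limits) are illustrative choices.

```python
# Illustrative physics-informed loss for a DC-OPF-style prediction:
# prediction MSE plus a penalty on violations of physical constraints.
# All names and numbers are assumptions for illustration.

def dc_opf_loss(pred_gen, target_gen, demand, p_min, p_max, lam=10.0):
    """Prediction MSE plus a physics penalty on constraint violations."""
    mse = sum((p - t) ** 2 for p, t in zip(pred_gen, target_gen)) / len(pred_gen)
    # Power balance: total generation must meet total demand.
    balance = abs(sum(pred_gen) - demand)
    # Generator limits: penalize any excursion outside [p_min, p_max].
    limits = sum(max(0.0, lo - p) + max(0.0, p - hi)
                 for p, lo, hi in zip(pred_gen, p_min, p_max))
    return mse + lam * (balance + limits)

# A feasible, exact prediction incurs zero loss ...
loss_feas = dc_opf_loss([0.6, 0.4], [0.6, 0.4], 1.0, [0.0, 0.0], [1.0, 1.0])
# ... while an infeasible one is penalized far beyond its MSE alone.
loss_infeas = dc_opf_loss([1.5, 0.4], [0.6, 0.4], 1.0, [0.0, 0.0], [1.0, 1.0])
assert loss_feas < loss_infeas
```

    The worst-case guarantees in the paper go a step further: rather than penalizing violations on training samples, they bound the violation over the whole input domain after training.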

    On the Use of Neural Networks for Full Waveform Inversion

    Neural networks have recently gained attention for solving inverse problems. One prominent methodology is Physics-Informed Neural Networks (PINNs), which can solve both forward and inverse problems. In the paper at hand, full waveform inversion is the considered inverse problem. The performance of PINNs is compared against classical adjoint optimization, focusing on three key aspects: the forward solver, the neural network Ansatz for the inverse field, and the sensitivity computation for the gradient-based minimization. Starting from PINNs, each of these key aspects is adapted individually until the classical adjoint optimization emerges. It is shown to be beneficial to use the neural network only for the discretization of the unknown material field, where it produces reconstructions without the oscillatory artifacts typically encountered in classical full waveform inversion approaches. Based on this finding, a hybrid approach is proposed that exploits both the efficient gradient computation of the continuous adjoint method and the neural network Ansatz for the unknown material field. This hybrid approach outperforms both PINNs and classical adjoint optimization in two- and three-dimensional examples.
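    The hybrid structure can be sketched as follows, under strong simplifying assumptions: the "adjoint solve" below is replaced by the analytic gradient of a quadratic misfit, and the network Ansatz is a single tanh unit. Only the composition pattern is the point: an adjoint-style gradient with respect to the field, chained through the network parameterization of that field.

```python
import math

# Illustrative hybrid sketch (not the paper's wave solver): the unknown
# material field m(x) is parameterized by a small network, while the
# gradient dJ/dm comes from an adjoint-style computation. The weights
# are then updated via the chain rule dJ/dw = sum_i (dJ/dm_i)(dm_i/dw).

GRID = [i / 9.0 for i in range(10)]          # 1D spatial grid

def field(x, w):
    """Smooth network-style Ansatz for the material field (one tanh unit)."""
    return math.tanh(w[0] * x + w[1])

def adjoint_gradient(m_vals, m_true):
    """Stand-in for the adjoint solve: gradient of a quadratic misfit."""
    return [2.0 * (m - t) for m, t in zip(m_vals, m_true)]

def train(w, m_true, lr=0.05, steps=500):
    for _ in range(steps):
        m_vals = [field(x, w) for x in GRID]
        dJ_dm = adjoint_gradient(m_vals, m_true)
        # Chain rule through the Ansatz: dm/dw0 = x*sech^2, dm/dw1 = sech^2.
        g0 = sum(g * x * (1 - field(x, w) ** 2) for g, x in zip(dJ_dm, GRID))
        g1 = sum(g * (1 - field(x, w) ** 2) for g, x in zip(dJ_dm, GRID))
        w = [w[0] - lr * g0, w[1] - lr * g1]
    return w

m_true = [math.tanh(2.0 * x - 0.5) for x in GRID]   # synthetic target field
w = train([0.0, 0.0], m_true)
misfit = sum((field(x, w) - t) ** 2 for x, t in zip(GRID, m_true))
```

    The smoothness of the Ansatz is what suppresses the oscillatory artifacts: the field is globally parameterized, so pointwise noise in the gradient cannot imprint itself on the reconstruction.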

    Physics-informed neural networks with hard constraints for inverse design

    Inverse design arises in a variety of areas of engineering, such as acoustics, mechanics, thermal/electronic transport, electromagnetism, and optics. Topology optimization is a major form of inverse design, in which a designed geometry, parameterized by a density function, is optimized to achieve targeted properties. This optimization is challenging because it is very high-dimensional and is usually constrained by partial differential equations (PDEs) and additional inequalities. Here, we propose a new deep learning method -- physics-informed neural networks with hard constraints (hPINNs) -- for solving topology optimization. hPINN leverages the recent development of PINNs for solving PDEs and thus does not rely on any numerical PDE solver. However, all constraints in PINNs are soft constraints, and hence we impose hard constraints by using the penalty method and the augmented Lagrangian method. We demonstrate the effectiveness of hPINN on a holography problem in optics and a Stokes-flow problem in fluids. We achieve the same objective as conventional PDE-constrained optimization methods based on adjoint methods and numerical PDE solvers, but find that the design obtained from hPINN is often simpler and smoother for problems whose solution is not unique. Moreover, the implementation of inverse design with hPINN can be easier than that of conventional methods.
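    The augmented Lagrangian mechanism that turns soft constraints into effectively hard ones can be shown on a toy problem (ours, not the paper's PDE setting): minimize f(x, y) = x^2 + y^2 subject to h(x, y) = x + y - 1 = 0, whose solution is (0.5, 0.5). One minimizes L_A = f + mu*h + (rho/2)*h^2 and updates the multiplier mu after each inner solve, driving the constraint residual to zero.

```python
# Augmented Lagrangian on a toy equality-constrained problem:
#   minimize x^2 + y^2  subject to  x + y = 1.
# Inner loop: gradient descent on L_A for fixed multiplier mu.
# Outer loop: multiplier update mu <- mu + rho * h(x, y).

def solve(rho=10.0, outer=20, inner=200, lr=0.01):
    x, y, mu = 0.0, 0.0, 0.0
    for _ in range(outer):
        for _ in range(inner):                 # inner gradient descent
            h = x + y - 1.0
            gx = 2 * x + mu + rho * h          # dL_A/dx
            gy = 2 * y + mu + rho * h          # dL_A/dy
            x, y = x - lr * gx, y - lr * gy
        mu += rho * (x + y - 1.0)              # multiplier update
    return x, y

x, y = solve()
assert abs(x + y - 1.0) < 1e-4                 # constraint satisfied
```

    Unlike a pure penalty method, the multiplier update achieves exact constraint satisfaction without driving rho to infinity, which is what makes the constraints "hard" in practice.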

    Structure-preserving neural networks

    We develop a method to learn physical systems from data that employs feedforward neural networks and whose predictions comply with the first and second principles of thermodynamics. The method requires a minimum amount of data by enforcing the metriplectic structure of dissipative Hamiltonian systems in the form of the so-called General Equation for the Non-Equilibrium Reversible-Irreversible Coupling, GENERIC (Öttinger and Grmela (1997) [36]). The method does not need to enforce any kind of balance equation, and thus no previous knowledge of the nature of the system is needed. Conservation of energy and dissipation of entropy in the prediction of previously unseen situations arise as a natural by-product of the structure of the method. Examples of the method's performance are shown that comprise conservative as well as dissipative systems, and discrete as well as continuous ones.
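    The GENERIC structure dz/dt = L*grad(E) + M*grad(S) can be illustrated on a hand-built toy system (our construction, not the paper's learned network): a damped oscillator with state z = (q, p, s), energy E = p^2/2 + q^2/2 + s, and entropy S = s. Choosing an antisymmetric L with L*grad(S) = 0 and a positive-semidefinite M with M*grad(E) = 0 makes energy conservation and entropy production structural, exactly the by-product the abstract describes.

```python
# Toy GENERIC system: dz/dt = L grad(E) + M grad(S) with the degeneracy
# conditions L grad(S) = 0 and M grad(E) = 0 built in. The resulting
# dynamics are a damped oscillator whose dissipated mechanical energy
# is routed into the entropy variable s, so total E stays constant.

def generic_rhs(q, p, s, d=0.5):
    # L grad(E) = (p, -q, 0): reversible Hamiltonian part.
    # M grad(S) = (0, -d*p, d*p**2): irreversible dissipative part.
    return p, -q - d * p, d * p ** 2

def simulate(q, p, s, dt=1e-3, steps=5000):
    energies, entropies = [], []
    for _ in range(steps):
        dq, dp, ds = generic_rhs(q, p, s)
        q, p, s = q + dt * dq, p + dt * dp, s + dt * ds
        energies.append(p ** 2 / 2 + q ** 2 / 2 + s)
        entropies.append(s)
    return energies, entropies

E, S = simulate(1.0, 0.0, 0.0)
assert abs(E[-1] - E[0]) < 0.05 * abs(E[0])       # energy ~ conserved
assert all(b >= a for a, b in zip(S, S[1:]))      # entropy nondecreasing
```

    The method in the paper learns L, M, E, and S from data while enforcing these same degeneracy conditions, so the learned predictions inherit the thermodynamic guarantees.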

    A Stochastic Sequential Quadratic Optimization Algorithm for Nonlinear Equality Constrained Optimization with Rank-Deficient Jacobians

    A sequential quadratic optimization algorithm is proposed for solving smooth nonlinear equality constrained optimization problems in which the objective function is defined by the expectation of a stochastic function. The algorithmic structure of the proposed method is based on a step decomposition strategy known in the literature to be widely effective in practice, wherein each search direction is computed as the sum of a normal step (toward linearized feasibility) and a tangential step (toward objective decrease in the null space of the constraint Jacobian). However, the proposed method is unique among others in the literature in that it both allows the use of stochastic objective gradient estimates and possesses convergence guarantees even when the constraint Jacobians may be rank deficient. The results of numerical experiments demonstrate that the algorithm offers superior performance when compared with popular alternatives.
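    The normal/tangential step decomposition can be sketched on a deterministic toy problem (exact rather than stochastic gradients, and a full-rank Jacobian, so this shows only the skeleton the abstract describes): minimize f(x) = (x0 - 2)^2 + x1^2 subject to the single linear constraint x0 + x1 = 1.

```python
# Step decomposition on a toy equality-constrained problem: each iterate
# adds a normal step (minimum-norm move toward linearized feasibility)
# and a tangential step (descent direction projected onto the null
# space of the constraint Jacobian, so feasibility is preserved).

A = [1.0, 1.0]        # constraint Jacobian (one row)
B = 1.0               # right-hand side: x0 + x1 = 1

def grad_f(x):
    return [2.0 * (x[0] - 2.0), 2.0 * x[1]]

def iterate(x, alpha=0.1):
    aat = A[0] ** 2 + A[1] ** 2
    resid = B - (A[0] * x[0] + A[1] * x[1])
    # Normal step: minimum-norm correction toward feasibility.
    normal = [A[0] * resid / aat, A[1] * resid / aat]
    g = grad_f(x)
    ag = (A[0] * g[0] + A[1] * g[1]) / aat
    # Tangential step: negative gradient projected onto null(A).
    tangential = [-(g[0] - A[0] * ag), -(g[1] - A[1] * ag)]
    return [x[0] + normal[0] + alpha * tangential[0],
            x[1] + normal[1] + alpha * tangential[1]]

x = [0.0, 0.0]
for _ in range(200):
    x = iterate(x)
# Converges to the constrained minimizer (1.5, -0.5).
assert abs(x[0] + x[1] - 1.0) < 1e-8
```

    The paper's contribution lies in what this sketch omits: the gradient g is replaced by a stochastic estimate, and the normal step is computed so the guarantees survive even when the Jacobian is rank deficient.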