
    Accelerated primal-dual methods with enlarged step sizes and operator learning for nonsmooth optimal control problems

    We consider a general class of nonsmooth optimal control problems with partial differential equation (PDE) constraints, which are very challenging due to their nonsmooth objective functionals and the high-dimensional, ill-conditioned systems that arise after discretization. We focus on the application of a primal-dual method, in which different types of variables can be treated individually, so that the main computation at each iteration reduces to solving two PDEs. Our goal is to accelerate the primal-dual method with either enlarged step sizes or operator learning techniques. For the accelerated primal-dual method with enlarged step sizes, convergence can still be proved rigorously, while the method numerically accelerates the original primal-dual scheme in a simple and universal way. For the operator learning acceleration, we construct deep neural network surrogate models for the PDEs involved. Once a neural operator is learned, solving a PDE requires only a forward pass of the network, and the computational cost is thus substantially reduced. The accelerated primal-dual method with operator learning is mesh-free, numerically efficient, and scalable to different types of PDEs. The effectiveness of both acceleration techniques is validated by preliminary numerical results.
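
    To make the structure of such a method concrete, the following is a minimal sketch of a Chambolle-Pock-type primal-dual iteration for a problem of the form min_u g(u) + f(Ku). All names here (K, Kt, prox_f_star, prox_g, sigma, tau) are illustrative placeholders, not the paper's implementation; applying K and Kt stands in for the two PDE solves per iteration mentioned in the abstract, and the enlarged-step-size variant would relax the classical restriction sigma * tau * ||K||^2 < 1.

```python
import numpy as np

def primal_dual(K, Kt, prox_f_star, prox_g, u0, p0, sigma, tau, n_iter=100):
    """Generic primal-dual (Chambolle-Pock style) iteration for
    min_u g(u) + f(K u). In the PDE-constrained setting, applying K
    and Kt corresponds to solving the state and adjoint PDEs, which
    is the dominant cost per iteration."""
    u, p = u0.copy(), p0.copy()
    u_bar = u.copy()
    for _ in range(n_iter):
        # dual update: one "state-like" PDE solve hidden inside K
        p = prox_f_star(p + sigma * K(u_bar), sigma)
        # primal update: one "adjoint-like" PDE solve hidden inside Kt
        u_new = prox_g(u - tau * Kt(p), tau)
        # extrapolation step
        u_bar = 2.0 * u_new - u
        u = u_new
    return u, p
```

    In the operator learning variant described in the abstract, K (and Kt) would be replaced by trained neural surrogates, so each application costs only a forward pass of the network.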

    A numerical approach to the optimal control of thermally convective flows

    The optimal control of thermally convective flows is usually modeled as an optimization problem constrained by the Boussinesq equations, which couple the Navier-Stokes equations with an advection-diffusion equation. This optimal control problem is challenging from both the theoretical and the algorithmic perspective. For example, the nonlinearity and coupling of fluid flow and energy transport prevent the direct application of gradient-type algorithms in practice. In this paper, we propose an efficient numerical method for this problem based on operator splitting and optimization techniques. In particular, we employ the Marchuk-Yanenko method, combined with an L^2-projection, for the time discretization of the Boussinesq equations, so that they decompose into simpler linear equations and the corresponding adjoint system can be derived without difficulty. Consequently, computing a gradient at each iteration requires solving only four linear advection-diffusion equations and two degenerate Stokes equations per time step. We then apply the Bercovier-Pironneau finite element method for space discretization and design a BFGS-type algorithm for the fully discretized optimal control problem. Exploiting the structure of the problem, we design an efficient strategy for choosing step sizes in the BFGS iterations. The efficiency of the numerical approach is validated by preliminary numerical experiments.
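
    As a rough illustration of the splitting structure described above, the sketch below shows how one gradient evaluation could be organized: a forward sweep and an adjoint sweep, each solving two linear advection-diffusion subproblems and one degenerate Stokes subproblem per time step (four advection-diffusion and two Stokes solves in total, as in the abstract). The `solvers` object and all of its methods are hypothetical placeholders, not the paper's code.

```python
def compute_gradient(control, solvers, n_steps, dt):
    """Sketch of one gradient evaluation under operator splitting.
    Per time step, the forward sweep solves two linear advection-diffusion
    subproblems (velocity and temperature) and one degenerate Stokes
    subproblem; the backward sweep solves their adjoint counterparts.
    Every solver callable is an assumed placeholder."""
    states = []
    state = solvers.initial_state()
    for n in range(n_steps):                       # forward sweep
        state = solvers.advect_diffuse_velocity(state, control, dt)
        state = solvers.advect_diffuse_temperature(state, control, dt)
        state = solvers.degenerate_stokes(state, dt)
        states.append(state)
    grad = solvers.zero_gradient()
    adj = solvers.terminal_adjoint(states[-1])
    for n in reversed(range(n_steps)):             # backward (adjoint) sweep
        adj = solvers.adjoint_stokes(adj, states[n], dt)
        adj = solvers.adjoint_temperature(adj, states[n], dt)
        adj = solvers.adjoint_velocity(adj, states[n], dt)
        grad = solvers.accumulate(grad, adj, control, dt)
    return grad
```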

    The ADMM-PINNs Algorithmic Framework for Nonsmooth PDE-Constrained Optimization: A Deep Learning Approach

    We study the combination of the alternating direction method of multipliers (ADMM) with physics-informed neural networks (PINNs) for a general class of nonsmooth partial differential equation (PDE)-constrained optimization problems, where additional regularization can be employed to impose constraints on the control or design variables. The resulting ADMM-PINNs algorithmic framework substantially enlarges the applicable range of PINNs to nonsmooth PDE-constrained optimization problems. Applying the ADMM decouples the PDE constraints from the nonsmooth regularization terms across iterations. Accordingly, at each iteration, one of the resulting subproblems is a smooth PDE-constrained optimization problem that can be efficiently solved by PINNs, while the other is a simple nonsmooth optimization problem that usually has a closed-form solution or can be efficiently solved by standard optimization algorithms or pre-trained neural networks. The ADMM-PINNs algorithmic framework does not require solving PDEs repeatedly, and it is mesh-free, easy to implement, and scalable to different PDE settings. We validate the efficiency of the framework on several prototype applications, including inverse potential problems, source identification in elliptic equations, control-constrained optimal control of the Burgers equation, and sparse optimal control of parabolic equations.
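
    The following is a minimal sketch of the ADMM outer loop in such a framework, written for the splitting min_u J(u) + R(z) subject to u = z. The callables solve_smooth_subproblem and prox_R are assumed placeholders: the first stands for training a PINN on the smooth PDE-constrained subproblem, and the second for the (often closed-form) proximal map of the nonsmooth term.

```python
import numpy as np

def admm_pinns(solve_smooth_subproblem, prox_R, u0, beta, n_iter=20):
    """Sketch of an ADMM-PINNs-style outer loop: the smooth
    PDE-constrained subproblem in u is handed to a PINN solver, while
    the nonsmooth regularizer R is handled through its proximal map."""
    u = u0.copy()
    z = u0.copy()
    lam = np.zeros_like(u0)          # scaled dual variable
    for _ in range(n_iter):
        # u-step: smooth PDE-constrained problem, e.g. minimizing
        # J(u) + beta/2 * ||u - z + lam||^2 by training a PINN
        u = solve_smooth_subproblem(z - lam, beta)
        # z-step: proximal map of the nonsmooth term, often closed form
        z = prox_R(u + lam, 1.0 / beta)
        # dual update
        lam = lam + u - z
    return u

def soft_threshold(v, t):
    """Closed-form z-step for an L1 regularizer R(z) = alpha * ||z||_1:
    prox_R(v, t) = soft_threshold(v, alpha * t)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
```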

    The Hard-Constraint PINNs for Interface Optimal Control Problems

    We show that physics-informed neural networks (PINNs), combined with recently developed discontinuity-capturing neural networks, can be applied to solve optimal control problems subject to partial differential equations (PDEs) with interfaces and control constraints. The resulting algorithm is mesh-free and scalable to different PDEs, and it enforces the control constraints rigorously. In standard PINNs, the boundary and interface conditions, as well as the PDEs themselves, are treated as soft constraints by lumping them into a weighted loss function; they must therefore be learned simultaneously, and there is no guarantee that the boundary and interface conditions are satisfied exactly. This causes difficulties in tuning the weights of the loss function and in training the neural networks. To tackle these difficulties and guarantee numerical accuracy, we propose imposing the boundary and interface conditions as hard constraints in PINNs through a novel neural network architecture. The resulting hard-constraint PINNs guarantee that both the boundary and interface conditions are satisfied exactly, and these conditions are decoupled from the learning of the PDEs. The efficiency of the approach is validated on elliptic and parabolic interface optimal control problems.
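
    A minimal sketch of the hard-constraint idea for a Dirichlet boundary condition: compose the trainable network N with known functions g and d so that u(x) = g(x) + d(x) N(x) satisfies u = g on the boundary by construction, since d vanishes there. The particular ansatz and the PyTorch architecture below are illustrative assumptions; the paper's architecture additionally builds in interface conditions via discontinuity-capturing networks.

```python
import torch
import torch.nn as nn

class HardConstraintNet(nn.Module):
    """Network whose output satisfies a Dirichlet boundary condition
    exactly: u(x) = g(x) + d(x) * N(x), where g extends the boundary
    data into the domain and d vanishes on the boundary. Both g and d
    are user-supplied callables (illustrative assumptions)."""
    def __init__(self, g, d, dim=2, width=64, depth=4):
        super().__init__()
        self.g, self.d = g, d
        layers = [nn.Linear(dim, width), nn.Tanh()]
        for _ in range(depth - 1):
            layers += [nn.Linear(width, width), nn.Tanh()]
        layers += [nn.Linear(width, 1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # d(x) = 0 on the boundary, so u(x) = g(x) there exactly,
        # independently of the trainable part self.net
        return self.g(x) + self.d(x) * self.net(x)
```

    For example, on the unit interval with homogeneous boundary data u(0) = u(1) = 0, one could take d(x) = x(1 - x) and g = 0, so the boundary condition holds regardless of the network weights.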