9 research outputs found

    Optimal control of thermally coupled Navier Stokes equations

    The optimal boundary temperature control of the stationary thermally coupled incompressible Navier-Stokes equations is considered. Well-posedness, existence of the optimal control, and a necessary optimality condition are obtained. Optimization algorithms based on the augmented Lagrangian method with a second-order update are discussed. A test example motivated by the control of transport processes in a high pressure vapor transport (HPVT) reactor is presented to demonstrate the applicability of our theoretical results and the proposed algorithm.
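    The augmented Lagrangian machinery the abstract refers to can be illustrated on a finite-dimensional toy problem. The following sketch uses the standard first-order multiplier update (the paper discusses a second-order variant, which this does not attempt); the objective, constraint, step sizes, and iteration counts are all illustrative choices, not taken from the paper.

```python
# Augmented Lagrangian method (first-order multiplier update) on a toy
# equality-constrained problem:
#   minimize  f(x) = x1^2 + x2^2   subject to  c(x) = x1 + x2 - 1 = 0
# Known solution: x = (0.5, 0.5) with multiplier lam = -1.

def f_grad(x):
    return [2 * x[0], 2 * x[1]]

def c(x):
    return x[0] + x[1] - 1.0

def augmented_lagrangian(x, lam=0.0, mu=10.0, outer_iters=20):
    for _ in range(outer_iters):
        # Inner loop: approximately minimize
        #   L(x, lam) = f(x) + lam * c(x) + (mu / 2) * c(x)^2
        # by plain gradient descent.
        for _ in range(200):
            g = f_grad(x)
            cx = c(x)
            # gradient of L w.r.t. x; dc/dx = (1, 1) for this constraint
            gL = [g[i] + (lam + mu * cx) for i in range(2)]
            x = [x[i] - 0.01 * gL[i] for i in range(2)]
        # first-order multiplier update
        lam += mu * c(x)
    return x, lam

x, lam = augmented_lagrangian([0.0, 0.0])
```

Unlike the pure penalty method, the multiplier update lets the constraint violation vanish without driving `mu` to infinity, which keeps the inner subproblems well conditioned.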

    Learning solutions to some toy constrained optimization problems in infinite dimensional Hilbert spaces

    In this work we present deep learning implementations of two popular theoretical constrained optimization algorithms in infinite dimensional Hilbert spaces, namely, the penalty and the augmented Lagrangian methods. We test these algorithms on toy problems originating in either the calculus of variations or physics. We demonstrate that both methods produce decent approximations for the test problems and are comparable across the error measures considered. Because the Lagrange multiplier update rule is typically cheaper to compute than solving the penalty method's subproblems, we achieve significant speedups in cases where the output of the constraint function is itself a function. (Comment: 16 pages, 10 figures)
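    The penalty method the abstract compares against can be sketched on the same kind of toy problem. Each subproblem re-minimizes the objective plus a quadratic penalty with a growing parameter `mu`; the worsening conditioning as `mu` grows is the cost that the cheaper multiplier update of the augmented Lagrangian method avoids. All constants below are illustrative, not from the paper.

```python
# Quadratic penalty method on a toy equality-constrained problem:
#   minimize  f(x) = x1^2 + x2^2   subject to  c(x) = x1 + x2 - 1 = 0
# The unconstrained subproblem  min f(x) + (mu/2) c(x)^2  is re-solved
# with increasing mu, warm-starting each solve from the previous iterate.

def c(x):
    return x[0] + x[1] - 1.0

def penalty_method(x, mu=1.0, outer_iters=12):
    for _ in range(outer_iters):
        # shrink the gradient step as the subproblem conditioning worsens
        step = 0.5 / (1.0 + mu)
        for _ in range(500):
            cx = c(x)
            # gradient of f + (mu/2) c^2, with dc/dx = (1, 1)
            g = [2 * x[i] + mu * cx for i in range(2)]
            x = [x[i] - step * g[i] for i in range(2)]
        mu *= 2.0
    return x

x = penalty_method([0.0, 0.0])
```

The iterates only satisfy the constraint in the limit `mu → ∞`; here the residual shrinks like `1/mu`, so twelve doublings already bring the solution close to the true minimizer `(0.5, 0.5)`.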

    Gradient-Based Estimation of Uncertain Parameters for Elliptic Partial Differential Equations

    This paper addresses the estimation of uncertain distributed diffusion coefficients in elliptic systems from noisy measurements of the model output. We formulate the parameter identification problem as an infinite dimensional constrained optimization problem, for which we establish the existence of minimizers as well as first order necessary conditions. A spectral approximation of the uncertain observations allows us to approximate the infinite dimensional problem by a smooth, albeit high dimensional, deterministic optimization problem, the so-called finite noise problem, in the space of functions with bounded mixed derivatives. We prove convergence of finite noise minimizers to the appropriate infinite dimensional ones, and devise a stochastic augmented Lagrangian method to locate them numerically. Lastly, we illustrate our method with three numerical examples.
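    A drastically simplified version of this identification problem fits in a few lines: a 1D elliptic model with a single scalar diffusion coefficient, fitted to (noise-free) synthetic observations by gradient descent on the least-squares misfit. The paper treats uncertain *distributed* coefficients with a stochastic augmented Lagrangian method; none of that is attempted here, and the grid size, step size, and "true" coefficient are illustrative choices.

```python
# Toy gradient-based estimation of a constant diffusion coefficient a in
#   -(a u')' = 1  on (0,1),  u(0) = u(1) = 0,
# from observations of u, via gradient descent on
#   J(a) = 0.5 * sum_i (u(a)_i - d_i)^2.

def solve_elliptic(a, n=50):
    # Central finite differences + Thomas algorithm for the tridiagonal
    # system  a * (2u_i - u_{i-1} - u_{i+1}) / h^2 = 1  at interior nodes.
    h = 1.0 / n
    m = n - 1
    rhs = [h * h / a] * m
    diag = [2.0] * m
    off = -1.0
    for i in range(1, m):                      # forward elimination
        w = off / diag[i - 1]
        diag[i] -= w * off
        rhs[i] -= w * rhs[i - 1]
    u = [0.0] * m
    u[-1] = rhs[-1] / diag[-1]
    for i in range(m - 2, -1, -1):             # back substitution
        u[i] = (rhs[i] - off * u[i + 1]) / diag[i]
    return u

# synthetic observations generated from an illustrative "true" coefficient
a_true = 2.0
data = solve_elliptic(a_true)

# by linearity, u(a) = u(1) / a, so the misfit gradient is available in
# closed form:  dJ/da = sum_i (u1_i/a - d_i) * (-u1_i / a^2)
u1 = solve_elliptic(1.0)
a = 1.5                                        # initial guess
for _ in range(2000):
    grad = sum((u1[i] / a - data[i]) * (-u1[i] / a ** 2)
               for i in range(len(u1)))
    a -= 30.0 * grad
```

For distributed (non-constant) coefficients the closed-form gradient is no longer available, and one would compute it with an adjoint solve instead; the constrained formulation in the paper handles exactly that setting.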