
    Solving Optimal Control Problem of Monodomain Model Using Hybrid Conjugate Gradient Methods

    We present numerical solutions for a PDE-constrained optimization problem arising in cardiac electrophysiology: the optimal control problem of the monodomain model. This is a nonlinear optimization problem constrained by the monodomain model, which consists of a parabolic partial differential equation coupled to a system of nonlinear ordinary differential equations and is widely used for simulating cardiac electrical activity. Our control objective is to dampen the excitation wavefront using an optimally applied extracellular current. Two hybrid conjugate gradient methods are employed to compute the optimal applied extracellular current, namely the Hestenes-Stiefel-Dai-Yuan (HS-DY) method and the Liu-Storey-Conjugate-Descent (LS-CD) method. Our experimental results show that the excitation wavefronts are successfully dampened when these methods are used, and that the hybrid conjugate gradient methods are superior to the classical conjugate gradient methods when the Armijo line search is used.
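    The hybrid rules named in the abstract combine two classical update coefficients, e.g. beta = max(0, min(beta_HS, beta_DY)) and beta = max(0, min(beta_LS, beta_CD)). Below is a minimal sketch of such a hybrid nonlinear conjugate gradient iteration with an Armijo backtracking line search, applied to a toy objective (the Rosenbrock function) as a stand-in for the reduced monodomain control objective; it is illustrative only and not the authors' solver.

```python
import numpy as np

def armijo(f, x, fx, g, d, s=1.0, rho=0.5, c=1e-4, max_backtracks=50):
    """Backtracking Armijo line search along the search direction d."""
    alpha = s
    gtd = g @ d
    for _ in range(max_backtracks):
        if f(x + alpha * d) <= fx + c * alpha * gtd:
            break
        alpha *= rho
    return alpha

def hybrid_cg(f, grad, x0, variant="HS-DY", tol=1e-6, max_iter=2000):
    """Nonlinear CG with a hybrid beta (HS-DY or LS-CD) and Armijo steps."""
    x = x0.astype(float).copy()
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = armijo(f, x, f(x), g, d)
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        if variant == "HS-DY":
            denom = d @ y        # hybrid of Hestenes-Stiefel and Dai-Yuan
        else:
            denom = -(g @ d)     # "LS-CD": hybrid of Liu-Storey and Conjugate-Descent
        if abs(denom) < 1e-14:
            beta = 0.0
        else:
            beta = max(0.0, min((g_new @ y) / denom, (g_new @ g_new) / denom))
        d = -g_new + beta * d
        if g_new @ d >= 0:       # safeguard: restart with steepest descent
            d = -g_new
        x, g = x_new, g_new
    return x

# Toy usage on the Rosenbrock function (a stand-in for the reduced control objective).
f = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
grad = lambda z: np.array([-2 * (1 - z[0]) - 400 * z[0] * (z[1] - z[0]**2),
                           200 * (z[1] - z[0]**2)])
print(hybrid_cg(f, grad, np.array([-1.2, 1.0])))
```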

    The ADMM-PINNs Algorithmic Framework for Nonsmooth PDE-Constrained Optimization: A Deep Learning Approach

    We study the combination of the alternating direction method of multipliers (ADMM) with physics-informed neural networks (PINNs) for a general class of nonsmooth partial differential equation (PDE)-constrained optimization problems, where additional regularization can be employed for constraints on the control or design variables. The resulting ADMM-PINNs algorithmic framework substantially enlarges the applicable range of PINNs to nonsmooth PDE-constrained optimization problems. The ADMM makes it possible to decouple the PDE constraints from the nonsmooth regularization terms across iterations. Accordingly, at each iteration, one of the resulting subproblems is a smooth PDE-constrained optimization problem that can be efficiently solved by PINNs, and the other is a simple nonsmooth optimization problem that usually has a closed-form solution or can be efficiently solved by standard optimization algorithms or pre-trained neural networks. The ADMM-PINNs framework does not require solving PDEs repeatedly; it is mesh-free, easy to implement, and scalable to different PDE settings. We validate its efficiency on several prototype applications, including inverse potential problems, source identification in elliptic equations, control-constrained optimal control of the Burgers equation, and sparse optimal control of parabolic equations.
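    A minimal sketch of the splitting idea described above: an ADMM loop whose smooth subproblem (solved by PINNs in the paper) is replaced here by a toy least-squares solve, and whose nonsmooth subproblem is an l1 regularizer handled in closed form by soft-thresholding. The functions and toy data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau*||.||_1: the closed-form nonsmooth subproblem."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def smooth_subproblem(z, lam, rho, A, b):
    """Stand-in for the PINN-solved smooth subproblem:
    argmin_u 0.5*||A u - b||^2 + (rho/2)*||u - z + lam/rho||^2.
    A toy least-squares solve replaces the PINN training loop."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + rho * np.eye(n), A.T @ b + rho * z - lam)

def admm(A, b, tau=0.1, rho=1.0, iters=200):
    """ADMM on the split  min_u 0.5*||A u - b||^2 + tau*||u||_1  via u = z."""
    n = A.shape[1]
    z = np.zeros(n)
    lam = np.zeros(n)
    for _ in range(iters):
        u = smooth_subproblem(z, lam, rho, A, b)       # smooth, "PDE-constrained" part
        z = soft_threshold(u + lam / rho, tau / rho)   # nonsmooth regularizer, closed form
        lam = lam + rho * (u - z)                      # dual (multiplier) update
    return z

# Toy usage: sparse recovery as a stand-in for a sparse optimal control problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
u_true = np.zeros(10); u_true[[2, 7]] = [1.0, -0.5]
print(np.round(admm(A, A @ u_true), 3))
```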

    Optimal solvers for PDE-Constrained Optimization

    Optimization problems with constraints that require the solution of a partial differential equation arise widely in many areas of science and engineering, in particular in problems of design. The solution of such PDE-constrained optimization problems is usually a major computational task. Here we consider simple problems of this type: distributed control problems in which the 2- and 3-dimensional Poisson problem is the PDE. The large linear systems that result from discretization and need to be solved are of saddle-point type. We introduce two optimal preconditioners for these systems which lead to convergence of symmetric Krylov subspace iterative methods in a number of iterations that does not increase with the dimension of the discrete problem. These preconditioners are block structured and involve standard multigrid cycles. The optimality of the preconditioned iterative solver is proved theoretically and verified computationally in several test cases. The theoretical proof indicates that these approaches may have much broader applicability to other partial differential equations.
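    To make the structure concrete, here is a small sketch of a block-diagonally preconditioned MINRES solve for a generic symmetric saddle-point system [[A, B^T], [B, 0]]. The matrices are a toy 1-D Poisson block and an averaging constraint block, and direct sparse factorizations stand in for the multigrid cycles used in the paper; none of this is the paper's actual discretization.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")   # SPD "stiffness" block
B = sp.kron(sp.eye(n // 2), np.array([[0.5, 0.5]]), format="csc")   # full-rank constraint block
K = sp.bmat([[A, B.T], [B, None]], format="csc")                    # symmetric saddle-point matrix
rhs = np.ones(K.shape[0])

# Block-diagonal preconditioner diag(A_hat, S_hat). In the paper A_hat is a multigrid
# cycle and S_hat a cheap Schur-complement approximation; here direct factorizations
# are used as stand-ins to keep the sketch short.
A_lu = spla.splu(A)
S = B @ A_lu.solve(B.T.toarray())      # Schur complement B A^{-1} B^T (small, dense)

def apply_prec(r):
    r = np.asarray(r).ravel()
    return np.concatenate([A_lu.solve(r[:n]), np.linalg.solve(S, r[n:])])

M = spla.LinearOperator(K.shape, matvec=apply_prec, dtype=np.float64)
x, info = spla.minres(K, rhs, M=M)
print("MINRES converged:", info == 0)
```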

    Modeling and optimal control of multiphysics problems using the finite element method

    Interdisciplinary research such as constrained optimization of partial differential equations (PDEs) for trajectory planning or feedback algorithms is an important topic. Recent advances in high-performance computing and progress in modeling techniques have made it feasible to investigate multiphysics systems in the context of optimization problems. In this thesis, a conductive heat transfer example is developed and techniques from PDE-constrained optimization are used to solve trajectory planning problems. In addition, a laboratory experiment is designed to test the algorithms on a real-world application. Moreover, an extensive investigation of coupling techniques for equations arising in convective heat transfer is given, providing a basis for optimal control problems in heating, ventilation, and air conditioning systems. Furthermore, a novel approach using a flatness-based method for optimal control is derived. This concept allows input and state constraints in trajectory planning problems for partial differential equations, combined with an efficient computation. The stated method is also extended to a model predictive control closed-loop formulation. For illustration purposes, all stated problems include numerical examples.

    Preconditioning iterative methods for the optimal control of the Stokes equation

    Solving problems regarding the optimal control of partial differential equations (PDEs) – also known as PDE-constrained optimization – is a frontier area of numerical analysis. Of particular interest is the problem of flow control, where one would like to effect some desired flow by exerting, for example, an external force. The bottleneck in many current algorithms is the solution of the optimality system – a system of equations in saddle-point form that is usually very large and ill-conditioned. In this paper we describe two preconditioners – a block-diagonal preconditioner for the minimal residual method and a block lower-triangular preconditioner for a non-standard conjugate gradient method – which can be effective when applied to such problems where the PDEs are the Stokes equations. We consider only distributed control here, although other problems – for example, boundary control – could be treated in the same way. We give numerical results and compare them with those obtained by solving the equivalent forward problem using similar techniques.
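    As a companion to the block-diagonal example given earlier, the following sketch applies a block lower-triangular preconditioner [[A, 0], [B, -S]] to the same kind of toy saddle-point system. Because such a preconditioner is nonsymmetric, the sketch uses GMRES as a simple stand-in for the non-standard conjugate gradient method used in the paper; the matrices are illustrative, not a Stokes discretization.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")   # SPD (1,1) block
B = sp.kron(sp.eye(n // 2), np.array([[0.5, 0.5]]), format="csc")   # constraint block
K = sp.bmat([[A, B.T], [B, None]], format="csc")
rhs = np.ones(K.shape[0])

A_lu = spla.splu(A)
S = B @ A_lu.solve(B.T.toarray())      # Schur complement B A^{-1} B^T

def apply_block_triangular(r):
    """Apply P^{-1} for P = [[A, 0], [B, -S]] by forward block substitution."""
    r = np.asarray(r).ravel()
    y1 = A_lu.solve(r[:n])
    y2 = np.linalg.solve(S, B @ y1 - r[n:])
    return np.concatenate([y1, y2])

P = spla.LinearOperator(K.shape, matvec=apply_block_triangular, dtype=np.float64)
x, info = spla.gmres(K, rhs, M=P)      # GMRES stands in for the non-standard CG
print("GMRES converged:", info == 0)
```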

    Reduced Order Modeling for Nonlinear PDE-constrained Optimization using Neural Networks

    Nonlinear model predictive control (NMPC) often requires real-time solution of optimization problems. However, in cases where the mathematical model is of high dimension in the solution space, e.g. for solutions of partial differential equations (PDEs), black-box optimizers are rarely sufficient to attain the required online computational speed. In such cases one must resort to customized solvers. This paper presents a new solver for nonlinear time-dependent PDE-constrained optimization problems. It is composed of a sequential quadratic programming (SQP) scheme to solve the PDE-constrained problem in an offline phase, a proper orthogonal decomposition (POD) approach to identify a lower-dimensional solution space, and a neural network (NN) for fast online evaluations. The proposed method is showcased on a regularized least-squares optimal control problem for the viscous Burgers' equation. It is concluded that significant online speed-up is achieved, compared to conventional methods using SQP and finite elements, at the cost of a prolonged offline phase and reduced accuracy. (Accepted for publication at the 58th IEEE Conference on Decision and Control, Nice, France, 11-13 December 2019, https://cdc2019.ieeecss.org)
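    A minimal sketch of the POD step described above: snapshots of a travelling pulse stand in for Burgers' solutions gathered during the offline SQP phase, and the reduced basis is obtained from an SVD of the snapshot matrix. In the paper a neural network is then trained to map inputs to the reduced coordinates for fast online evaluation; that regression step is omitted here.

```python
import numpy as np

# Toy snapshot matrix: each column is a sampled PDE state (a travelling pulse),
# standing in for Burgers' solutions collected during the offline SQP phase.
x = np.linspace(0.0, 1.0, 200)
times = np.linspace(0.0, 1.0, 50)
snapshots = np.array([np.exp(-100.0 * (x - 0.2 - 0.6 * t) ** 2) for t in times]).T  # (200, 50)

# POD: the leading left singular vectors of the snapshot matrix form an orthonormal
# reduced basis; keep enough modes to capture 99.9% of the snapshot energy.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1
basis = U[:, :r]                                 # (200, r) reduced basis

# Reduced coordinates and reconstruction error for one snapshot.
a = basis.T @ snapshots[:, 25]
reconstruction = basis @ a
print(r, np.linalg.norm(reconstruction - snapshots[:, 25]))
```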

    A stochastic gradient method for a class of nonlinear PDE-constrained optimal control problems under uncertainty

    The study of optimal control problems under uncertainty plays an important role in scientific numerical simulations. This class of optimization problems is widely used in engineering, biology, and finance. In this paper, a stochastic gradient method is proposed for the numerical solution of a nonconvex stochastic optimization problem on a Hilbert space. We show that, under suitable assumptions, strong or weak accumulation points of the iterates produced by the method converge almost surely to stationary points of the original optimization problem. Measurability, local convergence, and convergence rates of a stationarity measure are handled, filling a gap for applications to nonconvex infinite-dimensional stochastic optimization problems. The method is demonstrated on an optimal control problem constrained by a class of elliptic semilinear partial differential equations (PDEs) under uncertainty.
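    A minimal sketch of a projected stochastic gradient iteration of the kind the abstract describes: each step draws one sample of the uncertainty, computes a sampled gradient (here a toy random quadratic stands in for the PDE and adjoint solves), takes a diminishing Robbins-Monro step, and projects back onto box control constraints. All names and data are illustrative assumptions, not the paper's problem setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
b = np.ones(n)

def sample_gradient(u):
    """One stochastic gradient sample: a random SPD 'state operator' stands in for
    solving the semilinear PDE and its adjoint at a single realization of the uncertainty."""
    Q = rng.standard_normal((n, n)) / np.sqrt(n)
    A = Q @ Q.T + np.eye(n)              # random SPD operator for this sample
    return A @ u - b                     # gradient of 0.5*u^T A u - b^T u at this sample

def projected_sgd(u0, lower=-1.0, upper=1.0, iters=2000):
    """Projected stochastic gradient method with diminishing (Robbins-Monro) steps."""
    u = u0.copy()
    for k in range(1, iters + 1):
        step = 1.0 / (10.0 + k)          # diminishing step sizes
        u = u - step * sample_gradient(u)
        u = np.clip(u, lower, upper)     # projection onto the box control constraints
    return u

u_opt = projected_sgd(np.zeros(n))
print(u_opt[:5])
```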

    Optimization with learning-informed differential equation constraints and its applications

    Inspired by applications in optimal control of semilinear elliptic partial differential equations and physics-integrated imaging, differential-equation-constrained optimization problems with constituents that are only accessible through data-driven techniques are studied. A particular focus is on the analysis of, and numerical methods for, problems with machine-learned components. For a rather general context, an error analysis is provided, and particular properties resulting from artificial-neural-network-based approximations are addressed. Moreover, for each of the two motivating applications, analytical details are presented and numerical results are provided.