Multigrid Methods for Elliptic Optimal Control Problems
In this dissertation we study multigrid methods for linear-quadratic elliptic distributed optimal control problems.
For optimal control problems constrained by general second order elliptic partial differential equations, we design and analyze a finite element method based on a saddle point formulation. We construct a multigrid algorithm for the discrete problem and show that it is uniformly convergent in the energy norm on convex domains. Moreover, the contraction number decays at the optimal rate with respect to the number of smoothing steps. We also prove that the convergence is robust with respect to a regularization parameter. The robust convergence of V-cycle and W-cycle algorithms on general domains is demonstrated by numerical results.
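The multigrid machinery behind such cycle algorithms can be illustrated with a minimal geometric V-cycle for the 1D Poisson equation — a deliberately simplified stand-in for the saddle point systems analyzed in the dissertation. The damped Jacobi smoother, full-weighting restriction, and linear interpolation below are standard textbook choices, not taken from the source:

```python
import numpy as np

def smooth(u, f, h, nu, omega=2/3):
    """nu sweeps of damped Jacobi for the 1D stencil (-1, 2, -1)/h^2."""
    for _ in range(nu):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def restrict(r):
    """Full weighting: fine grid with N+1 points -> coarse grid with N/2+1."""
    return np.concatenate(([0.0],
                           0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2],
                           [0.0]))

def prolong(uc):
    """Linear interpolation from the coarse grid back to the fine grid."""
    uf = np.zeros(2 * (len(uc) - 1) + 1)
    uf[::2] = uc
    uf[1::2] = 0.5 * (uc[:-1] + uc[1:])
    return uf

def v_cycle(u, f, h, nu=2):
    if len(u) <= 3:                     # coarsest grid: one unknown, solve exactly
        u[1] = h**2 * f[1] / 2
        return u
    u = smooth(u, f, h, nu)             # pre-smoothing
    rc = restrict(residual(u, f, h))    # restrict the residual
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h, nu)  # coarse-grid correction
    u += prolong(ec)
    return smooth(u, f, h, nu)          # post-smoothing
```

With a few pre- and post-smoothing steps, each cycle reduces the algebraic residual by a grid-independent factor; this grid- and parameter-independent contraction is the kind of uniform convergence the abstract refers to.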
For optimal control problems constrained by symmetric second order elliptic partial differential equations together with pointwise constraints on the state variable, we design and analyze symmetric positive definite finite element methods based on a reformulation of the optimal control problem as a fourth order variational inequality. We develop a multigrid algorithm for the reduced systems that appear in a primal-dual active set method for the discrete variational inequalities. The performance of the algorithm is demonstrated by numerical results.
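The primal-dual active set iteration whose reduced systems the abstract's multigrid targets can be sketched on a generic bound-constrained quadratic program. The matrix, obstacle, and dense solve below are illustrative placeholders: the actual method works on a fourth order discrete variational inequality and replaces the dense solve on the inactive set with multigrid:

```python
import numpy as np

def pdas(A, b, psi, c=1.0, max_iter=50):
    """Primal-dual active set method for min 0.5 x^T A x - b^T x  s.t. x <= psi.
    KKT system: A x + lam = b, lam >= 0, x <= psi, lam_i (psi_i - x_i) = 0."""
    n = len(b)
    x, lam = np.zeros(n), np.zeros(n)
    for _ in range(max_iter):
        active = lam + c * (x - psi) > 0          # predicted contact set
        inact = ~active
        x_new, lam_new = np.empty(n), np.zeros(n)
        x_new[active] = psi[active]               # constraint holds with equality
        # reduced system on the inactive set (the abstract solves these by multigrid)
        x_new[inact] = np.linalg.solve(A[np.ix_(inact, inact)],
                                       b[inact] - A[np.ix_(inact, active)] @ psi[active])
        lam_new[active] = (b - A @ x_new)[active]  # multiplier from the active rows
        if np.array_equal(active, lam_new + c * (x_new - psi) > 0):
            return x_new, lam_new                  # active set has settled
        x, lam = x_new, lam_new
    return x, lam
```

For discretized obstacle-type problems with M-matrix structure, the active set typically settles after a handful of iterations, each requiring one reduced linear solve.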
One shot methods for optimal control of distributed parameter systems 1: Finite dimensional control
The efficient numerical treatment of optimal control problems governed by elliptic partial differential equations (PDEs) and systems of elliptic PDEs, where the control is finite dimensional, is discussed. Both distributed control and boundary control cases are considered. The main characteristic of the new methods is that they are designed to solve the full optimization problem directly, rather than accelerating a descent method with an efficient multigrid solver for the equations involved. The methods use the adjoint state to achieve efficient smoothing and a robust coarsening strategy. The main idea is to treat the control variables on appropriate scales: control variables that correspond to smooth functions are solved for on coarse grids, depending on the smoothness of these functions. The control problems are solved at roughly two to three times the cost of solving the constraint equations (by a multigrid solver). Numerical examples demonstrate the effectiveness of the proposed method on distributed control, pointwise control, and boundary control problems.
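The adjoint-state computation that such methods build on can be sketched for a generic discrete problem. The 1D Laplacian K, control basis B, and least-squares tracking functional below are illustrative assumptions, not the paper's setup; the point is that one state solve plus one adjoint solve yields the full reduced gradient for a finite dimensional control:

```python
import numpy as np

def lap1d(n):
    """1D finite-difference Laplacian on n interior points of (0, 1)."""
    h = 1.0 / (n + 1)
    return (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def reduced_gradient(u, K, B, y_d, alpha):
    """Gradient of J(u) = 0.5||y - y_d||^2 + 0.5*alpha*||u||^2 with K y = B u,
    computed via the adjoint: one state solve and one adjoint solve."""
    y = np.linalg.solve(K, B @ u)        # state equation  K y = B u
    p = np.linalg.solve(K.T, y - y_d)    # adjoint equation K^T p = y - y_d
    return alpha * u + B.T @ p           # reduced gradient
```

Because the cost of a gradient is two PDE solves, a scheme that converges in a constant number of such evaluations achieves the "two to three constraint solves" complexity the abstract reports.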
A robust all-at-once multigrid method for the Stokes control problem
In this paper we present an all-at-once multigrid method for a distributed Stokes control problem (velocity tracking problem). To solve such a problem, we use the fact that the solution is characterized by the optimality system (Karush-Kuhn-Tucker system). The discretized optimality system is a large-scale linear system whose condition number depends on the grid size and on the choice of the regularization parameter that forms part of the problem. Recently, block-diagonal preconditioners have been proposed that allow the problem to be solved by a Krylov subspace method with convergence rates that are robust in both the grid size and the regularization or cost parameter. In the present paper, we develop an all-at-once multigrid method for a Stokes control problem and show robust convergence; more precisely, we show that the method converges with rates that are bounded away from one by a constant independent of the grid size and the choice of the regularization or cost parameter.
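The block structure of such a discretized optimality system can be sketched on a scalar model problem. Here a 1D Poisson constraint with identity mass and control operators stands in for the Stokes velocity tracking problem, and the direct solve of the 3x3 block system is purely illustrative (this is the system the paper attacks with multigrid, and others with block-diagonal preconditioned Krylov methods):

```python
import numpy as np

def lap1d(n):
    """1D finite-difference Laplacian on n interior points of (0, 1)."""
    h = 1.0 / (n + 1)
    return (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def kkt_solve(K, y_d, beta):
    """Assemble and solve the all-at-once optimality (KKT) system for
    min 0.5||y - y_d||^2 + 0.5*beta*||u||^2  subject to  K y = u.
    Rows: stationarity in y, stationarity in u, and the state equation."""
    n = len(y_d)
    I, Z = np.eye(n), np.zeros((n, n))
    A = np.block([[I, Z,        K.T],
                  [Z, beta * I, -I ],
                  [K, -I,       Z  ]])
    rhs = np.concatenate([y_d, np.zeros(n), np.zeros(n)])
    sol = np.linalg.solve(A, rhs)
    return sol[:n], sol[n:2*n], sol[2*n:]   # state y, control u, adjoint p
```

The conditioning of this saddle point matrix degrades as the grid is refined and as beta shrinks, which is exactly why robustness in both parameters is the benchmark for solvers of this system.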
Preconditioners for state constrained optimal control problems with Moreau-Yosida penalty function
Optimal control problems with partial differential equations as constraints play an important role in many applications. The inclusion of bound constraints for the state variable poses a significant challenge for optimization methods. Our focus here is on incorporating the constraints via the Moreau-Yosida regularization technique. This method has been studied recently and has proven advantageous compared to other approaches. In this paper we develop robust preconditioners for the efficient solution of the Newton steps that arise in solving the Moreau-Yosida regularized problem. Numerical results illustrate the efficiency of our approach.
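A minimal sketch of the Moreau-Yosida idea, assuming a generic discrete quadratic objective with an upper bound on the state: the bound y <= psi is replaced by a quadratic penalty on its violation, and the resulting piecewise-smooth problem is solved by a semismooth Newton iteration. The linear system in each step (here solved densely) is the kind of Newton step the paper's preconditioners target; all matrices and parameters below are illustrative:

```python
import numpy as np

def moreau_yosida_newton(A, b, psi, gamma=1e4, tol=1e-8, max_iter=50):
    """Semismooth Newton for the Moreau-Yosida penalized problem
    min 0.5 y^T A y - b^T y + 0.5*gamma*||max(0, y - psi)||^2,
    a smoothed surrogate for the state constraint y <= psi."""
    y = np.zeros_like(b)
    for _ in range(max_iter):
        active = y > psi                                  # penalty is 'on' here
        grad = A @ y - b + gamma * active * (y - psi)
        if np.linalg.norm(grad) < tol:
            break
        H = A + gamma * np.diag(active.astype(float))     # generalized Jacobian
        y -= np.linalg.solve(H, grad)                     # the Newton step
    return y
```

The constraint violation of the converged iterate is O(1/gamma), so sharper constraint enforcement means larger gamma and correspondingly worse-conditioned Newton systems; this trade-off is what makes robust preconditioning the central issue.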
Robust Optimization of PDEs with Random Coefficients Using a Multilevel Monte Carlo Method
This paper addresses optimization problems constrained by partial differential equations with uncertain coefficients. In particular, the robust control problem and the average control problem are considered for a tracking-type cost functional with an additional penalty on the variance of the state. The expressions for the gradient and Hessian corresponding to either problem contain expected value operators. Due to the large number of uncertainties considered in our model, we suggest evaluating these expectations using a multilevel Monte Carlo (MLMC) method. Under mild assumptions, it is shown that this results in the gradient and Hessian corresponding to the MLMC estimator of the original cost functional. Furthermore, we show that the use of certain correlated samples yields a reduction in the total number of samples required. Two optimization methods are investigated: the nonlinear conjugate gradient method and the Newton method. For both, a specific algorithm is provided that dynamically decides which and how many samples should be taken in each iteration. The cost of the optimization up to a specified tolerance is shown to be proportional to the cost of a gradient evaluation with the requested root mean square error. The algorithms are tested on a model elliptic diffusion problem with lognormal diffusion coefficient. An additional nonlinear term is also considered.

Comment: This work was presented at the IMG 2016 conference (Dec 5 - Dec 9, 2016), at the Copper Mountain conference (Mar 26 - Mar 30, 2017), and at the FrontUQ conference (Sept 5 - Sept 8, 2017).
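The correlated-samples idea behind MLMC can be sketched on a standard toy problem — geometric Brownian motion discretized by Euler-Maruyama, following the classic multilevel setup rather than the paper's PDE with lognormal coefficient. The telescoping sum estimates the finest-level expectation, and each correction term reuses the same Brownian increments on the fine and coarse grids, which is what keeps its variance small. All parameters below are illustrative:

```python
import numpy as np

def mlmc_estimate(levels, n_samples, rng):
    """Multilevel Monte Carlo estimate of E[S_T] for dS = mu*S dt + sigma*S dW,
    Euler-Maruyama with 2^l steps on level l.  Estimate =
    E[P_0] + sum_l E[P_l - P_{l-1}], each term by plain Monte Carlo."""
    mu, sigma, T, S0 = 0.05, 0.2, 1.0, 1.0
    total = 0.0
    for l in range(levels + 1):
        n = n_samples[l]
        nf = 2 ** l                        # fine time steps on level l
        dt = T / nf
        dW = rng.standard_normal((n, nf)) * np.sqrt(dt)
        Sf = np.full(n, S0)
        for k in range(nf):                # fine path
            Sf = Sf + mu * Sf * dt + sigma * Sf * dW[:, k]
        if l == 0:
            total += Sf.mean()
        else:
            # coarse path driven by the SAME noise: pairwise-summed increments
            Sc = np.full(n, S0)
            dWc = dW[:, 0::2] + dW[:, 1::2]
            for k in range(nf // 2):
                Sc = Sc + mu * Sc * (2 * dt) + sigma * Sc * dWc[:, k]
            total += (Sf - Sc).mean()      # low-variance correction term
    return total
```

Because Var[P_l - P_{l-1}] shrinks as the grids are refined, ever fewer samples are needed on the expensive fine levels; the same mechanism applied to gradients and Hessians underlies the cost bound stated in the abstract.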