
    Stochastic Galerkin Method for Optimal Control Problem Governed by Random Elliptic PDE with State Constraints

    In this paper, we investigate a stochastic Galerkin approximation scheme for an optimal control problem governed by an elliptic PDE with a random field in its coefficients. The optimal control minimizes the expectation of a cost functional subject to mean-state constraints. We first represent the stochastic elliptic PDE in terms of a generalized polynomial chaos expansion and obtain a family of parameterized optimal control problems. By applying the Slater condition from subdifferential calculus, we obtain necessary and sufficient optimality conditions for the state-constrained stochastic optimal control problem for the first time in the literature. We then establish a stochastic Galerkin scheme to approximate the optimality system in both the spatial space and the probability space. A priori error estimates are then derived for the state, the co-state, and the control variables. A projection algorithm is proposed and analyzed, and numerical examples are presented to illustrate our theoretical results.
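A projection algorithm of the kind mentioned above can be sketched, in a much-simplified deterministic setting, as a gradient-projection iteration for a discretized linear-quadratic control problem. Everything below is illustrative: the matrix `S` stands in for a discrete (mean-)state map, the box `[u_lo, u_hi]` stands in for the admissible set, and the parameter values are assumptions, not the paper's actual scheme.

```python
import numpy as np

def solve_projected_gradient(S, y_d, alpha, u_lo, u_hi, step=0.2, iters=500):
    """Minimize 0.5*||S u - y_d||^2 + 0.5*alpha*||u||^2 over the box [u_lo, u_hi]."""
    u = np.zeros(S.shape[1])
    for _ in range(iters):
        grad = S.T @ (S @ u - y_d) + alpha * u    # adjoint-based gradient
        u = np.clip(u - step * grad, u_lo, u_hi)  # projection onto the box
    return u

rng = np.random.default_rng(0)
S = rng.standard_normal((20, 10)) / np.sqrt(20)   # toy state map
y_d = rng.standard_normal(20)                     # toy target state
u = solve_projected_gradient(S, y_d, alpha=1e-2, u_lo=-1.0, u_hi=1.0)
print(u.min(), u.max())                           # iterate stays admissible
```

Each step combines a plain gradient descent update with a projection, so every iterate is feasible by construction; the step size must stay below 2 divided by the gradient's Lipschitz constant for the iteration to converge.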

    Analytic Regularity and GPC Approximation for Control Problems Constrained by Linear Parametric Elliptic and Parabolic PDEs

    This paper deals with linear-quadratic optimal control problems constrained by a parametric or stochastic elliptic or parabolic PDE. We address the (difficult) case that the state equation depends on a countable number of parameters, i.e., on $\sigma_j$ with $j \in \mathbb{N}$, and that the PDE operator may depend non-affinely on the parameters. We consider tracking-type functionals and distributed as well as boundary controls. Building on recent results in [CDS1, CDS2], we show that the state and the control are analytic as functions of these parameters $\sigma_j$. We establish sparsity of generalized polynomial chaos (gpc) expansions of both state and control in terms of the stochastic coordinate sequence $\sigma = (\sigma_j)_{j \ge 1}$ of the random inputs, and prove convergence rates of best $N$-term truncations of these expansions. Such truncations are the key for subsequent computations, since they do not assume that the stochastic input data has a finite expansion. In the follow-up paper [KS2], we explain two methods by which such best $N$-term truncations can practically be computed: by greedy-type algorithms as in [SG, Gi1], or by multilevel Monte-Carlo methods as in [KSS]. The sparsity result, in conjunction with the adaptive wavelet Galerkin schemes for sparse, adaptive tensor discretizations of control problems constrained by linear elliptic and parabolic PDEs developed in [DK, GK, K], allows such computations; see [KS2].
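The best $N$-term truncation discussed above is simple to state computationally: given the coefficients $c_k$ of an expansion $u = \sum_k c_k \Phi_k$, keep only the $N$ coefficients of largest magnitude. A minimal sketch, using a hypothetical algebraically decaying coefficient sequence as a stand-in for the sparsity the paper proves:

```python
import numpy as np

def best_n_term(c, N):
    """Keep the N largest-magnitude coefficients; zero out the rest."""
    idx = np.argsort(np.abs(c))[::-1][:N]   # indices of the N largest |c_k|
    c_N = np.zeros_like(c)
    c_N[idx] = c[idx]
    return idx, c_N

k = np.arange(1, 201)
c = (-1.0) ** k / k ** 2                    # illustrative decaying coefficients
for N in (5, 10, 20):
    _, c_N = best_n_term(c, N)
    print(N, np.linalg.norm(c - c_N))       # l2 truncation error shrinks with N
```

The convergence rate of this error as $N$ grows is exactly what the paper's sparsity analysis quantifies for the state and control expansions.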

    Solving optimal control problems governed by random Navier-Stokes equations using low-rank methods

    Many problems in computational science and engineering are simultaneously characterized by the following challenging issues: uncertainty, nonlinearity, nonstationarity, and high dimensionality. Existing numerical techniques for such models would typically require considerable computational and storage resources. This is the case, for instance, for an optimization problem governed by time-dependent Navier-Stokes equations with uncertain inputs. In particular, the stochastic Galerkin finite element method often leads to a prohibitively high-dimensional saddle-point system with tensor product structure. In this paper, we approximate the solution by the low-rank Tensor Train decomposition, and present a numerically efficient algorithm to solve the optimality equations directly in the low-rank representation. We show that the solution of the vorticity minimization problem with a distributed control admits a representation with ranks that depend modestly on model and discretization parameters even for high Reynolds numbers. For lower Reynolds numbers this is also the case for a boundary control. This opens the way for a reduced-order modeling of the stochastic optimal flow control with a moderate cost at all stages. Comment: 29 pages
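The Tensor Train format used above can be illustrated by the standard TT-SVD construction: sequential reshapes and truncated SVDs compress a $d$-way array into a chain of three-way cores. The small rank-one tensor below is a toy stand-in for the discretized saddle-point solution; this is not the paper's solver, only the underlying decomposition.

```python
import numpy as np

def tt_svd(A, eps=1e-10):
    """Decompose a tensor A (shape n1 x ... x nd) into TT cores via sequential SVDs."""
    dims = A.shape
    cores, r_prev = [], 1
    C = np.asarray(A, dtype=float)
    for k in range(len(dims) - 1):
        C = C.reshape(r_prev * dims[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))       # truncated TT rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = s[:r, None] * Vt[:r]                      # carry the remainder onward
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

def tt_full(cores):
    """Contract the TT cores back to the full tensor (for checking)."""
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=([-1], [0]))
    return out.reshape([G.shape[1] for G in cores])

rng = np.random.default_rng(1)
A = np.einsum('i,j,k,l->ijkl', *[rng.standard_normal(6) for _ in range(4)])
cores = tt_svd(A)
print([G.shape for G in cores])   # rank-1 tensor compresses to all TT ranks 1
```

Storage drops from the product of all mode sizes to a sum of small core sizes, which is what makes operating "directly in the low-rank representation" feasible.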

    Robust Optimization of PDEs with Random Coefficients Using a Multilevel Monte Carlo Method

    This paper addresses optimization problems constrained by partial differential equations with uncertain coefficients. In particular, the robust control problem and the average control problem are considered for a tracking-type cost functional with an additional penalty on the variance of the state. The expressions for the gradient and Hessian corresponding to either problem contain expected value operators. Due to the large number of uncertainties considered in our model, we suggest evaluating these expectations using a multilevel Monte Carlo (MLMC) method. Under mild assumptions, it is shown that this results in the gradient and Hessian corresponding to the MLMC estimator of the original cost functional. Furthermore, we show that the use of certain correlated samples yields a reduction in the total number of samples required. Two optimization methods are investigated: the nonlinear conjugate gradient method and the Newton method. For both, a specific algorithm is provided that dynamically decides which and how many samples should be taken in each iteration. The cost of the optimization up to some specified tolerance $\tau$ is shown to be proportional to the cost of a gradient evaluation with requested root mean square error $\tau$. The algorithms are tested on a model elliptic diffusion problem with lognormal diffusion coefficient. An additional nonlinear term is also considered. Comment: This work was presented at the IMG 2016 conference (Dec 5 - Dec 9, 2016), at the Copper Mountain conference (Mar 26 - Mar 30, 2017), and at the FrontUQ conference (Sept 5 - Sept 8, 2017).
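The role of correlated samples in MLMC can be seen in a minimal sketch: on each level, the fine and coarse approximations are evaluated at the same random input, so their difference has small variance and needs few samples. The toy quantity below, $\mathbb{E}[w^2]$ for $w \sim U(0,1)$ with $w$ discretized at resolution $2^{-(l+1)}$, is an assumption standing in for the PDE-based gradient components; the sample counts are illustrative, not the paper's adaptive choices.

```python
import math
import random

def q_level(w, l):
    """Level-l approximation of w**2: round w down to a grid of width 2**-(l+1)."""
    h = 2.0 ** (-(l + 1))
    return (math.floor(w / h) * h) ** 2

def mlmc(L, n_samples, seed=0):
    """MLMC estimator: telescoping sum of level corrections with coupled samples."""
    rng = random.Random(seed)
    est = 0.0
    for l in range(L + 1):
        acc = 0.0
        for _ in range(n_samples[l]):
            w = rng.random()                       # one input drives both levels
            fine = q_level(w, l)
            coarse = q_level(w, l - 1) if l > 0 else 0.0
            acc += fine - coarse
        est += acc / n_samples[l]
    return est

# many samples on the cheap coarse levels, few on the expensive fine ones
est = mlmc(L=6, n_samples=[40000, 20000, 10000, 5000, 2500, 1250, 600])
print(est)   # close to E[w^2] = 1/3
```

Because each correction term shrinks with the level, most samples can be spent where they are cheapest, which is the source of MLMC's cost savings over plain Monte Carlo.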

    Numerical Methods for PDE Constrained Optimization with Uncertain Data

    Optimization problems governed by partial differential equations (PDEs) arise in many applications in the form of optimal control, optimal design, or parameter identification problems. In most applications, parameters in the governing PDEs are not deterministic, but rather have to be modeled as random variables or, more generally, as random fields. It is crucial to capture and quantify the uncertainty in such problems rather than to simply replace the uncertain coefficients with their mean values. However, treating the uncertainty adequately and in a computationally tractable manner poses many mathematical challenges. The numerical solution of optimization problems governed by stochastic PDEs builds on mathematical subareas which so far have been largely investigated in separate communities: Stochastic Programming, the Numerical Solution of Stochastic PDEs, and PDE-Constrained Optimization. The workshop gave an impulse toward cross-fertilization of these disciplines, which was also the subject of several scientific discussions. It is to be expected that future exchange of ideas between these areas will give rise to new insights and powerful new numerical methods.

    Low rank approximation method for perturbed linear systems with applications to elliptic type stochastic PDEs

    In this paper, we propose a low-rank approximation method for efficiently solving stochastic partial differential equations. Specifically, our method utilizes a novel low-rank approximation of the stiffness matrices, which can significantly reduce the computational load and storage requirements associated with matrix inversion without losing accuracy. To demonstrate the versatility and applicability of our method, we apply it to two crucial uncertainty quantification problems: stochastic elliptic equations and optimal control problems governed by stochastic elliptic PDE constraints. Depending on the dimension reduction ratio, our algorithm can either yield a high-precision numerical solution of the stochastic partial differential equation or provide a rough approximation of the exact solution as a pre-processing step. Meanwhile, our algorithm for solving stochastic optimal control problems admits a diverse range of gradient-based unconstrained optimization methods, rendering it particularly appealing for computationally intensive large-scale problems. Numerical experiments are conducted, and the results provide strong validation of the feasibility and effectiveness of our algorithm.
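One standard way to exploit low-rank structure in perturbed stiffness matrices, sketched below, is the Sherman-Morrison-Woodbury identity: once the base matrix $A_0$ is factorized, each system $(A_0 + UV^T)x = b$ costs only solves with $A_0$ plus a small dense $k \times k$ solve. The diagonal $A_0$ and random low-rank factors are illustrative assumptions; the paper's own approximation scheme may differ in detail.

```python
import numpy as np

def woodbury_solve(a0_solve, U, V, b):
    """Solve (A0 + U V^T) x = b given a fast solver for A0 alone."""
    A0_inv_b = a0_solve(b)
    A0_inv_U = a0_solve(U)
    k = U.shape[1]
    capacitance = np.eye(k) + V.T @ A0_inv_U        # small k x k system
    return A0_inv_b - A0_inv_U @ np.linalg.solve(capacitance, V.T @ A0_inv_b)

rng = np.random.default_rng(2)
n, k = 200, 3
d = 2.0 + rng.random(n)                             # cheap-to-invert base matrix
A0 = np.diag(d)
U = 0.05 * rng.standard_normal((n, k))              # low-rank perturbation factors
V = 0.05 * rng.standard_normal((n, k))
b = rng.standard_normal(n)

a0_solve = lambda rhs: (rhs.T / d).T                # diagonal solve, 1-D or 2-D rhs
x = woodbury_solve(a0_solve, U, V, b)
print(np.linalg.norm((A0 + U @ V.T) @ x - b))       # residual is near zero
```

The n-by-n perturbed system is never formed or factorized; only the rank-k capacitance matrix is inverted, which is the source of the savings when many perturbed systems share the same base matrix.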