    Solving optimal control problems governed by random Navier-Stokes equations using low-rank methods

    Many problems in computational science and engineering are simultaneously characterized by the following challenging issues: uncertainty, nonlinearity, nonstationarity, and high dimensionality. Existing numerical techniques for such models typically require considerable computational and storage resources. This is the case, for instance, for an optimization problem governed by the time-dependent Navier-Stokes equations with uncertain inputs. In particular, the stochastic Galerkin finite element method often leads to a prohibitively high-dimensional saddle-point system with tensor product structure. In this paper, we approximate the solution by the low-rank Tensor Train decomposition and present a numerically efficient algorithm to solve the optimality equations directly in the low-rank representation. We show that the solution of the vorticity minimization problem with a distributed control admits a representation with ranks that depend only modestly on model and discretization parameters, even for high Reynolds numbers. For lower Reynolds numbers this is also the case for a boundary control. This opens the way for reduced-order modeling of stochastic optimal flow control with moderate cost at all stages.
    Comment: 29 pages
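    The Tensor Train format mentioned above can be sketched on a toy dense tensor. The snippet below is a minimal illustration of the standard TT-SVD construction (successive truncated SVDs of unfoldings), not the paper's solver; all names (`tt_svd`, `tt_reconstruct`, the test tensor) are illustrative.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a d-way tensor into Tensor Train cores via successive
    truncated SVDs (TT-SVD). If the TT ranks stay small, storage scales
    linearly in d instead of exponentially."""
    dims = tensor.shape
    d = len(dims)
    cores = []
    rank = 1
    mat = tensor.reshape(rank * dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        new_rank = min(max_rank, len(s))
        U, s, Vt = U[:, :new_rank], s[:new_rank], Vt[:new_rank]
        cores.append(U.reshape(rank, dims[k], new_rank))
        rank = new_rank
        mat = (np.diag(s) @ Vt).reshape(rank * dims[k + 1], -1)
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full[0, ..., 0]  # drop the boundary ranks of size 1
```

    A tensor that is an exact sum of two outer products has TT ranks at most 2, so `tt_svd` with `max_rank=2` reconstructs it to machine precision; this is the sense in which "modest ranks" translate into large compression.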

    Robust Optimization of PDEs with Random Coefficients Using a Multilevel Monte Carlo Method

    This paper addresses optimization problems constrained by partial differential equations with uncertain coefficients. In particular, the robust control problem and the average control problem are considered for a tracking-type cost functional with an additional penalty on the variance of the state. The expressions for the gradient and Hessian corresponding to either problem contain expected value operators. Due to the large number of uncertainties considered in our model, we propose evaluating these expectations using a multilevel Monte Carlo (MLMC) method. Under mild assumptions, it is shown that this results in the gradient and Hessian corresponding to the MLMC estimator of the original cost functional. Furthermore, we show that the use of certain correlated samples yields a reduction in the total number of samples required. Two optimization methods are investigated: the nonlinear conjugate gradient method and the Newton method. For both, a specific algorithm is provided that dynamically decides which and how many samples should be taken in each iteration. The cost of the optimization up to some specified tolerance τ is shown to be proportional to the cost of a gradient evaluation with requested root mean square error τ. The algorithms are tested on a model elliptic diffusion problem with lognormal diffusion coefficient. An additional nonlinear term is also considered.
    Comment: This work was presented at the IMG 2016 conference (Dec 5 - Dec 9, 2016), at the Copper Mountain conference (Mar 26 - Mar 30, 2017), and at the FrontUQ conference (Sept 5 - Sept 8, 2017).
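    The MLMC idea with correlated samples can be illustrated on a scalar toy problem: write the expectation at the finest discretization level as a coarse estimate plus level-by-level corrections, and evaluate each correction with the same random inputs on both levels so its variance is small. The sketch below uses a midpoint-rule integral as a hypothetical stand-in for a PDE solve at mesh level l; it is not the paper's estimator, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def qoi(omega, level):
    """Level-l approximation of a toy quantity of interest: the integral
    of sin(omega * x) over [0, 1] by the midpoint rule on 2**(level + 1)
    cells (a hypothetical stand-in for a PDE solve on mesh level l)."""
    n = 2 ** (level + 1)
    x = (np.arange(n) + 0.5) / n
    return np.mean(np.sin(omega * x))

def mlmc(levels, samples_per_level):
    """Multilevel Monte Carlo estimate of E[qoi]: a coarse-level estimate
    plus corrections E[P_l - P_{l-1}], each computed with the SAME random
    inputs on both levels so the corrections have small variance and need
    few (expensive) fine-level samples."""
    est = 0.0
    for level, n_samp in zip(range(levels + 1), samples_per_level):
        omegas = rng.uniform(0.5, 1.5, size=n_samp)
        fine = np.array([qoi(w, level) for w in omegas])
        coarse = (np.array([qoi(w, level - 1) for w in omegas])
                  if level > 0 else 0.0)
        est += np.mean(fine - coarse)
    return est
```

    Most samples are taken on the cheap coarse level and only a handful on the expensive fine levels, which is the source of the cost reduction the abstract refers to.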

    Chance constraints in PDE constrained optimization

    Chance constraints represent a popular tool for finding decisions that enforce a robust satisfaction of random inequality systems in terms of probability. They are widely used in optimization problems subject to uncertain parameters, as they arise in many engineering applications. Most structural results on chance constraints (e.g., closedness, convexity, Lipschitz continuity, differentiability) have been formulated in a finite-dimensional setting. The aim of this paper is to generalize some of these well-known semi-continuity and convexity properties to a setting of control problems subject to (uniform) state chance constraints.
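    The basic mechanism of a chance constraint P[g(z, ξ) ≤ 0] ≥ p can be sketched in a scalar Monte Carlo toy problem, assuming g(z, ξ) = ξ - z with standard normal ξ (so the exact answer is the normal quantile). This is only an illustration of the constraint format, not this paper's infinite-dimensional setting; `prob_feasible` and `smallest_feasible` are hypothetical names.

```python
import numpy as np

rng = np.random.default_rng(0)
xis = rng.standard_normal(200_000)  # samples of the uncertain parameter

def prob_feasible(z):
    """Empirical estimate of P[g(z, xi) <= 0] for g(z, xi) = xi - z,
    i.e. the probability that the random inequality xi <= z holds."""
    return np.mean(xis <= z)

def smallest_feasible(p_level, lo=-5.0, hi=5.0, iters=60):
    """Bisect for the smallest decision z with prob_feasible(z) >= p_level.
    prob_feasible is monotone nondecreasing in z here, so bisection
    applies; hi is kept feasible throughout."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if prob_feasible(mid) >= p_level:
            hi = mid
        else:
            lo = mid
    return hi
```

    At probability level 0.95 the bisection recovers (up to sampling error) the standard normal 95% quantile 1.645, illustrating how a chance constraint tightens the deterministic inequality by a safety margin.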

    Adaptive sampling strategies for risk-averse stochastic optimization with constraints

    We introduce adaptive sampling methods for risk-neutral and risk-averse stochastic programs with deterministic constraints. In particular, we propose a variant of the stochastic projected gradient method in which the sample size used to approximate the reduced gradient is determined a posteriori and updated adaptively. We also propose an SQP-type method based on similar adaptive sampling principles. Both methods lead to a significant reduction in computational cost relative to fixed-sample-size approaches. Numerical experiments from finance and engineering illustrate the performance and efficacy of the presented algorithms. The methods here are applicable to a broad class of expectation-based risk measures; however, we focus mainly on expected risk and conditional value-at-risk minimization problems.
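    The a posteriori sample-size idea can be sketched with a generic norm-type test on a toy quadratic objective: at each iterate, double the batch until the standard error of the sampled gradient is a fixed fraction of its magnitude, then take a projected step. This is a minimal sketch of the general principle under assumed parameter choices, not the algorithm from the paper; all names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_grad(z, n):
    """Monte Carlo gradient of the toy objective E[0.5 * (z - xi)^2] with
    xi ~ N(1, 1); the stochastic gradient is z - xi. Returns the sample
    mean and its standard error."""
    g = z - rng.normal(1.0, 1.0, size=n)
    return g.mean(), g.std(ddof=1) / np.sqrt(n)

def adaptive_projected_gradient(z0, lo, hi, step=0.5, theta=0.5,
                                n0=8, n_max=10_000, iters=40):
    """Projected stochastic gradient with an a posteriori sample-size
    rule: double the batch until the standard error of the gradient
    estimate is at most theta times its magnitude (a norm-type test),
    then take a projected step onto the box [lo, hi]."""
    z, n = z0, n0
    for _ in range(iters):
        g, se = sample_grad(z, n)
        while se > theta * abs(g) and n < n_max:
            n *= 2                            # test failed: refine estimate
            g, se = sample_grad(z, n)
        z = min(max(z - step * g, lo), hi)    # projection onto the box
    return z, n
```

    Far from the solution the gradient is large and cheap small batches pass the test; near the solution the gradient shrinks and the rule automatically demands more samples, which is where the cost savings over a fixed large sample size come from.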

    Optimality Conditions for Convex Stochastic Optimization Problems in Banach Spaces with Almost Sure State Constraints

    We analyze a convex stochastic optimization problem where the state is assumed to belong to the Bochner space of essentially bounded random variables with values in a reflexive and separable Banach space. For this problem, we obtain optimality conditions that, for an appropriate model, are both necessary and sufficient. Additionally, the Lagrange multipliers associated with the optimality conditions are integrable vector-valued functions, rather than merely measures. A model problem is given demonstrating the application to PDE-constrained optimization under uncertainty, with an outlook for further applications.