
    A Bramble-Pasciak conjugate gradient method for discrete Stokes equations with random viscosity

    We study the iterative solution of linear systems of equations arising from stochastic Galerkin finite element discretizations of saddle point problems. We focus on the Stokes model with random data parametrized by uniformly distributed random variables and discuss well-posedness of the variational formulations. We introduce a Bramble-Pasciak conjugate gradient method as a linear solver. It builds on a non-standard inner product associated with a block triangular preconditioner. The block triangular structure enables more sophisticated preconditioners than the block diagonal structure usually applied in MINRES methods. We show how the existence requirements of a conjugate gradient method can be met in our setting. We analyze the performance of the solvers depending on relevant physical and numerical parameters by means of eigenvalue estimates. For this purpose, we derive bounds for the eigenvalues of the relevant preconditioned sub-matrices. We illustrate our findings using the flow in a driven cavity as a numerical test case, where the viscosity is given by a truncated Karhunen-Loève expansion of a random field. In this example, a Bramble-Pasciak conjugate gradient method with block triangular preconditioner outperforms a MINRES method with block diagonal preconditioner in terms of iteration numbers. Comment: 19 pages, 1 figure, submitted to SIAM JU
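
    To make the block triangular idea concrete, the sketch below solves a small saddle-point system with a block lower-triangular preconditioner applied inside GMRES. It is a stand-in, not the paper's Bramble-Pasciak CG (which relies on a non-standard inner product to retain a genuine CG iteration); the matrices, sizes and exact block solves are illustrative assumptions only.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Toy Stokes-like saddle-point matrix [[A, B^T], [B, 0]] (all data made up).
    n, m = 50, 20
    A = sp.diags([2.0, -1.0, -1.0], [0, -1, 1], shape=(n, n)).tocsc()
    B = sp.random(m, n, density=0.2, random_state=0).tocsc()
    K = sp.bmat([[A, B.T], [B, None]]).tocsc()
    rhs = np.ones(n + m)

    # Block lower-triangular preconditioner [[A0, 0], [B, -S0]]; here A0 = A and
    # S0 = B A^{-1} B^T are used exactly, standing in for cheap approximations.
    A_solve = spla.factorized(A)
    S = B.toarray() @ np.linalg.solve(A.toarray(), B.T.toarray())
    S_inv = np.linalg.inv(S)

    def block_triangular_solve(r):
        # Solve [[A0, 0], [B, -S0]] z = r by forward substitution over the blocks.
        r1, r2 = r[:n], r[n:]
        z1 = A_solve(r1)
        z2 = -S_inv @ (r2 - B @ z1)
        return np.concatenate([z1, z2])

    P = spla.LinearOperator((n + m, n + m), matvec=block_triangular_solve)
    x, info = spla.gmres(K, rhs, M=P)
    print("converged:", info == 0, " residual:", np.linalg.norm(K @ x - rhs))

    In practice the exact solves with A and the Schur complement would be replaced by cheaper approximations, which is where eigenvalue bounds for the preconditioned sub-matrices, as analyzed in the paper, become relevant.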

    Fast solution of Cahn-Hilliard variational inequalities using implicit time discretization and finite elements

    We consider the efficient solution of the Cahn-Hilliard variational inequality using an implicit time discretization, which is formulated as an optimal control problem with pointwise constraints on the control. By applying a semi-smooth Newton method combined with a Moreau-Yosida regularization technique for handling the control constraints we show superlinear convergence in function space. At the heart of this method lies the solution of large and sparse linear systems for which we propose the use of preconditioned Krylov subspace solvers using an effective Schur complement approximation. Numerical results illustrate the competitiveness of this approach.
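
    As a rough illustration of the ingredients named above, the following sketch applies a semi-smooth Newton iteration with Moreau-Yosida regularization to a toy bound-constrained quadratic problem. It is not the paper's Cahn-Hilliard solver; the matrix, bound and penalty parameter are assumptions chosen only to show the structure of the Newton step.

    import numpy as np

    # Toy problem: min 0.5 u'Au - b'u  subject to  u <= ub, handled via the
    # Moreau-Yosida penalty (c/2)*||max(0, u - ub)||^2 (all data made up).
    n = 100
    A = 2.0 * np.eye(n) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
    b = np.linspace(0.0, 3.0, n)
    ub = np.ones(n)
    c = 1e6                                              # Moreau-Yosida penalty parameter

    u = np.zeros(n)
    for k in range(30):
        violation = np.maximum(0.0, u - ub)              # max(0, u - ub)
        grad = A @ u - b + c * violation                 # gradient of the regularized objective
        if np.linalg.norm(grad) < 1e-8:
            break
        active = (u > ub).astype(float)                  # generalized derivative of max(0, .)
        H = A + c * np.diag(active)                      # semi-smooth Newton matrix
        u = u + np.linalg.solve(H, -grad)
    print(f"stopped after {k} iterations, max violation {np.max(u - ub):.2e}")

    In the paper the corresponding Newton systems are large and sparse, which is what motivates the preconditioned Krylov solvers with a Schur complement approximation.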

    All-at-once preconditioning in PDE-constrained optimization

    The optimization of functions subject to partial differential equations (PDEs) plays an important role in many areas of science and industry. In this paper we introduce the basic concepts of PDE-constrained optimization and show how the all-at-once approach leads to linear systems in saddle point form. We discuss implementation details and different boundary conditions. We then show how these systems can be solved efficiently and discuss methods and preconditioners also in the case when bound constraints for the control are introduced. Numerical results illustrate the competitiveness of our techniques.
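
    The sketch below shows what such an all-at-once saddle point system looks like for a toy 1D distributed-control problem, solved with MINRES and an idealized block-diagonal preconditioner. The mesh size, regularization parameter and desired state are assumptions; a practical preconditioner would replace the exact block solves used here with cheap approximations.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Toy all-at-once KKT system for  min 0.5*||y - yd||^2 + 0.5*beta*||u||^2
    # subject to  K y = M u  (1D Poisson control problem; all data made up).
    n, beta = 64, 1e-4
    h = 1.0 / (n + 1)
    K = sp.diags([2.0, -1.0, -1.0], [0, -1, 1], shape=(n, n)) / h      # stiffness matrix
    M = sp.identity(n) * h                                             # lumped mass matrix
    yd = np.sin(np.pi * np.linspace(h, 1.0 - h, n))                    # desired state

    A = sp.bmat([[M, None, K.T],
                 [None, beta * M, -M],
                 [K, -M, None]]).tocsc()                               # saddle point KKT matrix
    rhs = np.concatenate([M @ yd, np.zeros(n), np.zeros(n)])

    # Block-diagonal preconditioner diag(M, beta*M, S) with the exact Schur
    # complement S = K M^{-1} K^T + (1/beta) M (approximated in practice).
    S = (K @ sp.diags(1.0 / M.diagonal()) @ K.T + M / beta).tocsc()
    P = sp.block_diag([M, beta * M, S]).tocsc()
    P_solve = spla.factorized(P)
    prec = spla.LinearOperator(A.shape, matvec=P_solve)

    x, info = spla.minres(A, rhs, M=prec)
    y, u, p = np.split(x, 3)
    print("MINRES flag:", info, " state-equation residual:", np.linalg.norm(K @ y - M @ u))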

    Parallel Algorithms for Nonlinear Programming and Applications in Pharmaceutical Manufacturing

    Effective manufacturing of pharmaceuticals presents a number of challenging optimization problems due to complex distributed, time-independent models and the need to handle uncertainty. These challenges are multiplied when real-time solutions are required. The demand for fast solution of nonlinear optimization problems, coupled with the emergence of new concurrent computing architectures, drives the need for parallel algorithms to solve challenging NLP problems. The goal of this work is the development of parallel algorithms for nonlinear programming problems on different computing architectures, and the application of large-scale nonlinear programming to challenging problems in pharmaceutical manufacturing.

    An Efficient Interior-Point Decomposition Algorithm for Parallel Solution of Large-Scale Nonlinear Problems with Significant Variable Coupling

    In this dissertation we develop multiple algorithms for efficient parallel solution of structured nonlinear programming problems by decomposition of the linear augmented system solved at each iteration of a nonlinear interior-point approach. In particular, we address large-scale, block-structured problems with a significant number of complicating, or coupling variables. This structure arises in many important problem classes including multi-scenario optimization, parameter estimation, two-stage stochastic programming, optimal control and power network problems. The structure of these problems induces a block-angular structure in the augmented system, and parallel solution is possible using a Schur-complement decomposition. Three major variants are implemented: a serial, full-space interior-point method, serial and parallel versions of an explicit Schur-complement decomposition, and serial and parallel versions of an implicit PCG-based Schur-complement decomposition. All of these algorithms have been implemented in C++ in an extensible software framework for nonlinear optimization. The explicit Schur-complement decomposition is typically effective for problems with a few hundred coupling variables. We demonstrate the performance of our implementation on an important problem in optimal power grid operation, the contingency-constrained AC optimal power flow problem. In this dissertation, we present a rectangular IV formulation for the contingency-constrained ACOPF problem and demonstrate that the explicit Schur-complement decomposition can dramatically reduce solution times for a problem with a large number of contingency scenarios. Moreover, we compare the explicit Schur-complement decomposition implementation with the Progressive Hedging approach provided by Pyomo, showing that the internal decomposition approach is computationally favorable to the external approach. However, the explicit Schur-complement decomposition approach is not appropriate for problems with a large number of coupling variables because of the high computational cost associated with forming and solving the dense Schur-complement. We show that this bottleneck can be overcome by solving the Schur-complement equations implicitly using a quasi-Newton preconditioned conjugate gradient method. This new algorithm avoids explicit formation and factorization of the Schur-complement. The computational efficiency of the serial and parallel versions of this algorithm are compared with the serial full-space approach, and the serial and parallel explicit Schur-complement approach on a set of quadratic parameter estimation problems and nonlinear optimization problems. These results show that the PCG implicit Schur-complement approach dramatically reduces the computational expense for problems with many coupling variables.
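
    A serial, dense sketch of the explicit Schur-complement decomposition on a generic block-angular system with two scenario blocks and a handful of coupling variables is given below. All matrices are random stand-ins, and nothing here reflects the dissertation's C++ framework or its interior-point context; it only shows the elimination pattern.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 30, 5                                     # scenario size, number of coupling variables

    def spd(k):                                      # random SPD block (illustrative only)
        Q = rng.standard_normal((k, k))
        return Q @ Q.T + k * np.eye(k)

    K = [spd(n), spd(n)]                             # independent scenario blocks
    A = [rng.standard_normal((n, m)) for _ in K]     # coupling columns per scenario
    K0 = spd(m)                                      # coupling-variable block
    b = [rng.standard_normal(n) for _ in K]
    b0 = rng.standard_normal(m)

    # Form the Schur complement S = K0 - sum_i A_i^T K_i^{-1} A_i and its right-hand
    # side; in a parallel code each term is computed independently per scenario.
    S, r = K0.copy(), b0.copy()
    for Ki, Ai, bi in zip(K, A, b):
        S -= Ai.T @ np.linalg.solve(Ki, Ai)
        r -= Ai.T @ np.linalg.solve(Ki, bi)

    x0 = np.linalg.solve(S, r)                                               # coupling variables
    x = [np.linalg.solve(Ki, bi - Ai @ x0) for Ki, Ai, bi in zip(K, A, b)]   # back-substitution

    # Verify against the monolithic block-angular system.
    full = np.block([[K[0], np.zeros((n, n)), A[0]],
                     [np.zeros((n, n)), K[1], A[1]],
                     [A[0].T, A[1].T, K0]])
    res = full @ np.concatenate([x[0], x[1], x0]) - np.concatenate([b[0], b[1], b0])
    print("residual:", np.linalg.norm(res))

    The implicit variant described above would avoid forming S altogether and instead apply it matrix-free inside a preconditioned conjugate gradient iteration.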

    Preconditioning for active set and projected gradient methods as semi-smooth Newton methods for PDE-constrained optimization with control constraints

    Optimal control problems with partial differential equations play an important role in many applications. The inclusion of bound constraints for the control poses a significant additional challenge for optimization methods. In this paper we propose preconditioners for the saddle point problems that arise when a primal-dual active set method is used. We also show for this method that the same saddle point system can be derived when the method is considered as a semi-smooth Newton method. In addition, the projected gradient method can be employed to solve optimization problems with simple bounds, and we discuss the efficient solution of the linear systems in question. In the case when an acceleration technique is employed for the projected gradient method, this again yields a semi-smooth Newton method that is equivalent to the primal-dual active set method. Numerical results illustrate the competitiveness of this approach.
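
    For illustration, the sketch below runs the primal-dual active set iteration on a toy bound-constrained quadratic problem; each pass solves a reduced linear system on the currently inactive set, and it is linear systems of this kind (in saddle point form in the PDE-constrained setting) that the proposed preconditioners target. The matrix, bound and parameter c are assumptions.

    import numpy as np

    # Toy problem: min 0.5 u'Au - b'u  subject to  u <= ub (all data made up).
    n = 80
    A = 2.0 * np.eye(n) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
    b = np.linspace(0.0, 2.0, n)
    ub = np.full(n, 0.5)
    c = 1.0

    u, lam = np.zeros(n), np.zeros(n)
    active_prev = None
    for k in range(50):
        active = lam + c * (u - ub) > 0.0                # predicted active set
        if active_prev is not None and np.array_equal(active, active_prev):
            break                                        # active set unchanged: KKT point reached
        inactive = ~active
        u = np.where(active, ub, 0.0)                    # fix u at the bound on the active set
        if inactive.any():
            # Reduced system on the inactive set: A_II u_I = b_I - A_IA ub_A
            AII = A[np.ix_(inactive, inactive)]
            rhs = b[inactive] - A[np.ix_(inactive, active)] @ ub[active]
            u[inactive] = np.linalg.solve(AII, rhs)
        lam = np.where(active, b - A @ u, 0.0)           # multiplier update, zero on inactive set
        active_prev = active
    print(f"stopped after {k} iterations, max(u - ub) = {np.max(u - ub):.2e}")

    The equivalence with a semi-smooth Newton method mentioned above comes from rewriting the complementarity conditions as lambda = max(0, lambda + c*(u - ub)) and applying a Newton step to this non-smooth equation.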

    A low-rank in time approach to PDE-constrained optimization
