Some Preconditioning Techniques for Saddle Point Problems
Saddle point problems arise frequently in many applications in science and engineering, including constrained optimization, mixed finite element formulations of partial differential equations, circuit analysis, and so forth. Indeed, the formulation of most problems with constraints gives rise to saddle point systems. This paper provides a concise overview of iterative approaches for the solution of such systems, which are of particular importance in the context of large-scale computation. In particular, we describe some of the most useful preconditioning techniques for Krylov subspace solvers applied to saddle point problems, including block and constrained preconditioners.
The work of Michele Benzi was supported in part by the National Science Foundation grant DMS-0511336.
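As a hedged illustration of the block preconditioners surveyed above (a generic sketch, not a method from the paper): for a saddle point matrix K = [[A, Bᵀ], [B, 0]] with A symmetric positive definite, a classical choice is the block diagonal preconditioner P = diag(A, S), where S = B A⁻¹ Bᵀ is the Schur complement, used inside MINRES. The toy system below uses a diagonal A so that S is cheap to form; in realistic applications both blocks would be replaced by approximations.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n, m = 50, 20
A = sp.diags(rng.uniform(1.0, 2.0, n))            # SPD (1,1) block (diagonal toy)
B = sp.random(m, n, density=0.3, random_state=0)  # constraint block
K = sp.bmat([[A, B.T], [B, None]], format="csc")  # symmetric saddle point matrix
rhs = np.ones(n + m)

# Ideal block diagonal preconditioner P = diag(A, S) with the Schur
# complement S = B A^{-1} B^T (cheap here because A is diagonal).
Ainv = sp.diags(1.0 / A.diagonal())
S = (B @ Ainv @ B.T).tocsc()
P = sp.block_diag([A, S], format="csc")
apply_Pinv = spla.factorized(P)                   # sparse LU, reused each iteration
M = spla.LinearOperator(K.shape, matvec=apply_Pinv)

# MINRES handles the symmetric indefinite K; the preconditioner must be SPD.
x, info = spla.minres(K, rhs, M=M, maxiter=500)
```

With the exact Schur complement, the preconditioned matrix has only three distinct eigenvalues, so MINRES converges in a handful of iterations; practical preconditioners trade this sharpness for cheaper application.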
Preconditioners for state constrained optimal control problems with Moreau-Yosida penalty function
Optimal control problems with partial differential equations as constraints play an important role in many applications. The inclusion of bound constraints for the state variable poses a significant challenge for optimization methods. Our focus here is on the incorporation of the constraints via the Moreau-Yosida regularization technique. This method has been studied recently and has proven advantageous compared to other approaches. In this paper we develop robust preconditioners for the efficient solution of the Newton steps associated with solving the Moreau-Yosida regularized problem. Numerical results illustrate the efficiency of our approach.
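To make the Moreau-Yosida idea concrete, here is a minimal hypothetical sketch (a finite-dimensional toy, not the paper's PDE setting or its preconditioners): the state bound y ≤ b is replaced by the smooth penalty (γ/2)‖max(0, y − b)‖², and the resulting piecewise-smooth optimality system is solved by a semismooth Newton iteration.

```python
import numpy as np

# Toy tracking problem: min 0.5*||y - yd||^2 subject to y <= b, handled via
# the Moreau-Yosida penalty (gamma/2)*||max(0, y - b)||^2.
def semismooth_newton(yd, b, gamma, iters=20):
    y = yd.copy()
    for _ in range(iters):
        # Gradient of the penalized objective
        grad = (y - yd) + gamma * np.maximum(y - b, 0.0)
        # Generalized derivative: identity plus gamma on the active set {y > b}
        hess_diag = 1.0 + gamma * (y > b)
        y = y - grad / hess_diag          # Newton step (diagonal Hessian here)
    return y

yd = np.array([0.5, 1.5, 2.5])
y = semismooth_newton(yd, b=1.0, gamma=1e4)
# Feasible components are untouched; violating components are pulled to
# (yd + gamma*b)/(1 + gamma), i.e. just above the bound b as gamma grows.
```

In the PDE context the Hessian of the penalized problem is a large sparse matrix rather than a diagonal, which is precisely why robust preconditioners for the Newton systems matter.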
A Bramble-Pasciak conjugate gradient method for discrete Stokes equations with random viscosity
We study the iterative solution of linear systems of equations arising from
stochastic Galerkin finite element discretizations of saddle point problems. We
focus on the Stokes model with random data parametrized by uniformly
distributed random variables and discuss well-posedness of the variational
formulations. We introduce a Bramble-Pasciak conjugate gradient method as a
linear solver. It builds on a non-standard inner product associated with a
block triangular preconditioner. The block triangular structure enables more
sophisticated preconditioners than the block diagonal structure usually applied
in MINRES methods. We show how the existence requirements of a conjugate
gradient method can be met in our setting. We analyze the performance of the
solvers depending on relevant physical and numerical parameters by means of
eigenvalue estimates. For this purpose, we derive bounds for the eigenvalues of
the relevant preconditioned sub-matrices. We illustrate our findings using the
flow in a driven cavity as a numerical test case, where the viscosity is given
by a truncated Karhunen-Lo\`eve expansion of a random field. In this example, a
Bramble-Pasciak conjugate gradient method with block triangular preconditioner
outperforms a MINRES method with block diagonal preconditioner in terms of
iteration numbers. (Comment: 19 pages, 1 figure, submitted to SIAM JU
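The Bramble-Pasciak method itself builds a conjugate gradient iteration on a non-standard inner product; as a simpler hedged illustration of the underlying mechanism (a toy dense example, not the paper's stochastic Galerkin setting), here is how a block lower triangular preconditioner P = [[A₀, 0], [B, −S₀]] is applied to a residual (r, s) by forward substitution, which is what makes it richer than a block diagonal application:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 30, 10
A = np.diag(rng.uniform(1.0, 2.0, n))   # SPD velocity block (viscosity-weighted)
B = rng.standard_normal((m, n))         # divergence constraint block

def apply_block_triangular(A0, B, S0, r, s):
    """Solve P [u; v] = [r; s] with P = [[A0, 0], [B, -S0]]."""
    u = np.linalg.solve(A0, r)           # first block row:  A0 u = r
    v = np.linalg.solve(S0, B @ u - s)   # second block row: B u - S0 v = s
    return u, v

A0 = np.diag(np.diag(A))                 # here A0 = A, since A is diagonal
S0 = B @ np.linalg.solve(A, B.T)         # exact Schur complement for the demo
r, s = np.ones(n), np.ones(m)
u, v = apply_block_triangular(A0, B, S0, r, s)
```

The forward substitution couples the two blocks through B, which is the extra information a block diagonal preconditioner discards; the price is that symmetry is lost in the standard inner product, which the Bramble-Pasciak construction recovers.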
Time-parallel iterative solvers for parabolic evolution equations
We present original time-parallel algorithms for the solution of the implicit
Euler discretization of general linear parabolic evolution equations with
time-dependent self-adjoint spatial operators. Motivated by the inf-sup theory
of parabolic problems, we show that the standard nonsymmetric time-global
system can be equivalently reformulated as an original symmetric saddle-point
system that remains inf-sup stable with respect to the same natural parabolic
norms. We then propose and analyse an efficient, non-intrusive, and readily
implementable parallel-in-time preconditioner to be used with an inexact Uzawa
method. The preconditioner enjoys robust spectral bounds, leading to
convergence rates that are independent of the number of time-steps, the final
time, and the spatial mesh size, as well as a theoretical parallel complexity
that grows only logarithmically with the number of time-steps. Numerical
experiments with large-scale parallel computations show the effectiveness of
the method, along with its good weak and strong scaling properties.
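For orientation, the inexact Uzawa iteration referenced above alternates a preconditioned update of the primal unknown with a preconditioned update of the multiplier. The sketch below is a generic dense toy (the paper's operators are parallel-in-time approximations, not the exact solves used here): given K = [[A, Bᵀ], [B, 0]] and preconditioners A₀ ≈ A, S₀ ≈ B A⁻¹ Bᵀ, iterate

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 40, 15
A = np.diag(rng.uniform(1.0, 3.0, n))   # SPD (1,1) block
B = rng.standard_normal((m, n))
f, g = np.ones(n), np.zeros(m)

A0 = A.copy()                            # exact here; an approximation in practice
S0 = B @ np.linalg.solve(A, B.T)         # Schur complement surrogate

x, y = np.zeros(n), np.zeros(m)
for _ in range(50):
    # Primal update: approximately solve A x = f - B^T y
    x = x + np.linalg.solve(A0, f - A @ x - B.T @ y)
    # Multiplier update driven by the constraint residual B x - g
    y = y + np.linalg.solve(S0, B @ x - g)

res = np.linalg.norm(np.concatenate([A @ x + B.T @ y - f, B @ x - g]))
```

With exact blocks the iteration converges in two sweeps; with inexact, parallel-in-time A₀ and S₀ the convergence rate is governed by the spectral bounds of the preconditioners, which is exactly what the robustness results above control.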
Fast interior point solution of quadratic programming problems arising from PDE-constrained optimization
Interior point methods provide an attractive class of approaches for solving linear, quadratic, and nonlinear programming problems, owing to their efficiency and wide applicability. In this paper, we consider PDE-constrained optimization problems with bound constraints on the state and control variables, and their representation on the discrete level as quadratic programming problems. To tackle complex problems and achieve high accuracy in the solution, one must solve matrix systems of huge scale arising from the Newton iteration, so fast and robust methods for these systems are required. We present preconditioned iterative techniques for solving a number of these problems using Krylov subspace methods, considering the circumstances in which rapid convergence of the solvers can be predicted in theory, as well as the convergence observed in practical computations.
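As a hedged, self-contained illustration of the interior point mechanics (a separable toy QP, standing in for the paper's PDE-constrained problems; the problem data and loop counts are invented for the demo): a log-barrier method replaces the bound x ≥ 0 by −μ Σᵢ log xᵢ, takes damped Newton steps on the barrier objective, and drives μ toward zero.

```python
import numpy as np

# Toy bound-constrained QP:  min 0.5 x^T Q x + c^T x  s.t.  x >= 0
rng = np.random.default_rng(3)
n = 10
Q = np.diag(rng.uniform(1.0, 2.0, n))    # SPD Hessian (diagonal toy)
c = rng.standard_normal(n)

x = np.ones(n)                            # strictly feasible start
mu = 1.0
for _ in range(25):                       # outer barrier loop: shrink mu
    for _ in range(20):                   # inner Newton loop at fixed mu
        grad = Q @ x + c - mu / x
        H = Q + np.diag(mu / x**2)        # Hessian of the barrier objective
        dx = np.linalg.solve(H, -grad)    # the "huge" Newton system in the PDE case
        # Fraction-to-boundary step length keeps x strictly positive
        neg = dx < 0
        if neg.any():
            alpha = min(1.0, 0.95 * np.min(-x[neg] / dx[neg]))
        else:
            alpha = 1.0
        x = x + alpha * dx
    mu *= 0.2
```

In the PDE-constrained setting the Newton systems `H dx = -grad` are the huge, increasingly ill-conditioned saddle point systems (the barrier terms blow up as μ → 0 on the active set), which is why preconditioned Krylov solvers tailored to that structure are the focus of the paper.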