On barrier and modified barrier multigrid methods for 3d topology optimization
One of the challenges encountered in optimization of mechanical structures,
in particular in what is known as topology optimization, is the size of the
problems, which can easily involve millions of variables. A basic example is
the minimum compliance formulation of the variable thickness sheet (VTS)
problem, which is equivalent to a convex problem. We propose to solve the VTS
problem by the Penalty-Barrier Multiplier (PBM) method, introduced by R.
Polyak and later studied by Ben-Tal and Zibulevsky and others. The most
computationally expensive part of the algorithm is the solution of linear
systems arising from the Newton method used to minimize a generalized augmented
Lagrangian. We use a special structure of the Hessian of this Lagrangian to
reduce the size of the linear system and to convert it to a form suitable for a
standard multigrid method. This converted system is solved approximately by a
multigrid preconditioned MINRES method. The proposed PBM algorithm is compared
with the optimality criteria (OC) method and an interior point (IP) method,
both using a similar iterative solver setup. We apply all three methods to
different loading scenarios. In our experiments, the PBM method clearly
outperforms the other methods in terms of computation time required to achieve
a certain degree of accuracy.
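A miniature illustration of the inner solver described above: preconditioned MINRES applied to a small symmetric positive definite model system. The 1-D model matrix and the Jacobi (diagonal) preconditioner below are illustrative stand-ins, not the reduced Hessian system or the multigrid preconditioner from the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical SPD model system standing in for the reduced Newton system:
# a 1-D second-difference stencil with a variable diagonal.
n = 100
main = 2.0 + np.linspace(1.0, 10.0, n)
A = sp.diags([-np.ones(n - 1), main, -np.ones(n - 1)], [-1, 0, 1], format="csr")
b = np.ones(n)

# Jacobi preconditioner M^{-1} v = v / diag(A); the paper uses multigrid here.
d = A.diagonal()
M = spla.LinearOperator((n, n), matvec=lambda v: v / d)

x, info = spla.minres(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))  # info == 0 signals convergence
```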
An asymptotically superlinearly convergent semismooth Newton augmented Lagrangian method for Linear Programming
Powerful commercial solvers based on interior-point methods (IPMs), such as
Gurobi and Mosek, have been hugely successful in solving large-scale linear
programming (LP) problems. The high efficiency of these solvers depends
critically on the sparsity of the problem data and advanced matrix
factorization techniques. For a large-scale LP problem with a data matrix
that is dense (possibly structured), or whose corresponding normal matrix
has a dense Cholesky factor (even with re-ordering), these solvers may require
excessive computational cost and/or extremely heavy memory usage in each
interior-point iteration. Unfortunately, the natural remedy, i.e., the use of
IPM solvers based on iterative methods, although able to avoid the explicit
computation of the coefficient matrix and its factorization, is not practically
viable due to the inherent extreme ill-conditioning of the large-scale normal
equation arising in each interior-point iteration. To provide a better
alternative for solving large-scale LPs with dense data or with normal
equations requiring expensive factorizations, we propose a semismooth Newton
based inexact proximal augmented Lagrangian (Snipal) method. Different
from classical IPMs, in each iteration of Snipal, iterative methods can
efficiently be used to solve simpler yet better-conditioned semismooth Newton
linear systems. Moreover, Snipal not only enjoys fast asymptotic
superlinear convergence but is also proven to possess a finite termination
property. Numerical comparisons with Gurobi have demonstrated the encouraging
potential of Snipal for handling large-scale LP problems where the
constraint matrix has a dense representation or has a dense
factorization even with an appropriate re-ordering.
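To make the augmented Lagrangian framework behind Snipal concrete, here is a toy sketch for a two-variable standard-form LP. The inner subproblem is solved by projected gradient steps purely for self-containedness; Snipal's actual contribution is to solve it by a semismooth Newton method, which is not reproduced here. All problem data and parameter values are invented for the example.

```python
import numpy as np

# Toy LP in standard form: min c^T x  s.t.  A x = b, x >= 0.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])               # optimal solution: x = (1, 0)

sigma = 10.0                           # penalty parameter (kept fixed here)
x = np.zeros(2)                        # primal iterate
y = np.zeros(1)                        # multiplier for A x = b

step = 1.0 / (sigma * np.linalg.norm(A.T @ A, 2))  # 1 / Lipschitz constant
for _ in range(50):                    # outer augmented Lagrangian iterations
    # inner subproblem: min_{x >= 0} c^T x - y^T(Ax - b) + (sigma/2)||Ax - b||^2
    for _ in range(200):
        g = c - A.T @ y + sigma * A.T @ (A @ x - b)
        x = np.maximum(x - step * g, 0.0)          # projected gradient step
    y = y - sigma * (A @ x - b)        # multiplier update

print(x)  # approaches the LP solution (1, 0)
```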
Some Preconditioning Techniques for Saddle Point Problems
Saddle point problems arise frequently in many applications in science and engineering, including constrained optimization, mixed finite element formulations of partial differential equations, circuit analysis, and so forth. Indeed the formulation of most problems with constraints gives rise to saddle point systems. This paper provides a concise overview of iterative approaches for the solution of such systems, which are of particular importance in the context of large-scale computation. In particular we describe some of the most useful preconditioning techniques for Krylov subspace solvers applied to saddle point problems, including block and constrained preconditioners.

The work of Michele Benzi was supported in part by the National Science Foundation grant DMS-0511336.
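A minimal instance of a block preconditioner of the kind surveyed here: for the saddle point matrix K = [[A, B^T], [B, 0]], take the block-diagonal P = diag(A, S) with S = B A^{-1} B^T the exact Schur complement. The random matrices below are illustrative, and forming S exactly is only feasible at this toy scale; in practice one substitutes cheap approximations of A and S.

```python
import numpy as np
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n, m = 30, 10
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)            # SPD (1,1) block
B = rng.standard_normal((m, n))        # full-rank constraint block

K = np.block([[A, B.T], [B, np.zeros((m, m))]])
S = B @ np.linalg.solve(A, B.T)        # exact Schur complement
P = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), S]])

rhs = rng.standard_normal(n + m)
Minv = spla.LinearOperator((n + m, n + m), matvec=lambda v: np.linalg.solve(P, v))

resids = []                            # count MINRES iterations via callback
x, info = spla.minres(K, rhs, M=Minv, callback=lambda xk: resids.append(1))
# with the exact Schur complement, MINRES needs only 3 iterations in exact arithmetic
print(len(resids), np.linalg.norm(K @ x - rhs))
```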
Natural preconditioners for saddle point systems
The solution of quadratic or locally quadratic extremum problems subject to linear(ized) constraints gives rise to linear systems in saddle point form. This is true whether in the continuous or discrete setting, so saddle point systems arising from discretization of partial differential equation problems such as those describing electromagnetic problems or incompressible flow lead to equations with this structure, as does, for example, the widely used sequential quadratic programming approach to nonlinear optimization.

This article concerns iterative solution methods for these problems and in particular shows how the problem formulation leads to natural preconditioners which guarantee rapid convergence of the relevant iterative methods. These preconditioners are related to the original extremum problem and their effectiveness -- in terms of rapidity of convergence -- is established here via a proof of general bounds on the eigenvalues of the preconditioned saddle point matrix on which iteration convergence depends.
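The flavor of such eigenvalue bounds can be checked numerically in a special case: with the "ideal" preconditioner P = diag(A, B A^{-1} B^T) built from the exact Schur complement, the preconditioned saddle point matrix is known to have only the three eigenvalues 1 and (1 ± sqrt(5))/2 (a classical result of Murphy, Golub and Wathen). The matrices below are random stand-ins chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 20, 8
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)            # SPD (1,1) block
B = rng.standard_normal((m, n))
S = B @ np.linalg.solve(A, B.T)        # exact Schur complement

K = np.block([[A, B.T], [B, np.zeros((m, m))]])
P = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), S]])

eigs = np.linalg.eigvals(np.linalg.solve(P, K))
targets = np.array([1.0, (1 + 5**0.5) / 2, (1 - 5**0.5) / 2])

# every eigenvalue matches one of the three predicted values (up to rounding)
dist = np.abs(eigs[:, None] - targets[None, :]).min(axis=1)
print(dist.max())
```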
A penalty method for PDE-constrained optimization in inverse problems
Many inverse and parameter estimation problems can be written as
PDE-constrained optimization problems. The goal, then, is to infer the
parameters, typically coefficients of the PDE, from partial measurements of the
solutions of the PDE for several right-hand-sides. Such PDE-constrained
problems can be solved by finding a stationary point of the Lagrangian, which
entails simultaneously updating the parameters and the (adjoint) state
variables. For large-scale problems, such an all-at-once approach is not
feasible as it requires storing all the state variables. In this case one
usually resorts to a reduced approach where the constraints are explicitly
eliminated (at each iteration) by solving the PDEs. These two approaches, and
variations thereof, are the main workhorses for solving PDE-constrained
optimization problems arising from inverse problems. In this paper, we present
an alternative method that aims to combine the advantages of both approaches.
Our method is based on a quadratic penalty formulation of the constrained
optimization problem. By eliminating the state variable, we develop an
efficient algorithm that has roughly the same computational complexity as the
conventional reduced approach while exploiting a larger search space. Numerical
results show that this method indeed reduces some of the non-linearity of the
problem and is less sensitive to the initial iterate.
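A stripped-down illustration of the quadratic penalty idea, with a trivial algebraic constraint standing in for the PDE: minimize ||x||^2 subject to x1 + x2 = 1 by minimizing the penalized objective for growing penalty parameters, and watch the constraint violation vanish like 1/rho. The data are invented for the example; this is not the paper's algorithm, only the penalty mechanism it builds on.

```python
import numpy as np

a = np.array([1.0, 1.0])               # constraint a^T x = 1; solution (0.5, 0.5)
for rho in [1.0, 10.0, 100.0, 1000.0]:
    # penalized objective: ||x||^2 + (rho/2)(a^T x - 1)^2
    # stationarity: 2x + rho * a * (a^T x - 1) = 0
    H = 2.0 * np.eye(2) + rho * np.outer(a, a)
    x = np.linalg.solve(H, rho * a)
    print(rho, x, a @ x - 1.0)         # violation shrinks like 1/rho
```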