Constraint interface preconditioning for topology optimization problems
The discretization of constrained nonlinear optimization problems arising in
the field of topology optimization yields algebraic systems which are
challenging to solve in practice, due to pathological ill-conditioning, strong
nonlinearity and size. In this work we propose a methodology which brings
together existing fast algorithms, namely, interior-point for the optimization
problem and a novel substructuring domain decomposition method for the ensuing
large-scale linear systems. The main contribution is the choice of interface
preconditioner which allows for the acceleration of the domain decomposition
method, leading to performance independent of problem size.Comment: To be published in SIAM J. Sci. Com
MM Algorithms for Geometric and Signomial Programming
This paper derives new algorithms for signomial programming, a generalization
of geometric programming. The algorithms are based on a generic principle for
optimization called the MM algorithm. In this setting, one can apply the
geometric-arithmetic mean inequality and a supporting hyperplane inequality to
create a surrogate function with parameters separated. Thus, unconstrained
signomial programming reduces to a sequence of one-dimensional minimization
problems. Simple examples demonstrate that the MM algorithm derived can
converge to a boundary point or to one point of a continuum of minimum points.
Conditions under which the minimum point is unique or occurs in the interior of
parameter space are proved for geometric programming. Convergence to an
interior point occurs at a linear rate. Finally, the MM framework easily
accommodates equality and inequality constraints of signomial type. For the
most important special case, constrained quadratic programming, the MM
algorithm involves very simple updates.

Comment: 16 pages, 1 figure
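A minimal sketch of the geometric-programming case described above: the weighted arithmetic-geometric mean inequality majorizes a posynomial by a surrogate that separates the variables, so each MM step reduces to one-dimensional minimizations. The problem data and iteration count are illustrative assumptions; signomial terms with negative coefficients would additionally need the supporting-hyperplane step, which this sketch omits.

```python
import numpy as np
from scipy.optimize import minimize_scalar

c = np.array([1.0, 1.0, 1.0])             # positive monomial coefficients
Aexp = np.array([[1.0, 0.0],              # exponents: f(x) = x1 + x2 + 1/(x1*x2)
                 [0.0, 1.0],
                 [-1.0, -1.0]])

def f(x):
    return float(c @ np.prod(x ** Aexp, axis=1))

x = np.array([2.0, 0.5])                  # any positive starting point
s = np.abs(Aexp).sum(axis=1)              # s_k = ||a_k||_1
for _ in range(200):
    m = c * np.prod(x ** Aexp, axis=1)    # monomial values at current iterate
    x_new = x.copy()
    for j in range(x.size):
        a = Aexp[:, j]
        mask = a != 0
        if not mask.any():
            continue
        # Surrogate in x_j alone:
        #   sum_k m_k (|a_kj|/s_k) (x_j/x_j^old)^(sign(a_kj) s_k),
        # a majorizer of f by weighted AM-GM, tight at the current iterate.
        coef = m[mask] * np.abs(a[mask]) / s[mask]
        powr = np.sign(a[mask]) * s[mask]
        # Minimize over t = log(x_j/x_j^old); the surrogate is convex in t.
        g = lambda t: float(coef @ np.exp(powr * t))
        x_new[j] = x[j] * np.exp(minimize_scalar(g).x)
    x = x_new

print(x, f(x))  # converges to the interior minimizer (1, 1) with value 3
```

Each sweep updates all coordinates from the same surrogate, so the objective decreases monotonically; on this toy problem the convergence is linear, consistent with the rate claimed in the abstract.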
Global rates of convergence for nonconvex optimization on manifolds
We consider the minimization of a cost function on a manifold using
Riemannian gradient descent and Riemannian trust regions (RTR). We focus on
satisfying necessary optimality conditions within a tolerance ε.
Specifically, we show that, under Lipschitz-type assumptions on the pullbacks
of f to the tangent spaces of M, both of these algorithms produce points
with Riemannian gradient smaller than ε in O(1/ε²) iterations. Furthermore,
RTR returns a point where also the Riemannian Hessian's least eigenvalue is
larger than -ε in O(1/ε³) iterations. There are no assumptions on
initialization. The rates match their (sharp) unconstrained counterparts as a
function of the accuracy ε (up to constants) and hence are sharp in that
sense. These are the first deterministic results for global rates of
convergence to approximate first- and second-order Karush-Kuhn-Tucker points
on manifolds. They apply in particular for optimization constrained to
compact submanifolds of ℝⁿ, under simpler assumptions.

Comment: 33 pages, IMA Journal of Numerical Analysis, 201
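A minimal sketch of Riemannian gradient descent on a compact submanifold, here the unit sphere: project the Euclidean gradient onto the tangent space, step, and retract back onto the manifold. The cost function (a Rayleigh quotient) and step size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimize f(x) = x^T A x over the unit sphere ||x|| = 1, a compact
# submanifold of R^n; the minimizer is the eigenvector of the smallest
# eigenvalue of A.
A = np.diag([1.0, 2.0, 3.0, 4.0, 5.0])    # smallest eigenvalue is 1
x = np.random.default_rng(0).normal(size=5)
x /= np.linalg.norm(x)                    # arbitrary start on the sphere

eta = 0.05                                # step size (assumed)
for _ in range(2000):
    egrad = 2.0 * A @ x                   # Euclidean gradient of x^T A x
    rgrad = egrad - (x @ egrad) * x       # project onto tangent space at x
    x = x - eta * rgrad                   # gradient step in the tangent space
    x /= np.linalg.norm(x)                # retraction: renormalize

print(float(x @ A @ x))  # approaches the smallest eigenvalue, 1.0
```

The projection-then-renormalize pair is one standard retraction on the sphere; the abstract's rates concern how many such iterations are needed before the Riemannian gradient (here `rgrad`) falls below the tolerance ε, regardless of the starting point.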