A sequential semidefinite programming method and an application in passive reduced-order modeling
We consider the solution of nonlinear programs with nonlinear
semidefiniteness constraints. The need for an efficient exploitation of the
cone of positive semidefinite matrices makes the solution of such nonlinear
semidefinite programs more complicated than the solution of standard nonlinear
programs. In particular, a suitable symmetrization procedure needs to be chosen
for the linearization of the complementarity condition. The choice of the
symmetrization procedure can be shifted in a very natural way to certain linear
semidefinite subproblems, and can thus be reduced to a well-studied problem.
The resulting sequential semidefinite programming (SSP) method is a
generalization of the well-known SQP method for standard nonlinear programs. We
present a sensitivity result for nonlinear semidefinite programs, and then
based on this result, we give a self-contained proof of local quadratic
convergence of the SSP method. We also describe a class of nonlinear
semidefinite programs that arise in passive reduced-order modeling, and we
report results of some numerical experiments with the SSP method applied to
problems in that class.
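To make the iteration concrete, here is a minimal sketch of an SSP-style step in Python with cvxpy: at each iterate the nonlinear matrix constraint is linearized and a linear SDP subproblem yields the step, in direct analogy with SQP. The toy objective, the constraint A(x), the fixed Hessian model, and the step safeguard are illustrative assumptions, not taken from the paper.

import numpy as np
import cvxpy as cp

def f(x):                        # smooth objective (illustrative)
    return (x[0] - 2.0)**2 + (x[1] - 1.0)**2

def grad_f(x):
    return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])

def A(x):                        # nonlinear matrix constraint A(x) PSD (illustrative)
    return np.array([[x[0],      x[0]*x[1]],
                     [x[0]*x[1], x[1]]])

def dA(x):                       # partial derivatives dA/dx_i
    return [np.array([[1.0, x[1]], [x[1], 0.0]]),
            np.array([[0.0, x[0]], [x[0], 1.0]])]

x = np.array([1.5, 0.5])         # strictly feasible starting point
H = np.eye(2)                    # crude model of the Hessian of the Lagrangian

for k in range(25):
    d = cp.Variable(2)
    # Linear SDP subproblem: linearize the matrix constraint around x.
    lin = A(x) + sum(d[i] * dA(x)[i] for i in range(2))
    sub = cp.Problem(cp.Minimize(grad_f(x) @ d + 0.5 * cp.quad_form(d, H)),
                     [lin >> 0, cp.norm(d, 2) <= 1.0])  # crude step safeguard
    sub.solve(solver=cp.SCS)
    if d.value is None:          # subproblem infeasible; no globalization here
        break
    x = x + d.value
    if np.linalg.norm(d.value) < 1e-8:
        break

print("approximate KKT point:", x, "objective:", f(x))

A practical implementation would update H from Lagrange multiplier estimates and add a line search or trust region; the fixed H = I above is only a placeholder.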
Sequential Convex Programming Methods for Solving Nonlinear Optimization Problems with DC constraints
This paper investigates the relation between sequential convex programming
(SCP), as defined, e.g., in [24], and DC (difference of two convex functions)
programming. We first present an SCP algorithm for solving nonlinear
optimization problems with DC constraints and prove its convergence. Then we
combine the proposed algorithm with a relaxation technique to handle
inconsistent linearizations. Numerical tests are performed to investigate the
behaviour of the class of algorithms.
Comment: 18 pages, 1 figure
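As a concrete illustration of the convexification step, the sketch below applies an SCP iteration of the convex-concave flavor to a toy DC constraint: projecting a point onto an annulus, where the reverse ball constraint 1 - ||x||^2 <= 0 is the difference of the convex functions u(x) = 1 and v(x) = ||x||^2, and v is replaced by its linearization at the current iterate. The example and stopping rule are assumptions for illustration; the paper's relaxation of inconsistent linearizations (via penalized slacks) is omitted.

import numpy as np
import cvxpy as cp

c = np.array([0.2, 0.3])        # target point inside the inner ball
x_k = np.array([1.5, 0.0])      # feasible starting point (1 <= ||x|| <= 2)

for k in range(30):
    x = cp.Variable(2)
    constraints = [
        cp.sum_squares(x) <= 4.0,                            # convex as-is
        1.0 - (x_k @ x_k + 2.0 * x_k @ (x - x_k)) <= 0.0,    # v linearized at x_k
    ]
    prob = cp.Problem(cp.Minimize(cp.sum_squares(x - c)), constraints)
    prob.solve()
    if np.linalg.norm(x.value - x_k) < 1e-8:
        break
    x_k = x.value

# For this toy problem the iterates approach the projection of c onto the
# unit circle, i.e., c / ||c||.
print("iterate:", x_k, "expected limit:", c / np.linalg.norm(c))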
Global rates of convergence for nonconvex optimization on manifolds
We consider the minimization of a cost function f on a manifold M using
Riemannian gradient descent and Riemannian trust regions (RTR). We focus on
satisfying necessary optimality conditions within a tolerance ε.
Specifically, we show that, under Lipschitz-type assumptions on the pullbacks
of f to the tangent spaces of M, both of these algorithms produce points
with Riemannian gradient smaller than ε in O(1/ε^2)
iterations. Furthermore, RTR returns a point where also the Riemannian
Hessian's least eigenvalue is larger than -ε in O(1/ε^3)
iterations. There are no assumptions on initialization.
The rates match their (sharp) unconstrained counterparts as a function of the
accuracy ε (up to constants) and hence are sharp in that sense.
These are the first deterministic results for global rates of convergence to
approximate first- and second-order Karush-Kuhn-Tucker points on manifolds.
They apply in particular for optimization constrained to compact submanifolds
of R^n, under simpler assumptions.
Comment: 33 pages, IMA Journal of Numerical Analysis, 201
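For intuition, here is a minimal sketch of Riemannian gradient descent on the unit sphere, a compact submanifold of R^n: the Euclidean gradient is projected onto the tangent space and the step is mapped back to the manifold by a retraction (here, normalization). The Rayleigh-quotient cost, the fixed step size, and the tolerance are illustrative assumptions, not quantities from the paper.

import numpy as np

rng = np.random.default_rng(0)
n = 50
B = rng.standard_normal((n, n))
A = (B + B.T) / 2.0                  # symmetric matrix; cost f(x) = x^T A x

def f(x):
    return x @ A @ x

def riemannian_grad(x):
    g = 2.0 * A @ x                  # Euclidean gradient
    return g - (x @ g) * x           # project onto the tangent space at x

def retract(x, v):                   # metric projection retraction
    y = x + v
    return y / np.linalg.norm(y)

x = rng.standard_normal(n)
x /= np.linalg.norm(x)
step = 0.5 / np.linalg.norm(A, 2)    # ~1/L for the gradient Lipschitz constant L

for k in range(5000):
    g = riemannian_grad(x)
    if np.linalg.norm(g) < 1e-8:     # approximate first-order criticality
        break
    x = retract(x, -step * g)

# Minimizing the Rayleigh quotient on the sphere recovers the smallest eigenvalue.
print("f(x):", f(x), "lambda_min(A):", np.linalg.eigvalsh(A)[0])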
A Riemannian low-rank method for optimization over semidefinite matrices with block-diagonal constraints
We propose a new algorithm to solve optimization problems of the form
min f(X) for a smooth function f, under the constraints that X is positive
semidefinite and the diagonal blocks of X are small identity matrices. Such
problems often arise as the result of relaxing a rank constraint (lifting). In
particular, many estimation tasks involving phases, rotations, orthonormal
bases or permutations fit in this framework, and so do certain relaxations of
combinatorial problems such as Max-Cut. The proposed algorithm exploits the
facts that (1) such formulations admit low-rank solutions, and (2) their
rank-restricted versions are smooth optimization problems on a Riemannian
manifold. Combining insights from both the Riemannian and the convex geometries
of the problem, we characterize when second-order critical points of the smooth
problem reveal KKT points of the semidefinite problem. We compare against
state-of-the-art, mature software and find that, on certain interesting problem
instances, what we call the staircase method is orders of magnitude faster, is
more accurate, and scales better. Code is available.
Comment: 37 pages, 3 figures
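Below is a minimal sketch of the low-rank idea behind the staircase method, specialized to the case of 1x1 diagonal blocks (diag(X) = 1, as in the Max-Cut relaxation): factor X = Y Y^T with unit-norm rows of Y and run Riemannian gradient descent on the resulting product of spheres. The random cost matrix, rank choice, and step size are assumptions for illustration; the rank-escalation ("staircase") step and the second-order certificate are only indicated in comments.

import numpy as np

rng = np.random.default_rng(1)
n, p = 30, 4                          # rank p ~ O(sqrt(n)) suffices in theory
C = rng.standard_normal((n, n))
C = (C + C.T) / 2.0                   # random symmetric cost (stand-in for graph data)

def cost(Y):                          # <C, Y Y^T>
    return np.trace(C @ Y @ Y.T)

def riemannian_grad(Y):
    G = 2.0 * C @ Y                   # Euclidean gradient of the cost
    # Project each row onto the tangent space of its unit sphere.
    return G - np.sum(G * Y, axis=1, keepdims=True) * Y

Y = rng.standard_normal((n, p))
Y /= np.linalg.norm(Y, axis=1, keepdims=True)
step = 0.5 / np.linalg.norm(C, 2)

for k in range(5000):
    G = riemannian_grad(Y)
    if np.linalg.norm(G) < 1e-8:      # certifying second-order points needs more care
        break
    Y -= step * G
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)   # retraction: renormalize rows

# Staircase step (sketched): at a second-order critical Y that is rank
# deficient, X = Y Y^T is a KKT point of the SDP; otherwise append a zero
# column to Y (rank p + 1) and reoptimize.
print("cost:", cost(Y), "numerical rank of Y:", np.linalg.matrix_rank(Y))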