On representations of the feasible set in convex optimization
We consider the convex optimization problem of minimizing a convex function over a feasible set K, where K is convex and Slater's condition holds, but the functions defining K are not necessarily convex. We show that for any representation of K that satisfies a mild nondegeneracy assumption, every minimizer is a Karush-Kuhn-Tucker (KKT) point and, conversely, every KKT point is a minimizer. That is, the KKT optimality conditions are necessary and sufficient, just as in convex programming, where one assumes that the defining functions are convex. So in convex optimization, as far as one is concerned with KKT points, what really matters is the geometry of K and not so much its representation.
Comment: to appear in Optimization Letters
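The abstract's claim can be illustrated on a toy instance. The example below is a hedged sketch, not taken from the paper: the function names f, g and the multiplier lam are hypothetical, chosen so that the feasible set K = {x : g(x) >= 0} is convex while g itself is not concave, and the representation is nondegenerate (g'(x) != 0 on the boundary of K).

```python
# Hypothetical instance (not from the paper): minimize the convex objective
# f(x) = (x + 1)^2 over K = {x : g(x) >= 0}, with g(x) = x^3 + x.
# g is NOT concave on K, yet {g >= 0} = [0, inf) is a convex set, and
# g'(0) = 1 != 0, so the representation is nondegenerate at the boundary.

def f(x):
    return (x + 1.0) ** 2

def df(x):
    return 2.0 * (x + 1.0)

def g(x):
    return x ** 3 + x

def dg(x):
    return 3.0 * x ** 2 + 1.0

# The minimizer of f over K = [0, inf) is x* = 0 (the unconstrained
# minimizer x = -1 is infeasible).  Check the KKT conditions there:
x_star = 0.0
lam = df(x_star) / dg(x_star)   # stationarity: df(x*) = lam * dg(x*)

assert lam >= 0.0                                   # dual feasibility
assert abs(lam * g(x_star)) < 1e-12                 # complementary slackness
assert abs(df(x_star) - lam * dg(x_star)) < 1e-12   # stationarity
print("KKT holds at x* =", x_star, "with multiplier", lam)
```

Here the KKT system is satisfied with lam = 2 even though g is nonconvex, matching the abstract's point that the geometry of K, not its representation, governs the KKT characterization.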
Convergence of the Lasserre Hierarchy of SDP Relaxations for Convex Polynomial Programs without Compactness
The Lasserre hierarchy of semidefinite programming (SDP) relaxations is an
effective scheme for finding computationally feasible SDP approximations of
polynomial optimization over compact semi-algebraic sets. In this paper, we
show that, for convex polynomial optimization, the Lasserre hierarchy with a
slightly extended quadratic module always converges asymptotically even in the
face of non-compact semi-algebraic feasible sets. We do this by exploiting a
coercivity property of convex polynomials that are bounded below. We further
establish that the positive definiteness of the Hessian of the associated
Lagrangian at a saddle-point (rather than the objective function at each
minimizer) guarantees finite convergence of the hierarchy. We obtain finite
convergence by first establishing a new sum-of-squares polynomial
representation of convex polynomials over convex semi-algebraic sets under a
saddle-point condition. We finally prove that the existence of a saddle-point
of the Lagrangian for a convex polynomial program is also necessary for the
hierarchy to have finite convergence.
Comment: 17 pages
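The hierarchy rests on the basic link between sum-of-squares (SOS) representations and semidefinite programming: a polynomial p is SOS exactly when p = v^T Q v for a positive semidefinite "Gram" matrix Q over a monomial vector v. The sketch below is an illustrative hand-worked instance, not the paper's construction; the polynomial and the certificate vector c are hypothetical choices.

```python
# Minimal sketch of the SOS <-> PSD-Gram-matrix link underlying the
# Lasserre hierarchy.  Hypothetical example (not from the paper):
#   p(x) = x^4 + 4x^3 + 6x^2 + 4x + 1,  monomial vector v = [1, x, x^2].

def p(x):
    return x ** 4 + 4 * x ** 3 + 6 * x ** 2 + 4 * x + 1

# One feasible Gram matrix is the rank-one choice Q = c c^T with
# c = [1, 2, 1], which is PSD by construction and certifies
#   p(x) = (1 + 2x + x^2)^2.
c = [1.0, 2.0, 1.0]

def sos_certificate(x):
    v = [1.0, x, x ** 2]
    return sum(ci * vi for ci, vi in zip(c, v)) ** 2

# Spot-check the identity p(x) = (v . c)^2 at a few points.
for x in [-2.0, -1.0, 0.0, 1.0, 3.0]:
    assert abs(p(x) - sos_certificate(x)) < 1e-9
print("p(x) = (1 + 2x + x^2)^2 certifies p >= 0 on all of R")
```

In an actual relaxation the Gram matrix is not guessed but found by an SDP solver; the abstract's contribution concerns when such certificates exist over non-compact convex semi-algebraic sets and when the hierarchy reaches them at a finite level.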
Symmetry groups, semidefinite programs, and sums of squares
We investigate the representation of symmetric polynomials as a sum of
squares. Since this task is solved using semidefinite programming tools we
explore the geometric, algebraic, and computational implications of the
presence of discrete symmetries in semidefinite programs. It is shown that
symmetry exploitation allows a significant reduction in both matrix size and
number of decision variables. This result is applied to semidefinite programs
arising from the computation of sum of squares decompositions for multivariate
polynomials. The results, reinterpreted from an invariant-theoretic viewpoint,
provide a novel representation of a class of nonnegative symmetric polynomials.
The main theorem states that an invariant sum of squares polynomial is a sum of
inner products of pairs of matrices, whose entries are invariant polynomials.
In these pairs, one of the matrices is computed based on the real irreducible
representations of the group, and the other is a sum of squares matrix. The
reduction techniques enable the numerical solution of large-scale instances that would otherwise be computationally infeasible.
Comment: 38 pages, submitted
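The size reduction described above can be seen on a one-variable toy case. This is a hedged illustration, not the paper's general construction: for a polynomial invariant under the symmetry x -> -x, the Gram matrix of an SOS decomposition can be chosen block-diagonal, with even monomials and odd monomials never mixing. The polynomial and the blocks below are hypothetical choices.

```python
# Hypothetical example: p(x) = x^4 + 3x^2 + 1 is invariant under x -> -x.
# Over the monomials [1, x, x^2] an SOS certificate would need one 3x3 PSD
# Gram matrix; symmetry splits it into a 2x2 block on the even monomials
# [1, x^2] and a 1x1 block on the odd monomial [x].

def p(x):
    return x ** 4 + 3 * x ** 2 + 1

def sos_via_blocks(x):
    # Even block [[1, 1], [1, 1]] (rank one, PSD) on [1, x^2] -> (1 + x^2)^2
    even = (1 + x ** 2) ** 2
    # Odd block [1] (trivially PSD) on [x] -> x^2
    odd = x ** 2
    return even + odd

# Verify the block-diagonal certificate p(x) = (1 + x^2)^2 + x^2 pointwise.
for x in [-1.5, 0.0, 0.7, 2.0]:
    assert abs(p(x) - sos_via_blocks(x)) < 1e-9
print("block-diagonal SOS certificate verified")
```

The same principle, applied through the real irreducible representations of larger symmetry groups, is what drives the matrix-size and variable-count reductions the abstract reports.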