Simple Approximations of Semialgebraic Sets and their Applications to Control
Many uncertainty sets encountered in control systems analysis and design can
be expressed in terms of semialgebraic sets, that is as the intersection of
sets described by means of polynomial inequalities. Important examples are for
instance the solution set of linear matrix inequalities or the Schur/Hurwitz
stability domains. These sets often have very complicated shapes (non-convex,
and even non-connected), which makes them difficult to manipulate. It is
therefore of considerable importance to find simple-enough approximations of
these sets, able to capture their main characteristics while maintaining a low
level of complexity. For these reasons, several convex approximations, based
for instance on hyperrectangles, polytopes, or ellipsoids, have been proposed
in recent years. In this work, we move a step further and propose possibly
non-convex approximations, based on a small-volume polynomial
superlevel set of a single positive polynomial of given degree. We show how
these sets can be easily approximated by minimizing the L1 norm of the
polynomial over the semialgebraic set, subject to positivity constraints.
Intuitively, this corresponds to the trace-minimization heuristic commonly
encountered in minimum-volume ellipsoid problems. From a computational viewpoint,
we design a hierarchy of linear matrix inequality problems to generate these
approximations, and we provide theoretically rigorous convergence results, in
the sense that the hierarchy of outer approximations converges in volume (or,
equivalently, almost everywhere and almost uniformly) to the original set. Two
main applications of the proposed approach are considered. The first one aims
at reconstruction/approximation of sets from a finite number of samples. In the
second one, we show how the concept of polynomial superlevel set can be used to
generate samples uniformly distributed on a given semialgebraic set. The
efficiency of the proposed approach is demonstrated by different numerical
examples.
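As a rough illustration of the L1-norm heuristic described above, the following sketch approximates a (disconnected) one-dimensional semialgebraic set K by the superlevel set {p >= 1} of a low-degree polynomial. Instead of the paper's LMI hierarchy, it uses a simple grid discretization and a linear program: since p is constrained nonnegative on the bounding box, its L1 norm reduces to a sum over grid points. The example set and degree are hypothetical choices, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Approximate K = {x in B : g(x) >= 0} inside the box B = [-1, 1] by the
# superlevel set {x : p(x) >= 1} of a degree-6 polynomial p, minimizing the
# (discretized) L1 norm of p over B subject to p >= 1 on K and p >= 0 on B.

deg = 6                                   # degree of the approximating polynomial
grid = np.linspace(-1.0, 1.0, 201)        # discretization of the box B
V = np.vander(grid, deg + 1)              # Vandermonde matrix: V @ c = p(grid)

g = 0.25 - (np.abs(grid) - 0.6) ** 2      # example defining polynomial of K
in_K = g >= 0                             # K is non-convex and disconnected here

# minimize sum_i p(x_i)  subject to  p >= 1 on K  and  p >= 0 on B
c_obj = V.sum(axis=0)
A_ub = np.vstack([-V[in_K], -V])          # -p(x) <= -1 on K,  -p(x) <= 0 on B
b_ub = np.concatenate([-np.ones(in_K.sum()), np.zeros(len(grid))])

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * (deg + 1))
p = V @ res.x                              # {p >= 1} is an outer approximation of K
```

By construction, every grid point of K lands inside the superlevel set, while minimizing the L1 norm shrinks the excess volume, which mirrors the volume-minimization role the abstract attributes to the L1 objective.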
Immunizing Conic Quadratic Optimization Problems Against Implementation Errors
We show that the robust counterpart of a convex quadratic constraint with ellipsoidal implementation error is equivalent to a system of conic quadratic constraints. To prove this result we first derive a sharper result for the S-lemma in case the two matrices involved can be simultaneously diagonalized. This extension of the S-lemma may also be useful for other purposes. We extend the result to the case in which the uncertainty region is the intersection of two convex quadratic inequalities. The robust counterpart for this case is also equivalent to a system of conic quadratic constraints. Results for convex conic quadratic constraints with implementation error are also given. We conclude by showing how the theory developed can be applied in robust linear optimization with jointly uncertain parameters and implementation errors, in sequential robust quadratic programming, in Taguchi's robust approach, and in the adjustable robust counterpart.

Keywords: conic quadratic program; hidden convexity; implementation error; robust optimization; simultaneous diagonalizability; S-lemma
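For context, the classical S-lemma that this abstract sharpens can be stated as follows (this is the standard form; the refinement under simultaneous diagonalizability is the paper's contribution):

```latex
\text{Let } f(x) = x^{\top} A x + 2a^{\top}x + \alpha
\text{ and } g(x) = x^{\top} B x + 2b^{\top}x + \beta
\text{ be quadratic functions, and suppose } g(\bar{x}) > 0
\text{ for some } \bar{x}. \text{ Then}
\bigl( g(x) \ge 0 \;\Rightarrow\; f(x) \ge 0 \bigr)
\;\Longleftrightarrow\;
\exists\, \lambda \ge 0 :\; f(x) - \lambda\, g(x) \ge 0 \quad \forall x .
```

The "only if" direction is the nontrivial one; it is what lets a robust constraint over an ellipsoidal uncertainty set be rewritten as a single tractable (conic quadratic or semidefinite) condition.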
Algorithms for the statistical design of electrical circuits
Manifold Optimization Over the Set of Doubly Stochastic Matrices: A Second-Order Geometry
Convex optimization is a well-established research area with applications in
almost all fields. Over the decades, multiple approaches have been proposed to
solve convex programs. The development of interior-point methods allowed
solving a more general set of convex programs known as semi-definite programs
and second-order cone programs. However, it has been established that these
methods are excessively slow for high dimensions, i.e., they suffer from the
curse of dimensionality. On the other hand, optimization algorithms on
manifolds have shown great ability in finding solutions to nonconvex problems in
reasonable time. This paper solves a subset of convex optimization
problems using a different approach. The main idea behind Riemannian
optimization is to view the constrained optimization problem as an
unconstrained one over a restricted search space. The paper introduces three
manifolds to solve convex programs under particular box constraints. The
manifolds, called the doubly stochastic, the symmetric, and the definite
multinomial manifolds, generalize the simplex, also known as the multinomial
manifold. The
proposed manifolds and algorithms are well-adapted to solving convex programs
in which the variable of interest is a multidimensional probability
distribution function. Theoretical analysis and simulation results attest to
the efficiency of the proposed method over state-of-the-art methods. In
particular, they reveal that the proposed framework outperforms conventional
generic and specialized solvers, especially in high dimensions.
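To make the "constrained problem as an unconstrained one over a restricted search space" idea concrete, here is a minimal first-order sketch of optimizing over doubly stochastic matrices. It is not the paper's second-order method: it takes a multiplicative gradient step in the positive orthant and retracts onto the doubly stochastic set with Sinkhorn-Knopp normalization. The objective f(X) = ||X - C||_F^2 is a hypothetical example.

```python
import numpy as np

def sinkhorn(X, n_iter=200):
    """Alternately normalize rows and columns until X is ~doubly stochastic."""
    for _ in range(n_iter):
        X = X / X.sum(axis=1, keepdims=True)   # unit row sums
        X = X / X.sum(axis=0, keepdims=True)   # unit column sums
    return X

rng = np.random.default_rng(0)
n = 5
C = rng.random((n, n))                         # data for the example objective
X = np.full((n, n), 1.0 / n)                   # start at the uniform matrix

for _ in range(500):
    grad = 2.0 * (X - C)                       # Euclidean gradient of ||X - C||_F^2
    X = X * np.exp(-0.1 * grad)                # multiplicative step keeps X > 0
    X = sinkhorn(X)                            # retract onto the doubly stochastic set

# X now approximately minimizes the objective over doubly stochastic matrices
```

The iterates never leave the feasible set, so no projection cone or dual multipliers are needed; this is the flavor of benefit manifold methods offer, though the paper's manifolds come with full Riemannian (second-order) geometry rather than this heuristic retraction.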