Convex inner approximations of nonconvex semialgebraic sets applied to fixed-order controller design
We describe an elementary algorithm to build convex inner approximations of
nonconvex sets. Both input and output sets are basic semialgebraic sets given
as lists of defining multivariate polynomials. Even though no optimality
guarantees can be given (e.g. in terms of volume maximization for bounded
sets), the algorithm is designed to preserve convex boundaries as much as
possible, while removing regions with concave boundaries. In particular, the
algorithm leaves invariant a given convex set. The algorithm is based on
Gloptipoly 3, a public-domain Matlab package solving nonconvex polynomial
optimization problems with the help of convex semidefinite programming
(optimization over linear matrix inequalities, or LMIs). We illustrate how the
algorithm can be used to design fixed-order controllers for linear systems,
following a polynomial approach.
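As a toy sketch of the objects the abstract describes (hypothetical sets, not the paper's algorithm or GloptiPoly code): both input and output are basic semialgebraic sets given as lists of defining polynomial inequalities g_i(x) >= 0, and the output must be contained in the input while keeping convex boundary pieces.

```python
import numpy as np

# Basic semialgebraic sets as lists of defining polynomials g_i(x) >= 0.
# Hypothetical example: a nonconvex annulus-like set and a convex subset.
nonconvex_set = [lambda x: 1.0 - x[0]**2 - x[1]**2,     # inside the unit disk
                 lambda x: x[0]**2 + x[1]**2 - 0.25]    # outside the 1/2-disk

convex_inner = [lambda x: 1.0 - x[0]**2 - x[1]**2,      # convex boundary kept
                lambda x: x[0] - 0.5]                   # halfspace replacing
                                                        # the concave boundary

def contains(g_list, x):
    """Membership test for a basic semialgebraic set."""
    return all(g(x) >= 0.0 for g in g_list)

# Sample check that the convex set is indeed an inner approximation.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(2000, 2))
inner_pts = [p for p in pts if contains(convex_inner, p)]
ok = all(contains(nonconvex_set, p) for p in inner_pts)
```

Here the convex halfspace x_0 >= 0.5 guarantees x_0^2 + x_1^2 >= 0.25, so the concave inner boundary is avoided while the convex outer boundary of the disk is preserved exactly.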
OSQP: An Operator Splitting Solver for Quadratic Programs
We present a general-purpose solver for convex quadratic programs based on
the alternating direction method of multipliers, employing a novel operator
splitting technique that requires the solution of a quasi-definite linear
system with the same coefficient matrix at almost every iteration. Our
algorithm is very robust, placing no requirements on the problem data such as
positive definiteness of the objective function or linear independence of the
constraint functions. It can be configured to be division-free once an initial
matrix factorization is carried out, making it suitable for real-time
applications in embedded systems. In addition, our technique is the first
operator splitting method for quadratic programs able to reliably detect primal
and dual infeasible problems from the algorithm iterates. The method also
supports factorization caching and warm starting, making it particularly
efficient when solving parametrized problems arising in finance, control, and
machine learning. Our open-source C implementation OSQP has a small footprint,
is library-free, and has been extensively tested on many problem instances from
a wide variety of application areas. It is typically ten times faster than
competing interior-point methods, and sometimes much more when factorization
caching or warm start is used. OSQP has already shown a large impact with tens
of thousands of users both in academia and in large corporations.
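A minimal numpy/scipy sketch of the operator splitting described above, under simplifying assumptions (no relaxation, fixed step parameter rho; function and variable names are illustrative, not OSQP's API). The quasi-definite KKT matrix is factorized once and the factorization is reused every iteration, which is the point the abstract emphasizes:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def admm_qp(P, q, A, l, u, rho=1.0, sigma=1e-6, iters=5000):
    """OSQP-style ADMM sketch for  min 0.5 x'Px + q'x  s.t.  l <= Ax <= u."""
    n, m = P.shape[0], A.shape[0]
    # Quasi-definite KKT matrix: factorized once, reused at every iteration.
    K = np.block([[P + sigma * np.eye(n), A.T],
                  [A, -np.eye(m) / rho]])
    K_lu = lu_factor(K)
    x, z, y = np.zeros(n), np.zeros(m), np.zeros(m)
    for _ in range(iters):
        rhs = np.concatenate([sigma * x - q, z - y / rho])
        sol = lu_solve(K_lu, rhs)
        x, nu = sol[:n], sol[n:]
        z_tilde = z + (nu - y) / rho          # equals A @ x
        z = np.clip(z_tilde + y / rho, l, u)  # project onto [l, u]
        y = y + rho * (z_tilde - z)           # dual update
    return x

# Toy QP (illustrative data); the optimum is x = (0.3, 0.7)
P = np.array([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
l = np.array([1.0, 0.0, 0.0])
u = np.array([1.0, 0.7, 0.7])
x = admm_qp(P, q, A, l, u)
```

The dense LU factorization stands in for the sparse quasi-definite LDL' factorization a production solver would use; the structure of the loop (one cached factorization, one projection, one dual update per iteration) is what makes the method division-free after setup.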
Computational Complexity versus Statistical Performance on Sparse Recovery Problems
We show that several classical quantities controlling compressed sensing
performance directly match classical parameters controlling algorithmic
complexity. We first describe linearly convergent restart schemes on
first-order methods solving a broad range of compressed sensing problems, where
sharpness at the optimum controls convergence speed. We show that for sparse
recovery problems, this sharpness can be written as a condition number, given
by the ratio between true signal sparsity and the largest signal size that can
be recovered by the observation matrix. In a similar vein, Renegar's condition
number is a data-driven complexity measure for convex programs, generalizing
classical condition numbers for linear systems. We show that for a broad class
of compressed sensing problems, the worst case value of this algorithmic
complexity measure taken over all signals matches the restricted singular value
of the observation matrix which controls robust recovery performance. Overall,
this means in both cases that, in compressed sensing problems, a single
parameter directly controls both computational complexity and recovery
performance. Numerical experiments illustrate these points using several
classical algorithms.
Comment: Final version, to appear in Information and Inference.
On barrier and modified barrier multigrid methods for 3d topology optimization
One of the challenges encountered in optimization of mechanical structures,
in particular in what is known as topology optimization, is the size of the
problems, which can easily involve millions of variables. A basic example is
the minimum compliance formulation of the variable thickness sheet (VTS)
problem, which is equivalent to a convex problem. We propose to solve the VTS
problem by the Penalty-Barrier Multiplier (PBM) method, introduced by R.
Polyak and later studied by Ben-Tal and Zibulevsky and others. The most
computationally expensive part of the algorithm is the solution of linear
systems arising from the Newton method used to minimize a generalized augmented
Lagrangian. We use a special structure of the Hessian of this Lagrangian to
reduce the size of the linear system and to convert it to a form suitable for a
standard multigrid method. This converted system is solved approximately by a
multigrid preconditioned MINRES method. The proposed PBM algorithm is compared
with the optimality criteria (OC) method and an interior point (IP) method,
both using a similar iterative solver setup. We apply all three methods to
different loading scenarios. In our experiments, the PBM method clearly
outperforms the other methods in terms of computation time required to achieve
a certain degree of accuracy.
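The PBM outer structure can be sketched on a one-dimensional toy problem (illustrative parameters throughout; plain gradient descent stands in for the Newton/multigrid inner solver used in the paper, and Polyak's modified log barrier is one admissible choice of penalty-barrier function):

```python
import numpy as np

def pbm_toy(f_grad, g, g_grad, x, lam=1.0, p=0.5, outer=30, inner=100, step=0.1):
    """Toy penalty-barrier multiplier loop with Polyak's modified log
    barrier phi(t) = -log(1 - t), for a single constraint g(x) <= 0."""
    for _ in range(outer):
        # inner loop: minimize the augmented Lagrangian f(x) + (lam/p) * phi(p*g(x))
        for _ in range(inner):
            x -= step * (f_grad(x) + lam * g_grad(x) / (1.0 - p * g(x)))
        # multiplier update: lam <- lam * phi'(p * g(x))
        lam /= 1.0 - p * g(x)
    return x, lam

# minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0; optimum x* = 1
x_opt, lam_opt = pbm_toy(lambda x: 2.0 * x,
                         lambda x: 1.0 - x,
                         lambda x: -1.0,
                         x=0.0)
```

The expensive step is the inner minimization; in the paper this is a Newton method whose linear systems are reduced and handed to a multigrid-preconditioned MINRES solver, which is what the toy gradient loop replaces here.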
Computing Optimal Designs of Multiresponse Experiments Reduces to Second-Order Cone Programming
Elfving's Theorem is a major result in the theory of optimal experimental
design, which gives a geometrical characterization of optimality. In this
paper, we extend this theorem to the case of multiresponse experiments, and we
show that when the number of experiments is finite, an optimal
design of multiresponse experiments can be computed by Second-Order Cone
Programming (SOCP). Moreover, our SOCP approach can deal with design problems
in which the variable is subject to several linear constraints.
We give two proofs of this generalization of Elfving's theorem. One is based
on Lagrangian dualization techniques and relies on the fact that the
semidefinite programming (SDP) formulation of the multiresponse optimal
design always has a solution which is a matrix of rank . Therefore, the
complexity of this problem fades.
We also investigate a \emph{model robust} generalization of optimality,
for which an Elfving-type theorem was established by Dette (1993). We show with
the same Lagrangian approach that these model robust designs can be computed
efficiently by minimizing a geometric mean under some norm constraints.
Moreover, we show that the optimality conditions of this geometric programming
problem yield an extension of Dette's theorem to the case of multiresponse
experiments.
When the number of unknown parameters is small, or when the number of linear
functions of the parameters to be estimated is small, we show by numerical
examples that our approach can be between 10 and 1000 times faster than the
classic, state-of-the-art algorithms.
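In the classical single-response case, Elfving's geometric characterization can be computed directly with a linear program (a toy illustration of the theorem being generalized, not the paper's multiresponse SOCP): the c-optimal design solves max t subject to t*c lying in the convex hull of the points +/- a_i, and the optimal weights are read off the convex representation.

```python
import numpy as np
from scipy.optimize import linprog

# Regression vectors a_i = (1, x_i) for design points x in {-1, 0, 1};
# cvec singles out the slope parameter.
a = np.array([[1.0, -1.0], [1.0, 0.0], [1.0, 1.0]])
cvec = np.array([0.0, 1.0])
n, k = a.shape

# LP variables: lam_plus (n), lam_minus (n), t;
# constraints: sum_i (lam_plus_i - lam_minus_i) a_i = t * cvec, weights sum to 1
A_eq = np.zeros((k + 1, 2 * n + 1))
A_eq[:k, :n], A_eq[:k, n:2 * n], A_eq[:k, -1] = a.T, -a.T, -cvec
A_eq[k, :2 * n] = 1.0
b_eq = np.r_[np.zeros(k), 1.0]
res = linprog(np.r_[np.zeros(2 * n), -1.0],   # maximize t
              A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (2 * n + 1))
weights = res.x[:n] + res.x[n:2 * n]          # optimal design weights w_i
```

For this instance the ray through c exits the Elfving set at t = 1, giving the familiar answer that slope estimation puts weight 1/2 on each endpoint and none in the middle; the paper's contribution is that the multiresponse analogue of this computation is a second-order cone program.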