Conic Optimization Theory: Convexification Techniques and Numerical Algorithms
Optimization is at the core of control theory and appears in several areas of
this field, such as optimal control, distributed control, system
identification, robust control, state estimation, model predictive control and
dynamic programming. Recent advances in modern optimization have also been
reshaping machine learning. Motivated
by the crucial role of optimization theory in the design, analysis, control and
operation of real-world systems, this tutorial paper offers a detailed overview
of some major advances in this area, namely conic optimization and its emerging
applications. First, we discuss the importance of conic optimization in
different areas. Then, we explain seminal results on the design of hierarchies
of convex relaxations for a wide range of nonconvex problems. Finally, we study
different numerical algorithms for large-scale conic optimization problems.
Comment: 18 pages
Bounding stationary averages of polynomial diffusions via semidefinite programming
We introduce an algorithm based on semidefinite programming that yields
increasing (resp. decreasing) sequences of lower (resp. upper) bounds on
polynomial stationary averages of diffusions with polynomial drift vector and
diffusion coefficients. The bounds are obtained by optimising an objective,
determined by the stationary average of interest, over the set of real vectors
defined by certain linear equalities and semidefinite inequalities which are
satisfied by the moments of any stationary measure of the diffusion. We
exemplify the use of the approach through several applications: a Bayesian
inference problem; the computation of Lyapunov exponents of linear ordinary
differential equations perturbed by multiplicative white noise; and a
reliability problem from structural mechanics. Additionally, we prove that the
bounds converge to the infimum and supremum of the set of stationary averages
for certain SDEs associated with the computation of the Lyapunov exponents, and
we provide numerical evidence of convergence in more general settings.
A paradox in bosonic energy computations via semidefinite programming relaxations
We show that the recent hierarchy of semidefinite programming relaxations
based on non-commutative polynomial optimization and reduced density matrix
variational methods exhibits an interesting paradox when applied to the bosonic
case: even though it can be rigorously proven that the hierarchy collapses
after the first step, numerical implementations of higher order steps generate
a sequence of improving lower bounds that converges to the optimal solution. We
analyze this effect and compare it with similar behavior observed in
implementations of semidefinite programming relaxations for commutative
polynomial minimization. We conclude that the method converges due to the
rounding errors occurring during the execution of the numerical program, and
show that convergence is lost as soon as computer precision is increased. We
support this conclusion by proving that for any element p of a Weyl algebra
which is non-negative in the Schrödinger representation, there exists another
element p' arbitrarily close to p that admits a sum-of-squares decomposition.
Comment: 22 pages, 4 figures