Inverse Optimization of Convex Risk Functions
The theory of convex risk functions has now been well established as the
basis for identifying the families of risk functions that should be used in
risk-averse optimization problems. Despite its theoretical appeal, implementing
a convex risk function remains difficult, as there is little guidance on how a
convex risk function should be chosen so that it faithfully represents one's
own risk preferences. In this paper, we address this issue through the lens of
inverse optimization. Specifically, given solution data from some (forward)
risk-averse optimization problems, we develop an
inverse optimization framework that generates a risk function that renders the
solutions optimal for the forward problems. The framework incorporates the
well-known properties of convex risk functions, namely monotonicity, convexity,
translation invariance, and law invariance, as general information about
candidate risk functions, as well as feedback from individuals, which includes
an initial estimate of the risk function and pairwise comparisons among random
losses, as more specific information. Our framework is particularly novel in
that, unlike classical inverse optimization, no parametric assumption is made
about the risk function, i.e., it is
non-parametric. We show how the resulting inverse optimization problems can be
reformulated as convex programs and are polynomially solvable if the
corresponding forward problems are polynomially solvable. We illustrate the
imputed risk functions in a portfolio selection problem and demonstrate their
practical value using real-life data.
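
The paper's framework is explicitly non-parametric, but the flavor of imputing a risk function from elicited pairwise comparisons can be illustrated in a deliberately simplified parametric setting. The sketch below is our own toy construction, not the authors' model: it restricts attention to mixtures of CVaRs and searches, via a feasibility LP, for weights consistent with one comparison $\rho(X) \le \rho(Y)$.

```python
import numpy as np
from scipy.optimize import linprog

def cvar(losses, alpha):
    """Empirical CVaR at level alpha: mean of the worst alpha-fraction of losses."""
    k = max(1, int(np.ceil(alpha * len(losses))))
    return np.sort(losses)[-k:].mean()

rng = np.random.default_rng(0)
n_scen = 500
X = rng.normal(0.0, 1.0, n_scen)   # random loss X
Y = rng.normal(0.2, 2.0, n_scen)   # random loss Y (riskier)

alphas = [0.05, 0.25, 0.50, 1.0]   # candidate CVaR levels (alpha = 1 is the mean)
F_X = np.array([cvar(X, a) for a in alphas])
F_Y = np.array([cvar(Y, a) for a in alphas])

# Elicited comparison: the individual considers Y riskier, i.e. rho(X) <= rho(Y).
# Find weights w >= 0, sum(w) = 1 with F_X @ w <= F_Y @ w (pure feasibility,
# hence the zero objective).
res = linprog(
    c=np.zeros(len(alphas)),
    A_ub=(F_X - F_Y)[None, :],     # (F_X - F_Y) @ w <= 0
    b_ub=[0.0],
    A_eq=np.ones((1, len(alphas))),
    b_eq=[1.0],
    bounds=[(0, 1)] * len(alphas),
)
print("feasible weights:", res.x)
```

With more comparisons, each one simply contributes another row to the inequality system; the non-parametric framework of the paper replaces the fixed CVaR-mixture family with constraints derived directly from the axioms.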
Inverse Optimization: Closed-Form Solutions, Geometry, and Goodness of Fit
In classical inverse linear optimization, one assumes a given solution is a
candidate to be optimal. Real data is imperfect and noisy, so there is no
guarantee this assumption is satisfied. Inspired by regression, this paper
presents a unified framework for cost function estimation in linear
optimization comprising a general inverse optimization model and a
corresponding goodness-of-fit metric. Although our inverse optimization model
is nonconvex, we derive a closed-form solution and present the geometric
intuition. Our goodness-of-fit metric, $\rho$, the coefficient of
complementarity, has similar properties to $R^2$ from regression and is
quasiconvex in the input data, leading to an intuitive geometric
interpretation. While $\rho$ is computable in polynomial time, we derive a
lower bound that possesses the same properties, is tight for several important
model variations, and is even easier to compute. We demonstrate the application
of our framework for model estimation and evaluation in production planning and
cancer therapy.
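
A minimal numerical sketch of the geometric idea, under our own simplified reading with hypothetical data: given a feasible polyhedron $\{x : Ax \ge b\}$ and an observed (possibly noisy) solution $\hat{x}$, each constraint normal $a_i$ is a candidate cost vector making facet $i$ optimal, and a natural fit score is the normalized slack at $\hat{x}$.

```python
import numpy as np

# Feasible set {x : A x >= b}; hypothetical data.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([0.0, 0.0, 1.0])
x_hat = np.array([0.3, 0.8])   # observed decision, not exactly on any facet

# For each row i, the cost c = a_i makes facet i optimal for min c'x over Ax >= b.
# Score candidates by the normalized slack (a_i'x_hat - b_i) / ||a_i||, a simple
# proxy for the duality gap induced at x_hat.
slack = A @ x_hat - b
scores = slack / np.linalg.norm(A, axis=1)
i_star = int(np.argmin(scores))
c_imputed = A[i_star] / np.linalg.norm(A[i_star])
print("imputed cost vector:", c_imputed)
```

Here the imputed cost is the normal of the facet nearest to $\hat{x}$, which matches the geometric intuition the abstract alludes to; the paper's actual closed-form solution and the exact definition of $\rho$ should be taken from the paper itself.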
A new look at nonnegativity on closed sets and polynomial optimization
We first show that a continuous function $f$ is nonnegative on a closed set
$K \subseteq \mathbb{R}^n$ if and only if the (countably many) moment matrices
of some signed measure $\mathrm{d}\nu = f\,\mathrm{d}\mu$ with support equal to
$K$ are all positive semidefinite (if $K$ is compact, $\mu$ is an arbitrary
finite Borel measure with support equal to $K$). In particular, we obtain a
convergent explicit hierarchy of semidefinite (outer) approximations with
{\it no} lifting of the cone of nonnegative polynomials of degree at most $d$.
When used in polynomial optimization on certain simple closed sets $K$ (e.g.,
the whole space $\mathbb{R}^n$, the positive orthant, a box, a simplex, or the vertices of the
hypercube), it provides a nonincreasing sequence of upper bounds which
converges to the global minimum by solving a hierarchy of semidefinite programs
with only one variable. This convergent sequence of upper bounds complements
the convergent sequence of lower bounds obtained by solving a hierarchy of
semidefinite relaxations.
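
As a toy numerical illustration of the moment-matrix criterion (our own example, taking $\mu$ to be Lebesgue measure on $K = [0, 1]$): the moment matrices of $\mathrm{d}\nu = f\,\mathrm{d}\mu$ are positive semidefinite when $f \ge 0$ on $K$, and fail to be so when $f$ changes sign.

```python
import numpy as np
from scipy.integrate import quad

def moment_matrix(f, d, a=0.0, b=1.0):
    """Moment matrix M_d of d(nu) = f d(mu), mu = Lebesgue on [a, b]:
    M_d[i, j] = integral of x^(i+j) * f(x) over [a, b]."""
    M = np.empty((d + 1, d + 1))
    for i in range(d + 1):
        for j in range(d + 1):
            M[i, j] = quad(lambda x: x ** (i + j) * f(x), a, b)[0]
    return M

f_nonneg = lambda x: (x - 0.5) ** 2   # nonnegative on [0, 1]
f_signed = lambda x: x - 0.5          # changes sign on [0, 1]

for f, name in [(f_nonneg, "(x-1/2)^2"), (f_signed, "x-1/2")]:
    eigs = np.linalg.eigvalsh(moment_matrix(f, 3))
    print(f"{name}: min eigenvalue of M_3 = {eigs.min():+.4f}")
# The first function yields PSD moment matrices at every order; the second
# already produces an indefinite matrix at small order.
```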
Linear convergence of accelerated conditional gradient algorithms in spaces of measures
A class of generalized conditional gradient algorithms for the solution of
optimization problems in spaces of Radon measures is presented. The method
iteratively inserts additional Dirac-delta functions and optimizes the
corresponding coefficients. Under general assumptions, a sub-linear
$\mathcal{O}(1/k)$ rate in the objective functional is obtained, which is sharp
in most cases. To improve efficiency, one can fully resolve the
finite-dimensional subproblems occurring in each iteration of the method. We
provide an analysis for the resulting procedure: under a structural assumption
on the optimal solution, a linear convergence rate is
obtained locally.
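
A toy sketch of the fully corrective variant on a fixed grid (a discretized stand-in for the continuous measure-space setting; the forward operator, data, and grid are all hypothetical, and the total-variation penalty of the general problem class is omitted for brevity): each iteration inserts the Dirac location where the gradient of the residual term is most negative, then re-optimizes all coefficients over the current support.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
grid = np.linspace(0, 1, 200)                 # candidate Dirac locations
# Gaussian convolution as a hypothetical forward operator (30 measurements).
A = np.exp(-((np.linspace(0, 1, 30)[:, None] - grid[None, :]) ** 2) / 0.01)
u_true = np.zeros(len(grid)); u_true[[40, 120]] = [1.0, 0.7]
y = A @ u_true + 0.01 * rng.normal(size=30)   # noisy observations

support, u = [], np.zeros(len(grid))
for it in range(10):
    residual = A @ u - y
    # Insertion step: pick the Dirac location with the most negative gradient.
    k = int(np.argmin(A.T @ residual))
    if k not in support:
        support.append(k)
    # Fully corrective step: re-optimize all coefficients on the support
    # (nonnegative least squares as a simple stand-in for the subproblem).
    coef, _ = nnls(A[:, support], y)
    u = np.zeros(len(grid)); u[support] = coef
print("recovered support:", sorted(support))
print("nonzero coefficients:", np.round(u[u > 0], 3))
```

Fully resolving the finite-dimensional subproblem on the active support is what distinguishes this accelerated variant from the plain insertion scheme, and it is the ingredient behind the local linear rate the abstract describes.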