Summary Conclusions on Computational Experience and the Explanatory Value of Condition Measures for Linear Optimization
The modern theory of condition measures for convex optimization problems was initially developed for convex problems in conic format, and several aspects of the theory have since been extended to handle non-conic formats as well. In this theory, the (Renegar) condition measure C(d) for a problem instance with data d = (A, b, c) has been shown to be connected to bounds on a wide variety of behavioral and computational characteristics of the problem instance, from sizes of optimal solutions to the complexity of algorithms. Herein we test the practical relevance of the condition measure theory, as applied to linear optimization problems that one might typically encounter in practice. Using the NETLIB suite of linear optimization problems as a test bed, we found that 71% of the NETLIB suite problem instances have infinite condition measure. In order to examine condition measures of the problems that are the actual input to a modern IPM solver, we also computed condition measures for the NETLIB suite problems after preprocessing by CPLEX 7.1. Here we found that 19% of the post-processed problem instances in the NETLIB suite have infinite condition measure, and that log C(d) of the post-processed problems is fairly nicely distributed. Furthermore, there is a positive linear relationship between IPM iterations and log C(d) of the post-processed problem instances (significant at the 95% confidence level), and 42% of the variation in IPM iterations among the NETLIB suite problem instances is accounted for by log C(d) of the post-processed problem instances.
Singapore-MIT Alliance (SMA)
Computational Complexity versus Statistical Performance on Sparse Recovery Problems
We show that several classical quantities controlling compressed sensing performance directly match classical parameters controlling algorithmic complexity. We first describe linearly convergent restart schemes on first-order methods solving a broad range of compressed sensing problems, where sharpness at the optimum controls convergence speed. We show that for sparse recovery problems, this sharpness can be written as a condition number, given by the ratio between true signal sparsity and the largest signal size that can be recovered by the observation matrix. In a similar vein, Renegar's condition number is a data-driven complexity measure for convex programs, generalizing classical condition numbers for linear systems. We show that for a broad class of compressed sensing problems, the worst-case value of this algorithmic complexity measure taken over all signals matches the restricted singular value of the observation matrix which controls robust recovery performance. Overall, this means in both cases that, in compressed sensing problems, a single parameter directly controls both computational complexity and recovery performance. Numerical experiments illustrate these points using several classical algorithms.
Comment: Final version, to appear in Information and Inference
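The claim that Renegar's condition number generalizes classical condition numbers for linear systems can be illustrated with the classical case itself: for a least-squares problem, the condition number kappa(A) directly controls how many iterations a first-order method needs. A minimal sketch, assuming nothing from the paper beyond this textbook relationship (the matrices, tolerance, and step rule below are illustrative choices, not the authors' experiments):

```python
import numpy as np

def gd_iterations(A, b, tol=1e-6, max_iter=100000):
    """Iterations of gradient descent on f(x) = 0.5*||Ax - b||^2,
    with constant step 1/L, until ||x - x*|| < tol."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of grad f
    x = np.zeros(A.shape[1])
    x_star = np.linalg.lstsq(A, b, rcond=None)[0]
    for k in range(1, max_iter + 1):
        x = x - (A.T @ (A @ x - b)) / L      # gradient step
        if np.linalg.norm(x - x_star) < tol:
            return k
    return max_iter

b = np.array([1.0, 1.0])
A_good = np.diag([1.0, 2.0])    # kappa(A) = 2
A_bad = np.diag([1.0, 10.0])    # kappa(A) = 10

iters_good = gd_iterations(A_good, b)
iters_bad = gd_iterations(A_bad, b)
# Iteration counts on this quadratic scale roughly with kappa(A)^2,
# so the ill-conditioned system needs many more iterations.
```

The same qualitative picture (one data-driven parameter bounding both recovery quality and algorithmic effort) is what the abstract establishes for the compressed sensing setting.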
Gordon's inequality and condition numbers in conic optimization
The probabilistic analysis of condition numbers has traditionally been approached from different angles; one is based on Smale's program in complexity theory and features integral geometry, while the other is motivated by geometric functional analysis and makes use of the theory of Gaussian processes. In this note we explore connections between the two approaches in the context of the biconic homogeneous feasibility problem and the condition numbers motivated by conic optimization theory. Key tools in the analysis are Slepian's and Gordon's comparison inequalities for Gaussian processes, interpreted as monotonicity properties of moment functionals, and their interplay with ideas from conic integral geometry.
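Slepian's comparison inequality (loosely: raising the pairwise correlations of a centered Gaussian vector with unit variances can only decrease its expected maximum) is easy to check numerically. A small Monte Carlo sketch, where the dimension, correlation level, and sample count are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, rho = 50, 2000, 0.9

# Independent case: n i.i.d. standard normals per trial.
z_indep = rng.standard_normal((trials, n))
emax_indep = z_indep.max(axis=1).mean()

# Equicorrelated case (pairwise correlation rho): a common factor plus
# independent noise, X_i = sqrt(rho)*Z0 + sqrt(1-rho)*Z_i, unit variance.
z0 = rng.standard_normal((trials, 1))
z = rng.standard_normal((trials, n))
x_corr = np.sqrt(rho) * z0 + np.sqrt(1 - rho) * z
emax_corr = x_corr.max(axis=1).mean()

# Slepian's inequality predicts emax_corr <= emax_indep.
```

Gordon's inequality extends this kind of comparison from maxima to min-max functionals, which is what makes it applicable to the biconic feasibility problem discussed above.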
Condition number complexity of an elementary algorithm for computing a reliable solution of a conic linear system
"December 1998." Includes bibliographical references (p. 36-38). By M. Epelman and R. Freund.
Computational experience and the explanatory value of condition numbers for linear optimization
Abstract in HTML and working paper for download in PDF available via World Wide Web at the Social Science Research Network. Title from cover. "January 2002." Includes bibliographical references (leaves 32-34).
The goal of this paper is to develop some computational experience and test the practical relevance of the theory of condition numbers C(d) for linear optimization, as applied to problem instances that one might encounter in practice. We used the NETLIB suite of linear optimization problems as a test bed for condition number computation and analysis. Our computational results indicate that 72% of the NETLIB suite problem instances are ill-conditioned. However, after pre-processing heuristics are applied, only 19% of the post-processed problem instances are ill-conditioned, and log C(d) of the finitely-conditioned post-processed problems is fairly nicely distributed. We also show that the number of IPM iterations needed to solve the problems in the NETLIB suite varies roughly linearly (and monotonically) with log C(d) of the post-processed problem instances. Empirical evidence yields a positive linear relationship between IPM iterations and log C(d) for the post-processed problem instances, significant at the 95% confidence level. Furthermore, 42% of the variation in IPM iterations among the NETLIB suite problem instances is accounted for by log C(d) of the problem instances after pre-processing. Keywords: Convex Optimization, Complexity, Interior-Point Method, Barrier Method.
Fernando Ordonez [and] Robert M. Freund
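The two statistics quoted in the abstract, a positive slope significant at the 95% level and 42% of iteration variation explained, correspond to the slope and R-squared of an ordinary least-squares fit of IPM iterations against log C(d). A minimal sketch of that computation on made-up stand-in numbers (these are not the actual NETLIB measurements):

```python
import numpy as np

# Hypothetical stand-in data: log condition measures and IPM iteration
# counts for a handful of post-processed instances.
log_cond = np.array([2.0, 3.5, 4.0, 5.5, 6.0, 7.5])
ipm_iters = np.array([14.0, 18.0, 17.0, 24.0, 23.0, 29.0])

# Ordinary least squares: iterations ~ slope * log C(d) + intercept.
slope, intercept = np.polyfit(log_cond, ipm_iters, 1)

# R^2: fraction of iteration-count variation explained by log C(d).
pred = slope * log_cond + intercept
ss_res = np.sum((ipm_iters - pred) ** 2)
ss_tot = np.sum((ipm_iters - ipm_iters.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

On the real NETLIB data the paper reports r2 of about 0.42; the synthetic numbers above are chosen only to make the mechanics of the fit concrete.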
Robust Linear Optimization with Recourse: Solution Methods and Other Properties.
The unifying theme of this dissertation is robust optimization: the study of solving certain types of convex robust optimization problems, and the study of bounds on the distance to ill-posedness for certain types of robust optimization problems. Robust optimization has recently emerged as a new modeling paradigm designed to address data uncertainty in mathematical programming problems by finding an optimal solution for the worst-case instances of unknown, but bounded, parameters. Parameters in practical problems are rarely known exactly, owing to measurement errors, computational round-off errors, and even forecasting errors; this creates the need for a robust approach. The advantages of robust optimization are two-fold: it guarantees feasible solutions against the considered data instances, and it does not require exact knowledge of the underlying probability distribution, both of which are limitations of chance-constrained and stochastic programming. Adjustable robust optimization, an extension of robust optimization, aims to solve mathematical programming problems where the data is uncertain and sets of decisions can be made at different points in time, thus producing solutions that are less conservative than those produced by robust optimization.
This dissertation has two main contributions: presenting a cutting-plane method
for solving convex adjustable robust optimization problems and providing preliminary
results for determining the relationship between the conditioning of a robust
linear program under structured transformations and the conditioning of the equivalent
second-order cone program under structured perturbations. The proposed algorithm
is based on Kelley's method and is discussed in two contexts: a general convex
optimization problem and a robust linear optimization problem with recourse under
right-hand side uncertainty. The proposed algorithm is then tested on two different
robust linear optimization problems with recourse: a newsvendor problem with
simple recourse and a production planning problem with general recourse, both under
right-hand side uncertainty. Computational results and analyses are provided.
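Kelley's method, on which the proposed algorithm is based, minimizes a convex function by accumulating first-order cuts and repeatedly minimizing the resulting piecewise-linear under-approximation via a linear program. A one-dimensional sketch, assuming a fixed iteration budget in place of a proper convergence test, with an illustrative objective and bounds (this is not the dissertation's recourse formulation):

```python
import numpy as np
from scipy.optimize import linprog

def kelley(f, grad, lo, hi, n_iter=30):
    """Kelley's cutting-plane method for min f(x), x in [lo, hi] (1-D sketch).

    Each iteration adds the cut t >= f(x_k) + grad(x_k) * (x - x_k) and
    solves an LP over (x, t): minimize t subject to all cuts so far.
    """
    x = lo                       # arbitrary starting point
    cuts = []                    # (g_k, f(x_k) - g_k * x_k) per iteration
    for _ in range(n_iter):
        g = grad(x)
        cuts.append((g, f(x) - g * x))
        # LP variables [x, t]; each cut becomes g_k*x - t <= -(f(x_k) - g_k*x_k).
        A_ub = [[g_k, -1.0] for g_k, _ in cuts]
        b_ub = [-c_k for _, c_k in cuts]
        res = linprog([0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                      bounds=[(lo, hi), (None, None)])
        x = res.x[0]             # minimizer of the current cutting-plane model
    return x

# Minimize f(x) = (x - 1)^2 over [-3, 3]; the minimizer is x = 1.
x_star = kelley(lambda x: (x - 1) ** 2, lambda x: 2 * (x - 1), -3.0, 3.0)
```

In the adjustable robust setting the same idea applies, except that each cut is generated by solving a worst-case (recourse) subproblem rather than by a simple gradient evaluation.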
Lastly, we provide bounds on the distance to infeasibility for a second-order cone program that is equivalent to a robust counterpart under ellipsoidal uncertainty, in terms of quantities involving the data defining the ellipsoid in the robust counterpart.
Ph.D. Industrial & Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/64714/1/tlterry_1.pd