55 research outputs found
On the Power and Limitations of Affine Policies in Two-Stage Adaptive Optimization
We consider a two-stage adaptive linear optimization problem under right-hand-side uncertainty with a min–max objective and give a sharp characterization of the power and limitations of affine policies (where the second-stage solution is an affine function of the right-hand-side uncertainty). In particular, we show that the worst-case cost of an optimal affine policy can be $\Omega(m^{1/2-\delta})$ times the worst-case cost of an optimal fully-adaptable solution for any $\delta > 0$, where $m$ is the number of linear constraints. We also show that the worst-case cost of the best affine policy is $O(\sqrt{m})$ times the optimal cost when the first-stage constraint matrix has non-negative coefficients. Moreover, if there are only $k \le m$ uncertain parameters, we generalize the performance bound for affine policies to $O(\sqrt{k})$, which is particularly useful if only a few parameters are uncertain. We also provide an $O(\sqrt{k})$-approximation algorithm for the general case without any restriction on the constraint matrix, but the solution is not an affine function of the uncertain parameters. We also give a tight characterization of the conditions under which an affine policy is optimal for the above model. In particular, we show that if the uncertainty set $U \subseteq \mathbb{R}_+^m$ is a simplex, then an affine policy is optimal. However, an affine policy is suboptimal even if $U$ is a convex combination of only $(m + 3)$ extreme points (only two more extreme points than a simplex), and the worst-case cost of an optimal affine policy can be a factor $(2 - \delta)$ worse than the worst-case cost of an optimal fully-adaptable solution for any $\delta > 0$.
National Science Foundation (U.S.) (NSF Grant DMI-0556106); National Science Foundation (U.S.) (EFRI-0735905)
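For concreteness, the two-stage model and the affine restriction described above can be sketched as follows (notation ours, not the paper's; $A$, $B$, $c$, $d$, and the uncertainty set $U$ are generic placeholders):

\[
z_{\mathrm{adapt}} \;=\; \min_{x \ge 0} \Big( c^\top x \;+\; \max_{h \in U} \; \min_{\{\, y \ge 0 \,:\, Ax + By \ge h \,\}} d^\top y \Big),
\qquad
y_{\mathrm{aff}}(h) \;=\; Ph + q,
\]

where a fully-adaptable solution may choose $y(h)$ as an arbitrary function of the realized right-hand side $h$, while an affine policy restricts it to the form $Ph + q$; the bounds above compare the worst-case costs of these two classes.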
Towards Machine Wald
The past century has seen a steady increase in the need for estimating and
predicting complex systems and making (possibly critical) decisions with
limited information. Although computers have made possible the numerical
evaluation of sophisticated statistical models, these models are still designed
\emph{by humans} because there is currently no known recipe or algorithm for
dividing the design of a statistical model into a sequence of arithmetic
operations. Indeed enabling computers to \emph{think} as \emph{humans} have the
ability to do when faced with uncertainty is challenging in several major ways:
(1) Finding optimal statistical models remains to be formulated as a well posed
problem when information on the system of interest is incomplete and comes in
the form of a complex combination of sample data, partial knowledge of
constitutive relations and a limited description of the distribution of input
random variables. (2) The space of admissible scenarios, along with the space of
relevant information, assumptions, and/or beliefs, tends to be infinite-dimensional,
whereas calculus on a computer is necessarily discrete and finite.
With this purpose in mind, this paper explores the foundations of a rigorous framework
for the scientific computation of optimal statistical estimators/models and
reviews their connections with Decision Theory, Machine Learning, Bayesian
Inference, Stochastic Optimization, Robust Optimization, Optimal Uncertainty
Quantification and Information Based Complexity.
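The "Machine Wald" of the title refers to Wald's decision-theoretic view of statistics; as a rough sketch in our own notation (not the paper's), the optimal statistical estimators in question are minimax solutions of the form

\[
\theta^\dagger \;\in\; \operatorname*{arg\,min}_{\theta \in \Theta} \; \sup_{\mu \in \mathcal{A}} \; \mathbb{E}_{X \sim \mu}\!\left[ \mathcal{L}\big(\mu, \theta(X)\big) \right],
\]

where $\mathcal{A}$ is the (typically infinite-dimensional) space of admissible scenarios consistent with the available information, $\Theta$ a class of candidate estimators, and $\mathcal{L}$ a loss; challenges (1) and (2) above amount to making this problem well posed and reducing it to discrete, finite computation.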
Incremental proximal methods for large scale convex optimization
Laboratory for Information and Decision Systems Report LIDS-P-2847
We consider the minimization of a sum $\sum_{i=1}^m f_i(x)$ consisting of a large
number of convex component functions $f_i$. For this problem, incremental methods
consisting of gradient or subgradient iterations applied to single components have
proved very effective. We propose new incremental methods, consisting of proximal
iterations applied to single components, as well as combinations of gradient, subgradient,
and proximal iterations. We provide a convergence and rate of convergence
analysis of a variety of such methods, including some that involve randomization in
the selection of components. We also discuss applications in a few contexts, including
signal processing and inference/machine learning.
United States. Air Force Office of Scientific Research (grant FA9550-10-1-0412)
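As a rough illustration of this class of methods (a minimal sketch under our own assumptions, not the algorithm or notation of the report), the iteration below cycles through the components $f_i$ and applies one proximal step per component, with optional randomization of the component order; the names (incremental_proximal, prox_steps, a) are hypothetical:

    import random

    def incremental_proximal(prox_steps, x0, num_cycles, randomize=False):
        """Minimize sum_i f_i(x) via proximal iterations applied to single components.

        prox_steps[i](x, alpha) must return argmin_u { f_i(u) + (u - x)**2 / (2 * alpha) },
        i.e. the proximal point of f_i at x with stepsize alpha.
        """
        x = x0
        order = list(range(len(prox_steps)))
        for k in range(1, num_cycles + 1):
            alpha = 1.0 / k          # diminishing stepsize, needed for exact convergence
            if randomize:            # optional randomization of the component order
                random.shuffle(order)
            for i in order:          # one proximal step per component
                x = prox_steps[i](x, alpha)
        return x

    # Hypothetical one-dimensional example: f_i(x) = (x - a_i)**2 / 2, whose proximal
    # step has the closed form (x + alpha * a_i) / (1 + alpha); the sum of the f_i is
    # minimized at the mean of the a_i.
    a = [1.0, 2.0, 6.0]
    prox_steps = [lambda x, alpha, ai=ai: (x + alpha * ai) / (1 + alpha) for ai in a]
    print(incremental_proximal(prox_steps, x0=0.0, num_cycles=500))  # ~= 3.0

The diminishing stepsize is what lets the iterates converge to the exact minimizer; with a constant stepsize, incremental methods of this kind typically converge only to a neighborhood of it.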
Ordered Incidence geometry and the geometric foundations of convexity theory
An Ordered Incidence Geometry, that is, a geometry with certain axioms of incidence and order, is proposed as a minimal setting for the fundamental convexity theorems, which usually appear in the context of a linear vector space but require only incidence, order (and, for separation, completeness), and none of the linear structure of a vector space.
Peer Reviewed
http://deepblue.lib.umich.edu/bitstream/2027.42/42995/1/22_2005_Article_BF01227810.pd
- …