A Practical Guide to Robust Optimization
Robust optimization is a young and active research field that has been mainly
developed in the last 15 years. Robust optimization is very useful for
practice, since it is tailored to the information at hand, and it leads to
computationally tractable formulations. It is therefore remarkable that
real-life applications of robust optimization are still lagging behind; there
is much more potential for real-life applications than has been exploited
hitherto. The aim of this paper is to help practitioners to understand robust
optimization and to successfully apply it in practice. We provide a brief
introduction to robust optimization, and also describe important do's and
don'ts for using it in practice. We use many small examples to illustrate our
discussions.
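To make the tractability claim concrete, here is a minimal sketch (with made-up data, not an example from the paper) of how a single uncertain linear constraint under box uncertainty collapses back to an ordinary linear constraint, so the robust problem remains an LP:

```python
# A hypothetical instance: maximize 3*x1 + 2*x2 subject to a^T x <= b, x >= 0,
# where the constraint coefficients a are only known to lie in a box
# |a - a_bar| <= delta. For x >= 0, "a^T x <= b for all such a" is equivalent
# to (a_bar + delta)^T x <= b, so the robust counterpart is again an LP.
from scipy.optimize import linprog

c = [-3.0, -2.0]          # linprog minimizes, so negate the profit vector
a_bar = [1.0, 1.0]        # nominal constraint coefficients
delta = [0.2, 0.1]        # box uncertainty half-widths
b = 4.0

nominal = linprog(c, A_ub=[a_bar], b_ub=[b], bounds=[(0, None)] * 2)

a_worst = [ab + d for ab, d in zip(a_bar, delta)]  # worst case when x >= 0
robust = linprog(c, A_ub=[a_worst], b_ub=[b], bounds=[(0, None)] * 2)

print(nominal.fun, robust.fun)  # the robust optimum is (weakly) worse
```

The robust solution pays a small price in nominal objective value in exchange for feasibility under every realization of the uncertain coefficients.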
Data-driven Inverse Optimization with Imperfect Information
In data-driven inverse optimization an observer aims to learn the preferences
of an agent who solves a parametric optimization problem depending on an
exogenous signal. Thus, the observer seeks the agent's objective function that
best explains a historical sequence of signals and corresponding optimal
actions. We focus here on situations where the observer has imperfect
information, that is, where the agent's true objective function is not
contained in the search space of candidate objectives, where the agent suffers
from bounded rationality or implementation errors, or where the observed
signal-response pairs are corrupted by measurement noise. We formalize this
inverse optimization problem as a distributionally robust program minimizing
the worst-case risk that the predicted decision (i.e., the decision
implied by a particular candidate objective) differs from the agent's
actual response to a random signal. We show that our framework offers rigorous
out-of-sample guarantees for different loss functions used to measure
prediction errors and that the emerging inverse optimization problems can be
exactly reformulated as (or safely approximated by) tractable convex programs
when a new suboptimality loss function is used. We show through extensive
numerical tests that the proposed distributionally robust approach to inverse
optimization often attains better out-of-sample performance than
state-of-the-art approaches.
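As a toy illustration of the setup (a hypothetical instance, not the paper's model or its convex reformulation), the sketch below recovers an agent's objective direction from noisy signal-response pairs by grid search over candidate objectives, scoring each with the suboptimality loss mentioned in the abstract:

```python
# Hypothetical agent: for each signal s it minimizes theta_true . x over the
# vertices s*v of a scaled regular 36-gon. The observer sees noisy responses
# and scores each candidate theta by the average suboptimality loss
# (theta . x_obs minus the true minimum of theta . x over the feasible set).
import numpy as np

rng = np.random.default_rng(0)
phi_true = 0.4
theta_true = np.array([np.cos(phi_true), np.sin(phi_true)])

angles = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)
V = np.column_stack([np.cos(angles), np.sin(angles)])  # 36-gon vertices

signals = rng.uniform(1.0, 3.0, size=50)
v_star = V[np.argmin(V @ theta_true)]       # agent's optimal vertex direction
responses = signals[:, None] * v_star
responses = responses + 0.01 * rng.standard_normal(responses.shape)  # noise

def avg_suboptimality(theta):
    # per-sample loss: theta . x_obs minus min of theta . x over the set
    return np.mean(responses @ theta - signals * (V @ theta).min())

grid = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
losses = [avg_suboptimality(np.array([np.cos(p), np.sin(p)])) for p in grid]
phi_hat = grid[int(np.argmin(losses))]
print(phi_hat)  # near phi_true, up to the polygon's angular resolution
```

The grid search stands in for the tractable convex programs of the paper; it also shows the imperfect-information point, since the noisy responses are never exactly optimal for any candidate.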
Data-driven Distributionally Robust Optimization Using the Wasserstein Metric: Performance Guarantees and Tractable Reformulations
We consider stochastic programs where the distribution of the uncertain
parameters is only observable through a finite training dataset. Using the
Wasserstein metric, we construct a ball in the space of (multivariate and
non-discrete) probability distributions centered at the uniform distribution on
the training samples, and we seek decisions that perform best in view of the
worst-case distribution within this Wasserstein ball. The state-of-the-art
methods for solving the resulting distributionally robust optimization problems
rely on global optimization techniques, which quickly become computationally
excruciating. In this paper we demonstrate that, under mild assumptions, the
distributionally robust optimization problems over Wasserstein balls can in
fact be reformulated as finite convex programs---in many interesting cases even
as tractable linear programs. Leveraging recent measure concentration results,
we also show that their solutions enjoy powerful finite-sample performance
guarantees. Our theoretical results are exemplified in mean-risk portfolio
optimization as well as uncertainty quantification.
Comment: 42 pages, 10 figures
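The flavor of these reformulations can be checked numerically in one dimension (a minimal sketch with invented data, illustrating a special case rather than the paper's general theorem): for a 1-Lipschitz loss and the 1-Wasserstein metric with unbounded support, the worst-case expectation over the ball is the empirical average plus the radius.

```python
# Loss l(xi) = |xi - theta| is 1-Lipschitz, so the worst case over a
# 1-Wasserstein ball of radius eps around the empirical distribution is
# the sample average plus eps. Moving every sample a distance eps away
# from theta (total transport cost eps) attains exactly this value.
import numpy as np

rng = np.random.default_rng(1)
xi = rng.normal(0.0, 1.0, size=200)    # training samples
theta, eps = 0.3, 0.25

empirical = np.mean(np.abs(xi - theta))
worst_case = empirical + eps           # closed-form worst-case expectation

shifted = xi + eps * np.sign(xi - theta)   # a worst-case distribution
attained = np.mean(np.abs(shifted - theta))
print(worst_case, attained)            # the two values coincide
```

In effect the Wasserstein radius acts as a regularization term added to the sample-average objective, which is why the reformulated problems stay convex and often linear.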
On Robust Tie-line Scheduling in Multi-Area Power Systems
The tie-line scheduling problem in a multi-area power system seeks to
optimize tie-line power flows across areas that are independently operated by
different system operators (SOs). In this paper, we leverage the theory of
multi-parametric linear programming to propose algorithms for optimal tie-line
scheduling within a deterministic and a robust optimization framework. Through
a coordinator, the proposed algorithms are proved to converge to the optimal
schedule within a finite number of iterations. A key feature of the proposed
algorithms, besides their finite step convergence, is the privacy of the
information exchanges; the SO in an area does not need to reveal its dispatch
cost structure, network constraints, or the nature of the uncertainty set to
the coordinator. The performance of the algorithms is evaluated using several
power system examples.
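The coordination idea can be sketched on a hypothetical two-area toy (invented data, and a plain one-dimensional search rather than the paper's multi-parametric algorithm): for a fixed tie-line flow t, each area solves its own dispatch LP and reports only its optimal cost to the coordinator, keeping its cost structure and constraints private. Each area's value function is convex piecewise-linear in t, which is exactly the object that multi-parametric linear programming characterizes.

```python
# Two single-bus areas linked by one tie-line. Each area dispatches two
# generators to meet local demand net of the tie-line flow; the coordinator
# searches over the flow t (from area A to area B) using only the reported
# optimal costs. All data below are made up for illustration.
from scipy.optimize import linprog
import numpy as np

def area_cost(costs, caps, demand, tie_in):
    """Optimal dispatch cost of one area that imports tie_in MW."""
    res = linprog(costs,
                  A_eq=[[1.0] * len(costs)], b_eq=[demand - tie_in],
                  bounds=[(0, cap) for cap in caps])
    return res.fun if res.success else np.inf

def total(t):  # t = tie-line flow from the cheap area A to area B
    return (area_cost([10.0, 30.0], [80, 80], 100.0, -t)   # area A exports t
            + area_cost([50.0, 60.0], [80, 80], 100.0, t)) # area B imports t

flows = np.linspace(0.0, 60.0, 61)
t_star = flows[int(np.argmin([total(t) for t in flows]))]
print(t_star, total(t_star))  # all available cheap capacity flows A -> B
```

Here the optimum pushes the maximum feasible flow toward the expensive area, and neither area ever reveals its generator costs or limits to the other.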
Lagrangean decomposition for large-scale two-stage stochastic mixed 0-1 problems
In this paper we study solution methods for the dual problem arising from the Lagrangean Decomposition of two-stage stochastic mixed 0-1 models. We represent the two-stage stochastic mixed 0-1 problem by a splitting-variable representation of the deterministic equivalent model, where 0-1 and continuous variables appear at any stage. Lagrangean Decomposition is proposed for satisfying both the integrality constraints on the 0-1 variables and the non-anticipativity constraints. We compare the performance of four iterative algorithms based on dual Lagrangean Decomposition schemes: the Subgradient method, the Volume algorithm, the Progressive Hedging algorithm and the Dynamic Constrained Cutting Plane scheme. We test the conditions and properties of convergence for medium- and large-scale stochastic problems. Computational results are reported.
Keywords: Progressive Hedging algorithm, Volume algorithm, Lagrangean decomposition, Subgradient method
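The splitting-variable relaxation can be sketched on a tiny invented instance (not from the paper): a two-scenario 0-1 program where each scenario gets its own copy of the decision vector, the non-anticipativity constraint x1 = x2 is relaxed with multipliers, and the Lagrangean dual is solved by the subgradient method. The scenario subproblems are solved by enumeration here only because the example is tiny.

```python
# Two scenarios of a 0-1 knapsack-type problem in splitting-variable form:
#   max p1*c1.x1 + p2*c2.x2  s.t. x1, x2 feasible, x1 = x2 (relaxed).
# Every dual value L(lam) is a valid upper bound on the optimum; a zero
# subgradient (x1 = x2) certifies a non-anticipative optimal solution.
from itertools import product
import numpy as np

weights, budget = np.array([2, 3, 4]), 5
c1, c2 = np.array([6.0, 5.0, 4.0]), np.array([3.0, 7.0, 4.0])  # profits
p1 = p2 = 0.5                                                  # probabilities

feasible = [np.array(x) for x in product((0, 1), repeat=3)
            if weights @ x <= budget]

def best(score):  # scenario subproblem, solved by enumeration
    return max(feasible, key=lambda x: score @ x)

lam = np.array([3.0, 0.0, 0.0])        # arbitrary starting multipliers
best_bound = np.inf
for k in range(100):
    x1 = best(p1 * c1 + lam)           # scenario 1 subproblem
    x2 = best(p2 * c2 - lam)           # scenario 2 subproblem
    bound = (p1 * c1 + lam) @ x1 + (p2 * c2 - lam) @ x2  # dual value
    best_bound = min(best_bound, bound)
    g = x1 - x2                        # subgradient of the dual function
    if not g.any():                    # x1 = x2: non-anticipative, stop
        break
    lam = lam - (2.0 / (k + 1)) * g    # subgradient step
print(best_bound, x1)
```

The Volume, Progressive Hedging and cutting-plane schemes compared in the paper all attack this same dual; they differ in how the multiplier update uses the subproblem solutions.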