A study of distributionally robust mixed-integer programming with Wasserstein metric: on the value of incomplete data
This study addresses a class of mixed-integer linear programming (MILP)
problems that involve uncertainty in the objective function parameters. The
parameters are assumed to form a random vector, whose probability distribution
can only be observed through a finite training data set. Unlike most of the
related studies in the literature, we also consider uncertainty in the
underlying data set. The data uncertainty is described by a set of linear
constraints for each random sample, and the uncertainty in the distribution
(for a fixed realization of data) is defined using a type-1 Wasserstein ball
centered at the empirical distribution of the data. The overall problem is
formulated as a three-level distributionally robust optimization (DRO) problem.
First, we prove that the three-level problem admits a single-level MILP
reformulation when the class of loss functions is restricted to biaffine
functions. Second, we show that for several particular forms of data
uncertainty, the problem can be solved reasonably fast by leveraging
the nominal MILP problem. Finally, we conduct a computational study in which
the out-of-sample performance of our model and the computational complexity of
the proposed MILP reformulation are explored numerically for several
application domains.
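The biaffine-loss case admits a well-known closed form for the inner worst-case expectation: for loss l(x, xi) = xi^T x, a type-1 Wasserstein ball of radius eps (l2 ground metric, unbounded support) yields mu_hat^T x + eps * ||x||_2, so a tiny instance of the resulting MILP can be solved by enumeration. The sketch below illustrates this on a hypothetical item-selection problem; the function name and setup are illustrative assumptions, not the paper's model (in particular, it ignores the paper's additional layer of data uncertainty).

```python
import itertools
import numpy as np

def robust_selection(samples, k, eps):
    """Pick k of n items minimizing the worst-case expected cost xi^T x
    over a type-1 Wasserstein ball (l2 ground cost, radius eps) centered
    at the empirical distribution of the cost vector xi.

    For the biaffine loss with unbounded support, the worst case has the
    closed form  mu_hat^T x + eps * ||x||_2,  so this toy MILP is solved
    here by brute-force enumeration of the binary vectors x.
    (Illustrative sketch; problem setup is hypothetical.)"""
    mu_hat = np.mean(samples, axis=0)            # empirical mean cost
    n = len(mu_hat)
    best_x, best_val = None, np.inf
    for idx in itertools.combinations(range(n), k):
        x = np.zeros(n)
        x[list(idx)] = 1.0
        val = mu_hat @ x + eps * np.linalg.norm(x)   # robust objective
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val
```

For realistic instance sizes the enumeration would of course be replaced by a MILP solver; the point here is only that the distributionally robust objective reduces to a deterministic regularized one.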
Data-driven Distributionally Robust Optimization Using the Wasserstein Metric: Performance Guarantees and Tractable Reformulations
We consider stochastic programs where the distribution of the uncertain
parameters is only observable through a finite training dataset. Using the
Wasserstein metric, we construct a ball in the space of (multivariate and
non-discrete) probability distributions centered at the uniform distribution on
the training samples, and we seek decisions that perform best in view of the
worst-case distribution within this Wasserstein ball. The state-of-the-art
methods for solving the resulting distributionally robust optimization problems
rely on global optimization techniques, which quickly become computationally
excruciating. In this paper we demonstrate that, under mild assumptions, the
distributionally robust optimization problems over Wasserstein balls can in
fact be reformulated as finite convex programs---in many interesting cases even
as tractable linear programs. Leveraging recent measure concentration results,
we also show that their solutions enjoy powerful finite-sample performance
guarantees. Our theoretical results are exemplified in mean-risk portfolio
optimization as well as uncertainty quantification.
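One of the simplest tractable cases can be stated in closed form: for a piecewise max-affine loss loss(xi) = max_k (a_k^T xi + b_k) and a type-1 Wasserstein ball with l2 ground metric and unbounded support, the worst-case expectation equals the empirical expected loss plus eps times the loss's Lipschitz constant max_k ||a_k||_2. The sketch below evaluates that formula; it is a minimal illustration of this special case under the stated assumptions, not the paper's general finite convex program.

```python
import numpy as np

def worst_case_piecewise_affine(samples, A, b, eps):
    """Worst-case expectation of loss(xi) = max_k (A[k] @ xi + b[k])
    over a type-1 Wasserstein ball (l2 ground metric, radius eps)
    centered at the empirical distribution, assuming support R^m.
    In this special case the finite convex program collapses to
    eps * max_k ||A[k]||_2 + empirical mean loss.
    (Sketch of one tractable case only.)"""
    samples = np.asarray(samples, dtype=float)
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    pointwise = samples @ A.T + b             # (N, K) affine pieces
    emp_mean = pointwise.max(axis=1).mean()   # empirical expected loss
    lip = np.linalg.norm(A, axis=1).max()     # Lipschitz constant of the loss
    return emp_mean + eps * lip
```

With A = [[1], [-1]] and b = [0, 0] the loss is |xi|, so the robust value is the empirical mean absolute value inflated by eps, showing how the Wasserstein radius acts as a regularizer.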