Data-driven Distributionally Robust Optimization Using the Wasserstein Metric: Performance Guarantees and Tractable Reformulations
We consider stochastic programs where the distribution of the uncertain
parameters is only observable through a finite training dataset. Using the
Wasserstein metric, we construct a ball in the space of (multivariate and
non-discrete) probability distributions centered at the uniform distribution on
the training samples, and we seek decisions that perform best in view of the
worst-case distribution within this Wasserstein ball. The state-of-the-art
methods for solving the resulting distributionally robust optimization problems
rely on global optimization techniques, which quickly become computationally
excruciating. In this paper we demonstrate that, under mild assumptions, the
distributionally robust optimization problems over Wasserstein balls can in
fact be reformulated as finite convex programs---in many interesting cases even
as tractable linear programs. Leveraging recent measure concentration results,
we also show that their solutions enjoy powerful finite-sample performance
guarantees. Our theoretical results are exemplified in mean-risk portfolio
optimization as well as uncertainty quantification. (Comment: 42 pages, 10 figures)
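The ambiguity set in this line of work is a ball, in Wasserstein distance, around the empirical distribution of the training samples. As a minimal illustration (not code from the paper), for two one-dimensional empirical distributions with equally many atoms, the 1-Wasserstein distance reduces to the mean absolute difference of the sorted samples:

```python
# Illustrative sketch only: the 1-Wasserstein distance between two
# equal-size 1-D empirical distributions equals the mean absolute
# difference of the sorted samples (optimal transport matches order
# statistics on the line).

def wasserstein1_empirical(xs, ys):
    """1-Wasserstein distance between two equal-size 1-D samples."""
    assert len(xs) == len(ys), "equal sample sizes assumed for simplicity"
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)

# A Wasserstein ball of radius eps around the empirical distribution of
# `train` contains every distribution within that transport cost of it.
train = [0.0, 1.0, 2.0, 3.0]
shifted = [0.5, 1.5, 2.5, 3.5]
print(wasserstein1_empirical(train, shifted))  # each atom moves 0.5 -> 0.5
```

The general multivariate case requires solving a transportation problem, but this one-dimensional special case conveys how the metric measures the cost of moving probability mass.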
Semi-supervised Learning based on Distributionally Robust Optimization
We propose a novel method for semi-supervised learning (SSL) based on
data-driven distributionally robust optimization (DRO) using optimal transport
metrics. Our proposed method improves the generalization error by using the
unlabeled data to restrict the support of the worst-case distribution in our
DRO formulation. We enable the implementation of our DRO formulation by
proposing a stochastic gradient descent algorithm that makes the training
procedure easy to implement. We demonstrate that our semi-supervised DRO
method improves the generalization error over natural supervised
procedures and state-of-the-art SSL estimators. Finally, we include a
discussion of the large-sample behavior of the optimal uncertainty region in
the DRO formulation, which exposes important aspects such as the role
of dimension reduction in SSL.
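The abstract mentions training via stochastic gradient descent. The paper's DRO objective is more involved, but the basic SGD loop it builds on can be sketched in a few lines; the least-squares loss, learning rate, and data below are all hypothetical choices for illustration:

```python
import random

# Minimal SGD sketch (illustrative only, not the paper's DRO objective).
# Fits w in the model y ~ w * x by stochastic gradient descent on the
# squared loss (w*x - y)^2, sampling one data point per step.

def sgd_fit(data, lr=0.05, epochs=200, seed=0):
    rng = random.Random(seed)
    w = 0.0
    for _ in range(epochs):
        x, y = rng.choice(data)
        grad = 2.0 * (w * x - y) * x  # d/dw of (w*x - y)^2
        w -= lr * grad
    return w

data = [(x, 2.0 * x) for x in [1.0, 2.0, 3.0]]  # true slope is 2
print(round(sgd_fit(data), 2))  # converges to 2.0
```

In the DRO setting the per-step gradient would come from the worst-case (transport-perturbed) loss rather than the nominal one, but the outer loop has the same shape.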
Distributionally Robust Quickest Change Detection using Wasserstein Uncertainty Sets
The problem of quickest detection of a change in the distribution of a
sequence of independent observations is considered. It is assumed that the
pre-change distribution is known (accurately estimated), while the only
information about the post-change distribution is through a (small) set of
labeled data. This post-change data is used in a data-driven minimax robust
framework, where an uncertainty set for the post-change distribution is
constructed using the Wasserstein distance from the empirical distribution of
the data. The robust change detection problem is studied in an asymptotic
setting where the mean time to false alarm goes to infinity, for which the
least favorable post-change distribution within the uncertainty set is the one
that minimizes the Kullback-Leibler divergence between the post- and the
pre-change distributions. It is shown that the density corresponding to the
least favorable distribution is an exponentially tilted version of the
pre-change density and can be calculated efficiently. A Cumulative Sum (CuSum)
test based on the least favorable distribution, which is referred to as the
distributionally robust (DR) CuSum test, is then shown to be asymptotically
robust. The results are extended to the case where the post-change uncertainty
set is a finite union of multiple Wasserstein uncertainty sets, corresponding
to multiple post-change scenarios, each with its own labeled data. The proposed
method is validated using synthetic and real data examples.
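The CuSum test referenced above can be sketched with its standard recursion, W_n = max(0, W_{n-1} + log(f1(x_n)/f0(x_n))), raising an alarm when the statistic crosses a threshold. The Gaussian pre- and post-change densities below are an assumed example, not the paper's least-favorable (exponentially tilted) distribution:

```python
# Illustrative CuSum sketch with assumed unit-variance Gaussian densities
# f0 = N(0, 1) (pre-change) and f1 = N(1, 1) (post-change). The statistic
# follows W_n = max(0, W_{n-1} + log(f1(x_n)/f0(x_n))); an alarm is raised
# when W_n crosses the threshold b.

def loglik_ratio(x, mu0=0.0, mu1=1.0, sigma=1.0):
    # log N(x; mu1, sigma) - log N(x; mu0, sigma); normalizers cancel.
    return ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)

def cusum_alarm(observations, threshold):
    w = 0.0
    for n, x in enumerate(observations, start=1):
        w = max(0.0, w + loglik_ratio(x))
        if w >= threshold:
            return n  # first time the statistic crosses the threshold
    return None  # no alarm raised

# Samples near mean 0, then a shift to mean 1 starting at index 6.
obs = [0.1, -0.2, 0.0, 0.2, -0.1, 1.1, 0.9, 1.2, 1.0, 1.1]
print(cusum_alarm(obs, threshold=2.0))  # alarm at n = 9
```

In the distributionally robust version described in the abstract, f1 would be replaced by the least-favorable density in the Wasserstein uncertainty set, which the paper shows is an exponential tilt of the pre-change density.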
Distributionally Robust Optimization: A Review
The concepts of risk aversion, chance-constrained optimization, and robust
optimization have developed significantly over the last decade. The statistical
learning community has also witnessed rapid theoretical and applied growth by
relying on these concepts. A modeling framework called distributionally robust
optimization (DRO) has recently received significant attention in both the
operations research and statistical learning communities. This paper surveys
the main concepts and contributions to DRO, and its relationships with robust
optimization, risk aversion, chance-constrained optimization, and function
regularization.