Optimistic Robust Optimization With Applications To Machine Learning
Robust Optimization has traditionally taken a pessimistic, or worst-case,
viewpoint of uncertainty, motivated by a desire to find sets of optimal
policies that maintain feasibility under a variety of operating conditions. In
this paper, we explore an optimistic, or best-case, view of uncertainty and show
that it can be a fruitful approach. We show that these techniques can be used
to address a wide variety of problems. First, we apply our methods in the
context of robust linear programming, providing a method for reducing
conservatism in intuitive ways that encode economically realistic modeling
assumptions. Second, we look at problems in machine learning and find that this
approach is strongly connected to the existing literature. Specifically, we
provide a new interpretation of popular sparsity-inducing non-convex
regularization schemes. Additionally, we show that successful approaches for
dealing with outliers and noise can be interpreted as optimistic robust
optimization problems. Although many of the problems resulting from our
approach are non-convex, we find that the difference-of-convex algorithm (DCA)
and DCA-like optimization approaches can be intuitive and efficient.
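To make the DCA connection concrete, the following is a minimal sketch of DCA applied to least squares with a capped-L1 penalty, one of the sparsity-inducing non-convex regularizers of the kind the abstract alludes to. The penalty min(|w_i|, theta) splits as the difference of convex terms |w_i| - max(|w_i| - theta, 0); the parameter names (lam, theta) and the inner ISTA solver are illustrative choices, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dca_capped_l1(A, b, lam=0.1, theta=1.0, outer=20, inner=200):
    """DCA for 0.5*||Aw - b||^2 + lam * sum(min(|w_i|, theta)),
    split as g(w) - h(w) with g(w) = 0.5*||Aw - b||^2 + lam*||w||_1
    and h(w) = lam * sum(max(|w_i| - theta, 0))."""
    n = A.shape[1]
    w = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    for _ in range(outer):
        # DCA step: linearize h at the current w via a subgradient s
        s = lam * np.sign(w) * (np.abs(w) > theta)
        for _ in range(inner):             # ISTA on the convex surrogate g(w) - <s, w>
            grad = A.T @ (A @ w - b) - s
            w = soft_threshold(w - grad / L, lam / L)
    return w
```

Each outer iteration solves a convex weighted-lasso surrogate, so the scheme inherits the tractability of convex methods while handling the non-convex penalty, which is the intuition the abstract points to.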
Bootstrap Robust Prescriptive Analytics
We address the problem of prescribing an optimal decision in a framework
where its cost depends on uncertain problem parameters that need to be
learned from data. Earlier work by Bertsimas and Kallus (2014) transforms
classical machine learning methods that merely predict from supervised
training data into prescriptive methods that take optimal decisions specific
to a particular covariate context. Their prescriptive methods factor in
additional observed contextual information on a potentially large number of
covariates to take context-specific actions which are superior to any static
decision. Any naive
use of limited training data may, however, lead to gullible decisions
over-calibrated to one particular data set. In this paper, we borrow ideas from
distributionally robust optimization and the statistical bootstrap of Efron
(1982) to propose two novel prescriptive methods based on Nadaraya-Watson (NW)
and nearest-neighbors (NN) learning, which safeguard against overfitting and
lead to improved out-of-sample performance. Both resulting robust prescriptive
methods reduce to tractable convex optimization problems and enjoy a limited
disappointment on bootstrap data. We illustrate the data-driven decision-making
framework and our novel robustness notion on a small newsvendor problem as
well as a small portfolio allocation problem.
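As a rough illustration of the NW variant, here is a minimal sketch of a bootstrap-robustified Nadaraya-Watson newsvendor decision: the order quantity is chosen to minimize the worst kernel-weighted cost across bootstrap resamples. This simple min-max rule stands in for the paper's distributionally robust formulation rather than reproducing it, and the bandwidth h and cost parameters cu, co are hypothetical.

```python
import numpy as np

def nw_weights(X, x0, h=0.5):
    # Gaussian-kernel Nadaraya-Watson weights around covariate context x0
    k = np.exp(-0.5 * ((X - x0) / h) ** 2)
    return k / k.sum()

def newsvendor_cost(q, d, cu=1.0, co=0.5):
    # underage cost cu, overage cost co
    return cu * np.maximum(d - q, 0.0) + co * np.maximum(q - d, 0.0)

def robust_nw_decision(X, D, x0, n_boot=200, rng=0):
    """Pick the order quantity minimizing the worst NW-weighted cost
    across bootstrap resamples of the (covariate, demand) data."""
    rng = np.random.default_rng(rng)
    grid = np.linspace(D.min(), D.max(), 101)   # candidate order quantities
    worst = np.zeros_like(grid)
    n = len(D)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)        # one bootstrap resample
        w = nw_weights(X[idx], x0)
        costs = np.array([(w * newsvendor_cost(q, D[idx])).sum() for q in grid])
        worst = np.maximum(worst, costs)        # track worst-case cost per q
    return grid[np.argmin(worst)]               # min-max decision

# usage: q = robust_nw_decision(X=np.random.rand(50), D=np.random.rand(50), x0=0.5)
```

Guarding against the worst bootstrap resample is one blunt way to avoid decisions over-calibrated to a single data set; the paper's convex reformulations achieve a sharper, tractable version of this safeguard.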
Sinkhorn Distributionally Robust Optimization
We study distributionally robust optimization (DRO) with the Sinkhorn distance,
a variant of the Wasserstein distance based on entropic regularization. We derive
a convex programming dual reformulation for a general nominal distribution.
Compared with Wasserstein DRO, it is computationally tractable for a larger
class of loss functions, and its worst-case distribution is more reasonable for
practical applications. To solve the dual reformulation, we develop a
stochastic mirror descent algorithm using biased gradient oracles and analyze
its convergence rate. Finally, we provide numerical examples using synthetic
and real data to demonstrate its superior performance.
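For intuition only, the sketch below estimates a log-sum-exp dual bound of the kind that arises in Sinkhorn DRO, with a Gaussian kernel standing in for the entropic reference measure and a fixed dual multiplier lam; the loss, the sampling scheme, and all parameter names (lam, eps, rho, m) are assumptions for illustration, not the paper's exact reformulation.

```python
import numpy as np

def loss(z, x):
    # illustrative newsvendor-style loss, not from the paper
    return np.maximum(z - x, 0.0) + 0.5 * np.maximum(x - z, 0.0)

def sinkhorn_dro_bound(x, z_nominal, lam=1.0, eps=0.1, rho=0.05, m=64, seed=0):
    """Monte-Carlo estimate of a dual bound of the form
    lam*rho + lam*eps * mean_i log E_{z ~ K(z_i, .)}[exp(loss(z, x)/(lam*eps))],
    with a Gaussian kernel K around each nominal sample z_i."""
    rng = np.random.default_rng(seed)
    vals = []
    for z in z_nominal:
        zs = z + np.sqrt(eps) * rng.standard_normal(m)   # kernel samples around z_i
        u = loss(zs, x) / (lam * eps)
        # numerically stable log-mean-exp
        vals.append(lam * eps * (np.log(np.mean(np.exp(u - u.max()))) + u.max()))
    return lam * rho + np.mean(vals)
```

Because the inner log-expectation is estimated from finitely many kernel samples, any gradient taken through this estimate is biased, which is precisely the setting the paper's stochastic mirror descent analysis with biased gradient oracles is built to handle.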
…