Wasserstein robust combinatorial optimization problems
This paper discusses a class of combinatorial optimization problems with
uncertain costs in the objective function. It is assumed that a sample of the
cost realizations is available, which defines an empirical probability
distribution for the random cost vector. A Wasserstein ball, centered at the
empirical distribution, is used to define an ambiguity set of probability
distributions. A solution minimizing the Conditional Value at Risk for a worst
probability distribution in the Wasserstein ball is computed. The complexity of
the problem is investigated. Exact and approximate solution methods for various
support sets are proposed. Some known results for the Wasserstein robust
shortest path problem are generalized and refined.
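The risk measure involved can be illustrated directly on the sample: the sketch below (a minimal illustration, not the paper's algorithm) computes the empirical Conditional Value at Risk of one fixed solution's cost over the observed cost realizations, taking CVaR at tail level alpha as the average of the worst alpha-fraction of the sample.

```python
import numpy as np

def empirical_cvar(costs, alpha=0.1):
    """Empirical Conditional Value at Risk at tail level alpha:
    the mean of the worst alpha-fraction of the cost realizations."""
    costs = np.sort(np.asarray(costs, dtype=float))[::-1]  # worst first
    k = max(1, int(np.ceil(alpha * len(costs))))
    return costs[:k].mean()

# Cost of one fixed combinatorial solution under 5 sampled cost vectors.
sample_costs = [10.0, 12.0, 9.0, 30.0, 11.0]
print(empirical_cvar(sample_costs, alpha=0.2))  # worst 20% of 5 samples -> 30.0
```

A distributionally robust solution would minimize this quantity not for the empirical distribution itself but for the worst distribution in the Wasserstein ball around it.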
A study of distributionally robust mixed-integer programming with Wasserstein metric: on the value of incomplete data
This study addresses a class of mixed-integer linear programming (MILP)
problems that involve uncertainty in the objective function parameters. The
parameters are assumed to form a random vector, whose probability distribution
can only be observed through a finite training data set. Unlike most of the
related studies in the literature, we also consider uncertainty in the
underlying data set. The data uncertainty is described by a set of linear
constraints for each random sample, and the uncertainty in the distribution
(for a fixed realization of data) is defined using a type-1 Wasserstein ball
centered at the empirical distribution of the data. The overall problem is
formulated as a three-level distributionally robust optimization (DRO) problem.
First, we prove that the three-level problem admits a single-level MILP
reformulation, if the class of loss functions is restricted to biaffine
functions. Second, it turns out that for several particular forms of data
uncertainty, the outlined problem can be solved reasonably fast by leveraging
the nominal MILP problem. Finally, we conduct a computational study, where the
out-of-sample performance of our model and computational complexity of the
proposed MILP reformulation are explored numerically for several application
domains.
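For intuition on type-1 Wasserstein ambiguity sets, a well-known closed form is useful: with an unbounded support and an affine loss, the worst-case expectation over the ball equals the empirical mean plus the radius times the dual norm of the cost vector. The sketch below evaluates that bound; it is a deliberate simplification that ignores the paper's additional data uncertainty and its biaffine loss setting.

```python
import numpy as np

def wasserstein_worst_case_affine(c, samples, eps):
    """Worst-case expectation of the affine loss xi -> c @ xi over a
    type-1 Wasserstein ball of radius eps centered at the empirical
    distribution (unbounded support, Euclidean ground metric):
    empirical mean loss plus eps times the dual norm of c."""
    samples = np.asarray(samples, dtype=float)
    empirical = samples.mean(axis=0) @ np.asarray(c, dtype=float)
    return empirical + eps * np.linalg.norm(c)  # Euclidean norm is self-dual

c = [1.0, 2.0]
samples = [[1.0, 0.0], [3.0, 2.0]]
print(wasserstein_worst_case_affine(c, samples, eps=0.5))  # 4 + 0.5*sqrt(5)
```

With eps = 0 the bound collapses to the sample-average value, which is one way to read the paper's "value of data" question: how much the guarantee degrades as the ambiguity radius grows.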
Approximation schemes for network, clustering and queueing models
In this dissertation, we consider important optimization problems that arise in three different domains, namely network models, clustering problems, and queueing models. More specifically, we focus on devising efficient traffic routing models, deriving an exact convex reformulation of the well-known K-means clustering problem, and studying Naor's classical observable queues under uncertain parameters. In the following chapters, we discuss these problems in detail, design efficient and tractable solution methodologies, and assess the quality of the proposed solutions.
In the first part of the dissertation, we analyze a limited-adaptability traffic routing model for the Austin road network. Routing a person through a traffic network presents a tension between selecting a fixed route that is easy to navigate and selecting an aggressively adaptive route that minimizes the expected travel time. We develop non-aggressive adaptive routes in the middle ground, seeking the best of both extremes. Specifically, these routes still adapt to changing traffic conditions, but we limit the total number of allowable adjustments. This improves the user experience by providing a continuum of options between saving travel time and reducing navigation effort. We design strategies to model single and multiple route adjustments, and investigate enumerative techniques to solve these models. We also develop tractable algorithms with easily computable lower and upper bounds to handle real-size traffic data. We finally present numerical results highlighting the benefit of different levels of adaptability in terms of reducing the expected travel time.
In the second part of the dissertation, we study the well-known classical K-means clustering problem. We show that the popular K-means clustering problem can equivalently be reformulated as a conic program of polynomial size. The resulting convex optimization problem is NP-hard, but amenable to a tractable semidefinite programming (SDP) relaxation that is tighter than the current SDP relaxation schemes in the literature. In contrast to the existing schemes, our proposed SDP formulation gives rise to solutions that can be leveraged to identify the clusters. We devise a new approximation algorithm for K-means clustering that utilizes the improved formulation and empirically illustrate its superiority over the state-of-the-art solution schemes.
Finally, we study an extension of Naor's analysis [74] of the joining-or-balking problem in observable M/M/1 queues, relaxing the principal assumption of deterministic arrival and service rates. While all the Markovian assumptions still hold, we assume the arrival and service rates are uncertain and study this problem under stochastic and distributionally robust settings. In the former setting, the exact rates are unknown, but we assume their distributions are known to all decision makers. We derive the optimal joining threshold strategies from the perspective of an individual customer, a social optimizer, and a revenue maximizer, such that the expected profit rate is maximized. In the distributionally robust setting, we go a step further and assume the true distributions are unknown and the decision makers have access to only a finite set of training samples. Similar to the stochastic setting, we derive optimal thresholds such that the worst-case expected profit rates are maximized. Finally, we compare our observations, both theoretically and numerically, with Naor's classical results.
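The deterministic baseline that the last part relaxes is Naor's classical threshold rule for the observable M/M/1 queue: an arriving customer joins only if the expected waiting cost of doing so does not exceed the service reward. A brief sketch under standard notation (reward R per service, waiting cost rate C, service rate mu; parameter names are ours, not the dissertation's):

```python
import math

def naor_individual_threshold(R, C, mu):
    """Naor's individually optimal threshold for an observable M/M/1
    queue: a customer who sees n customers in the system joins iff
    (n + 1) * C / mu <= R, i.e. iff n < floor(R * mu / C)."""
    return math.floor(R * mu / C)

# Reward R = 10 per served customer, waiting cost C = 2 per unit time,
# service rate mu = 1: join whenever fewer than 5 customers are present.
print(naor_individual_threshold(R=10.0, C=2.0, mu=1.0))  # -> 5
```

Under uncertain rates this closed-form threshold no longer applies directly, which is precisely where the stochastic and distributionally robust analyses of the dissertation take over.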
Data-driven Distributionally Robust Optimization over Time
Stochastic Optimization (SO) is a classical approach for optimization under
uncertainty that typically requires knowledge about the probability
distribution of uncertain parameters. As the latter is often unknown,
Distributionally Robust Optimization (DRO) provides a strong alternative that
determines the best guaranteed solution over a set of distributions (ambiguity
set). In this work, we present an approach for DRO over time that uses online
learning and scenario observations arriving as a data stream to learn more
about the uncertainty. Our robust solutions adapt over time and reduce the cost
of protection with shrinking ambiguity. For various kinds of ambiguity sets,
the robust solutions converge to the SO solution. Our algorithm achieves the
optimization and learning goals without solving the DRO problem exactly at any
step. We also provide a regret bound for the quality of the online strategy
which converges at a rate of O(1/sqrt(T)), where T is the
number of iterations. Furthermore, we illustrate the effectiveness of our
procedure by numerical experiments on mixed-integer optimization instances from
popular benchmark libraries and give practical examples stemming from
telecommunications and routing. Our algorithm is able to solve the DRO over
time problem significantly faster than standard reformulations.
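The effect of protection cost shrinking with the ambiguity set can be sketched numerically. The toy stream below tracks a scalar uncertain cost and uses a hypothetical radius schedule proportional to 1/sqrt(t); this is only an illustration of the shrinking-ambiguity idea, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

true_mean = 3.0  # mean of the (unknown) scenario distribution
c0 = 2.0         # hypothetical initial ambiguity radius

running_sum, bounds = 0.0, []
for t in range(1, 201):
    running_sum += rng.normal(true_mean, 1.0)  # new scenario observation
    empirical_mean = running_sum / t
    eps_t = c0 / np.sqrt(t)                # shrinking ambiguity radius
    bounds.append(empirical_mean + eps_t)  # robust upper bound on the cost

# As data accumulates and the radius shrinks, the robust bound
# approaches the stochastic-optimization (sample-average) value.
print(round(bounds[0], 3), round(bounds[-1], 3))
```

The gap between the robust bound and the empirical mean is exactly the radius term, so it vanishes at the same 1/sqrt(t) rate as the schedule, mirroring the convergence behavior described in the abstract.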
- …