Cooperative Data-Driven Distributionally Robust Optimization
We study a class of multiagent stochastic optimization problems where the objective is to minimize the expected value of a function that depends on a random variable. The probability distribution of the random variable is unknown to the agents. The agents aim to cooperatively find, using their collected data, a solution with guaranteed out-of-sample performance. The approach is to formulate a data-driven distributionally robust optimization problem using Wasserstein ambiguity sets, which turns out to be equivalent to a convex program. We reformulate the latter as a distributed optimization problem and identify a convex-concave augmented Lagrangian, whose saddle points are in correspondence with the optimizers, provided a min-max interchangeability criterion is met. Our distributed algorithm design then consists of the saddle-point dynamics associated with the augmented Lagrangian. We formally establish that the trajectories converge asymptotically to a saddle point and, hence, an optimizer of the problem. Finally, we identify classes of functions that meet the min-max interchangeability criterion.
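The saddle-point dynamics described in this abstract can be sketched on a toy equality-constrained problem. The quadratic objective, step size, and penalty weight below are illustrative choices, not taken from the paper; the point is only to show descent in the primal variable and ascent in the multiplier on an augmented Lagrangian.

```python
import numpy as np

# Toy problem: minimize ||x||^2 subject to sum(x) = 1.
# Augmented Lagrangian: L(x, lam) = ||x||^2 + lam*(sum(x) - 1) + (rho/2)*(sum(x) - 1)^2
rho, eta = 1.0, 0.05   # penalty weight and step size (illustrative)
x = np.zeros(2)
lam = 0.0
for _ in range(5000):
    g = np.sum(x) - 1.0                        # constraint violation
    x = x - eta * (2.0 * x + lam + rho * g)    # gradient descent in x
    lam = lam + eta * g                        # gradient ascent in lam
# The unique saddle point is x = (0.5, 0.5), lam = -1.
```

The augmentation term (rho/2)*g^2 does not change the saddle point but damps the oscillations that plain primal-dual dynamics exhibit around it.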
Data-driven Distributionally Robust Optimization Using the Wasserstein Metric: Performance Guarantees and Tractable Reformulations
We consider stochastic programs where the distribution of the uncertain
parameters is only observable through a finite training dataset. Using the
Wasserstein metric, we construct a ball in the space of (multivariate and
non-discrete) probability distributions centered at the uniform distribution on
the training samples, and we seek decisions that perform best in view of the
worst-case distribution within this Wasserstein ball. The state-of-the-art
methods for solving the resulting distributionally robust optimization problems
rely on global optimization techniques, which quickly become computationally
excruciating. In this paper we demonstrate that, under mild assumptions, the
distributionally robust optimization problems over Wasserstein balls can in
fact be reformulated as finite convex programs---in many interesting cases even
as tractable linear programs. Leveraging recent measure concentration results,
we also show that their solutions enjoy powerful finite-sample performance
guarantees. Our theoretical results are exemplified in mean-risk portfolio
optimization as well as uncertainty quantification.
Comment: 42 pages, 10 figures
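A concrete special case of the tractable reformulations described here: for a linear loss and a type-1 Wasserstein ball with Euclidean ground metric, the worst-case expectation over the ball reduces to the empirical mean plus the radius times the Euclidean norm of the loss gradient. The sample size, radius, and loss vector below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.array([1.0, -2.0])          # linear loss: ell(xi) = a @ xi
xi = rng.normal(size=(100, 2))     # training samples
eps = 0.3                          # Wasserstein radius

# Closed form for sup over the ball of E[a @ xi]:
closed_form = (xi @ a).mean() + eps * np.linalg.norm(a)

# The supremum is attained by transporting every sample a distance eps
# in the direction a/||a||, which costs exactly eps on average.
shifted = xi + eps * a / np.linalg.norm(a)
attained = (shifted @ a).mean()
```

For more general piecewise-linear losses the same duality yields a finite linear program rather than a closed form, which is the tractability result the abstract refers to.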
Robust risk aggregation with neural networks
We consider settings in which the distribution of a multivariate random
variable is partly ambiguous. We assume the ambiguity lies on the level of the
dependence structure, and that the marginal distributions are known.
Furthermore, a current best guess for the distribution, called reference
measure, is available. We work with the set of distributions that are both
close to the given reference measure in a transportation distance (e.g. the
Wasserstein distance), and additionally have the correct marginal structure.
The goal is to find upper and lower bounds for integrals of interest with
respect to distributions in this set. The described problem appears naturally
in the context of risk aggregation. When aggregating different risks, the
marginal distributions of these risks are known and the task is to quantify
their joint effect on a given system. This is typically done by applying a
meaningful risk measure to the sum of the individual risks. For this purpose,
the stochastic interdependencies between the risks need to be specified. In
practice, however, models of this dependence structure are subject to
relatively high model ambiguity. The contribution of this paper is twofold:
Firstly, we derive a dual representation of the considered problem and prove
that strong duality holds. Secondly, we propose a generally applicable and
computationally feasible method, which relies on neural networks, in order to
numerically solve the derived dual problem. The latter method is tested on a
number of toy examples, before it is finally applied to perform robust risk
aggregation in a real-world instance.
Comment: Revised version. Accepted for publication in "Mathematical Finance".
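The paper's neural-network dual solver is too involved for a short snippet, but the unconstrained extreme of the problem admits a simple sketch: when only the marginals are fixed and no proximity to a reference measure is imposed, the comonotone (sorted) coupling maximizes integrals of supermodular functions such as the second moment of the sum. The marginal distributions and sample size below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
# Empirical stand-ins for two known marginals
x = rng.lognormal(size=10000)
y = rng.exponential(size=10000)

def second_moment_of_sum(x, y):
    s = x + y
    return np.mean(s ** 2)

# Random pairing (an arbitrary coupling with the same marginals)
independent = second_moment_of_sum(x, rng.permutation(y))
# Comonotone pairing: sort both marginals and match ranks
comonotone = second_moment_of_sum(np.sort(x), np.sort(y))
```

By the rearrangement inequality, the sorted pairing maximizes the cross term E[XY] among all couplings of these empirical marginals, so `comonotone >= independent` holds exactly. The Wasserstein constraint in the paper tightens this bound by additionally keeping the coupling close to the reference measure.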
Distributionally Robust Optimization for Sequential Decision Making
The distributionally robust Markov Decision Process (MDP) approach asks for a
distributionally robust policy that achieves the maximal expected total reward
under the most adversarial distribution of uncertain parameters. In this paper,
we study distributionally robust MDPs where the ambiguity sets for the
uncertain parameters have a format that can easily incorporate both the
uncertainty's generalized-moment information and statistical-distance
information. In this way, we generalize existing work on distributionally
robust MDPs with generalized-moment-based and statistical-distance-based
ambiguity sets, incorporating information from the former class (such as
moments and dispersions) into the latter class, which critically depends on
empirical observations of the uncertain parameters. We show that, under this
format of ambiguity sets, the
resulting distributionally robust MDP remains tractable under mild technical
conditions. To be more specific, a distributionally robust policy can be
constructed by solving a sequence of one-stage convex optimization subproblems
- …
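A minimal sketch of the kind of one-stage robust subproblem referred to above, using an L1-ball ambiguity set around a nominal transition distribution rather than the paper's combined moment/distance format. The tiny MDP, the radius, and the greedy inner solver (move mass from the best next states to the worst one) are illustrative assumptions:

```python
import numpy as np

def worst_case_expectation(p_hat, v, delta):
    """min p @ v over the simplex intersected with ||p - p_hat||_1 <= delta."""
    p = p_hat.astype(float).copy()
    i_min = np.argmin(v)
    add = min(delta / 2.0, 1.0 - p[i_min])
    p[i_min] += add                      # push mass onto the worst next state
    budget = add
    for i in np.argsort(v)[::-1]:        # take mass from the best next states
        if i == i_min:
            continue
        take = min(p[i], budget)
        p[i] -= take
        budget -= take
        if budget <= 1e-12:
            break
    return p @ v

# Illustrative 2-state, 2-action MDP: rewards r[s, a], nominal kernels P[s, a, s']
r = np.array([[1.0, 0.5], [0.0, 0.8]])
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
gamma = 0.9

def robust_value_iteration(delta, iters=200):
    V = np.zeros(2)
    for _ in range(iters):
        Q = np.array([[r[s, a] + gamma * worst_case_expectation(P[s, a], V, delta)
                       for a in range(2)] for s in range(2)])
        V = Q.max(axis=1)                # robust Bellman backup
    return V

V_nominal = robust_value_iteration(0.0)  # delta = 0 recovers the nominal MDP
V_robust = robust_value_iteration(0.2)
```

Since the inner minimization can only lower each expected continuation value, the robust value function is bounded above by the nominal one, which the example confirms numerically.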