
    Distributionally Robust Optimization for Sequential Decision Making

    The distributionally robust Markov Decision Process (MDP) approach asks for a distributionally robust policy that achieves the maximal expected total reward under the most adversarial distribution of the uncertain parameters. In this paper, we study distributionally robust MDPs in which the ambiguity sets for the uncertain parameters are specified in a format that can easily encode both generalized-moment and statistical-distance information about the uncertainty. In this way, we generalize existing work on distributionally robust MDPs with generalized-moment-based and statistical-distance-based ambiguity sets, incorporating information from the former class, such as moments and dispersions, into the latter class, which depends critically on empirical observations of the uncertain parameters. We show that, under this format of ambiguity set, the resulting distributionally robust MDP remains tractable under mild technical conditions. More specifically, a distributionally robust policy can be constructed by solving a sequence of one-stage convex optimization subproblems.
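    For concreteness, the sketch below runs robust value iteration with the adversary restricted to a finite set of candidate transition kernels. This is a hedged illustration only: in the paper, the ambiguity sets combine generalized-moment and statistical-distance information, so each one-stage robust backup is a convex program rather than a minimum over finitely many kernels, and the names below (robust_value_iteration, P_candidates) are mine, not the paper's.

```python
import numpy as np

def robust_value_iteration(P_candidates, R, gamma=0.95, tol=1e-8, max_iter=10_000):
    """Robust value iteration against a finite set of candidate models.

    P_candidates: list of arrays of shape (A, S, S), candidate transition kernels.
    R: array of shape (S, A), immediate rewards.
    """
    S, A = R.shape
    V = np.zeros(S)
    for _ in range(max_iter):
        # Q[k, s, a]: value of action a in state s under candidate model k.
        Q = np.stack([R + gamma * np.einsum('ast,t->sa', P, V) for P in P_candidates])
        Q_robust = Q.min(axis=0)       # adversary: worst case over candidate models
        V_new = Q_robust.max(axis=1)   # agent: best action against the worst case
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    return V, Q_robust.argmax(axis=1)  # robust value function and greedy policy
```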

    A distributionally robust perspective on uncertainty quantification and chance constrained programming

    The objective of uncertainty quantification is to certify that a given physical, engineering, or economic system satisfies multiple safety conditions with high probability. A more ambitious goal is to actively influence the system so as to guarantee and maintain its safety, a scenario that can be modeled through a chance constrained program. In this paper we assume that the parameters of the system are governed by an ambiguous distribution that is only known to belong to an ambiguity set characterized through generalized moment bounds and structural properties such as symmetry, unimodality, or independence patterns. We delineate the watershed between tractability and intractability in ambiguity-averse uncertainty quantification and chance constrained programming. Using tools from distributionally robust optimization, we derive explicit conic reformulations for tractable problem classes and suggest efficiently computable conservative approximations for intractable ones.
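    For orientation, here is a classical special case (not quoted from the abstract above): when the ambiguity set fixes only the mean \mu and covariance \Sigma of the uncertain vector \xi and the safety condition is a single affine inequality, the worst-case chance constraint admits an exact second-order-cone reformulation.

```latex
\inf_{\mathbb{P}:\ \mathbb{E}_{\mathbb{P}}[\xi]=\mu,\ \mathrm{Cov}_{\mathbb{P}}[\xi]=\Sigma}
  \mathbb{P}\big[a(x)^{\top}\xi \le b(x)\big] \;\ge\; 1-\epsilon
\quad\Longleftrightarrow\quad
a(x)^{\top}\mu \;+\; \sqrt{\tfrac{1-\epsilon}{\epsilon}}\,
  \big\|\Sigma^{1/2} a(x)\big\|_{2} \;\le\; b(x)
```

    Adding structural properties such as symmetry or unimodality shrinks the ambiguity set and can tighten the constant in front of the norm term; delineating when such conic reformulations remain available in that richer setting is what the paper addresses.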

    Data-driven Distributionally Robust Optimization Using the Wasserstein Metric: Performance Guarantees and Tractable Reformulations

    We consider stochastic programs where the distribution of the uncertain parameters is only observable through a finite training dataset. Using the Wasserstein metric, we construct a ball in the space of (multivariate and non-discrete) probability distributions centered at the uniform distribution on the training samples, and we seek decisions that perform best in view of the worst-case distribution within this Wasserstein ball. The state-of-the-art methods for solving the resulting distributionally robust optimization problems rely on global optimization techniques, which quickly become computationally excruciating. In this paper we demonstrate that, under mild assumptions, the distributionally robust optimization problems over Wasserstein balls can in fact be reformulated as finite convex programs, and in many interesting cases even as tractable linear programs. Leveraging recent measure concentration results, we also show that their solutions enjoy powerful finite-sample performance guarantees. Our theoretical results are exemplified in mean-risk portfolio optimization as well as in uncertainty quantification.
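    To make the finite-convex-program claim concrete, the sketch below treats the simplest case rather than the paper's mean-risk model: a purely linear loss \ell(x, \xi) = -x^{\top}\xi over a 1-Wasserstein ball with Euclidean ground metric, for which the worst-case expectation reduces to the empirical expectation plus a norm penalty on the decision. The sample data, radius, and long-only constraint are invented for illustration.

```python
import numpy as np
import cvxpy as cp

# Hedged sketch: Wasserstein DRO with a linear portfolio loss l(x, xi) = -x'xi.
# For a loss that is Lipschitz in xi, the worst case over a 1-Wasserstein ball
# (Euclidean ground metric) equals the empirical expectation plus a dual-norm
# penalty on x -- a special case of the finite convex reformulation.
np.random.seed(0)
N, d = 50, 5                                   # samples, assets (illustrative)
xi_hat = 0.05 + 0.10 * np.random.randn(N, d)   # synthetic return samples
rho = 0.02                                     # Wasserstein radius (assumption)

x = cp.Variable(d)
empirical_loss = -cp.sum(xi_hat @ x) / N       # (1/N) * sum_i -x' xi_i
worst_case_loss = empirical_loss + rho * cp.norm(x, 2)

problem = cp.Problem(cp.Minimize(worst_case_loss),
                     [cp.sum(x) == 1, x >= 0])  # long-only budget constraint
problem.solve()
print("robust portfolio weights:", np.round(x.value, 3))
```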

    Data-Driven Chance Constrained Optimization under Wasserstein Ambiguity Sets

    We present a data-driven approach for distributionally robust chance constrained optimization problems (DRCCPs). We consider the case where the decision maker has access to a finite number of samples, or realizations, of the uncertainty. The chance constraint is then required to hold for all distributions that are close to the empirical distribution constructed from the samples, where the distance between two distributions is measured by the Wasserstein metric. We first reformulate DRCCPs under data-driven Wasserstein ambiguity sets and a general class of constraint functions. When the feasibility set of the chance constraint program is replaced by its convex inner approximation, we present a convex reformulation of the program and show its tractability when the constraint function is affine in both the decision variable and the uncertainty. For constraint functions concave in the uncertainty, we show that a cutting-surface algorithm converges to an approximate solution of the convex inner approximation of DRCCPs. Finally, for constraint functions convex in the uncertainty, we compare the feasibility set with those of other sample-based approaches for chance constrained programs.
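    A common route to the convex inner approximation mentioned above, sketched here in generic notation rather than quoted from the paper, is to replace the distributionally robust chance constraint by a worst-case conditional value-at-risk (CVaR) constraint; any decision feasible for the CVaR constraint is feasible for the chance constraint, so the approximation is conservative.

```latex
\inf_{t\in\mathbb{R}}\;\Big\{\, t \;+\; \tfrac{1}{\epsilon}
  \sup_{\mathbb{P}\in\mathcal{B}_{\rho}(\hat{\mathbb{P}}_{N})}
  \mathbb{E}_{\mathbb{P}}\big[(f(x,\xi)-t)^{+}\big] \Big\} \;\le\; 0
\quad\Longrightarrow\quad
\inf_{\mathbb{P}\in\mathcal{B}_{\rho}(\hat{\mathbb{P}}_{N})}
  \mathbb{P}\big[f(x,\xi)\le 0\big] \;\ge\; 1-\epsilon
```

    Here \mathcal{B}_{\rho}(\hat{\mathbb{P}}_{N}) denotes the Wasserstein ball of radius \rho around the empirical distribution of the N samples, and the left-hand constraint is convex in x whenever f(x, \xi) is convex in x.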