Distributionally robust optimization with applications to risk management
Many decision problems can be formulated as mathematical optimization models. While deterministic
optimization problems include only known parameters, real-life decision problems
almost invariably involve parameters that are subject to uncertainty. Failing to take this
uncertainty into consideration may yield decisions that lead to unexpected or even
catastrophic results if certain scenarios are realized.
While stochastic programming is a sound approach to decision making under uncertainty, it
assumes that the decision maker has complete knowledge about the probability distribution
that governs the uncertain parameters. This assumption is usually unjustified as, for most
realistic problems, the probability distribution must be estimated from historical data and
is therefore itself uncertain. Failure to take this distributional modeling risk into account
can result in unduly optimistic risk assessment and suboptimal decisions. Furthermore, for
most distributions, stochastic programs involving chance constraints cannot be solved using
polynomial-time algorithms.
In contrast to stochastic programming, distributionally robust optimization explicitly accounts
for distributional uncertainty. In this framework, it is assumed that the decision maker has
access to only partial distributional information, such as the first- and second-order moments
as well as the support. Subsequently, the problem is solved under the worst-case distribution
that complies with this partial information. This worst-case approach effectively immunizes
the problem against distributional modeling risk.
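This moment-based worst-case reasoning admits closed forms in simple cases. As an illustration (a standard result following from the one-sided Chebyshev, or Cantelli, inequality — not a contribution specific to this thesis), the worst-case Value-at-Risk of a loss with known mean and standard deviation but otherwise unknown distribution can be computed directly; a minimal Python sketch with made-up numbers:

```python
import math

def worst_case_var(mu, sigma, eps):
    """Worst-case Value-at-Risk at confidence level 1 - eps over ALL loss
    distributions with mean mu and standard deviation sigma.  By the
    one-sided Chebyshev (Cantelli) inequality, no such distribution puts
    more than eps probability mass above this threshold."""
    return mu + sigma * math.sqrt((1.0 - eps) / eps)

# A standardized loss (mean 0, standard deviation 1) at the 5% level:
wc_var = worst_case_var(0.0, 1.0, 0.05)   # sqrt(0.95 / 0.05) ≈ 4.36
# For comparison, a Gaussian assumption would give roughly 1.645 at the
# same level: acknowledging distributional ambiguity makes the risk
# figure substantially more conservative.
```

The gap between the two figures is exactly the distributional modeling risk that the worst-case approach guards against.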
The objective of this thesis is to investigate how robust optimization techniques can be used
for quantitative risk management. In particular, we study how the risk of large-scale derivative
portfolios can be computed as well as minimized, while making minimal assumptions about
the probability distribution of the underlying asset returns. Our interest in derivative portfolios
stems from the fact that careless investment in derivatives can yield large losses or even
bankruptcy. We show that by employing robust optimization techniques we are able to capture
the substantial risks involved in derivative investments. Furthermore, we investigate how
distributionally robust chance constrained programs can be reformulated or approximated as
tractable optimization problems. Throughout the thesis, we aim to derive tractable models
that are scalable to industrial-size problems.
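A representative example of such a tractable reformulation (a well-known construction from the distributionally robust optimization literature, not necessarily the specific model used in the thesis): a distributionally robust chance constraint with first- and second-order moment information is equivalent to a single second-order cone constraint. A sketch with illustrative numbers:

```python
import numpy as np

# Requiring P(a^T x <= b) >= 1 - eps for EVERY distribution of a with
# mean a_bar and covariance Sigma is equivalent (again via the one-sided
# Chebyshev bound) to the deterministic second-order cone constraint
#     a_bar^T x + sqrt((1 - eps) / eps) * ||Sigma^{1/2} x||_2 <= b.
a_bar = np.array([1.0, 1.0])
Sigma = np.array([[0.04, 0.0],
                  [0.0, 0.09]])
eps, b = 0.05, 4.0
x = np.array([1.0, 1.0])          # a candidate decision (made up)

kappa = np.sqrt((1.0 - eps) / eps)
L = np.linalg.cholesky(Sigma)     # Sigma = L @ L.T
lhs = a_bar @ x + kappa * np.linalg.norm(L.T @ x)
drcc_feasible = bool(lhs <= b)    # checkable in closed form, no sampling
```

Because the reformulated constraint is convex and conic, it can be handed to off-the-shelf solvers, which is what makes the approach scale.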
Scalable First-Order Methods for Robust MDPs
Robust Markov Decision Processes (MDPs) are a powerful framework for modeling
sequential decision-making problems with model uncertainty. This paper proposes
the first first-order framework for solving robust MDPs. Our algorithm
interleaves primal-dual first-order updates with approximate Value Iteration
updates. By carefully controlling the tradeoff between the accuracy and cost of
Value Iteration updates, we achieve an ergodic convergence rate for the best
choice of parameters on ellipsoidal and Kullback-Leibler s-rectangular
uncertainty sets, where S and A denote the numbers of states and actions,
respectively. Our dependence on the number of states and actions is
significantly better than that of pure
Value Iteration algorithms. In numerical experiments on ellipsoidal uncertainty
sets we show that our algorithm is significantly more scalable than
state-of-the-art approaches. Our framework is also the first one to solve
robust MDPs with s-rectangular KL uncertainty sets.
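As a toy illustration of the robust Bellman updates that such methods accelerate (simplified to a finite uncertainty set per state-action pair rather than the paper's ellipsoidal or KL sets; all numbers are made up):

```python
import numpy as np

# Toy robust MDP: 2 states, 2 actions, discount 0.9.  The uncertainty set
# for each (state, action) pair is a finite set of candidate transition
# distributions -- a deliberate simplification for illustration.
S, A, gamma = 2, 2, 0.9
r = np.array([[1.0, 0.0],
              [0.0, 1.0]])                       # r[s, a]
P_cand = {(s, a): [np.array([0.8, 0.2]),
                   np.array([0.3, 0.7])]
          for s in range(S) for a in range(A)}

def robust_bellman(V):
    """One robust value-iteration sweep: the adversary minimizes the
    expected continuation value over the uncertainty set, then the agent
    maximizes over actions."""
    Q = np.array([[r[s, a] + gamma * min(p @ V for p in P_cand[s, a])
                   for a in range(A)] for s in range(S)])
    return Q.max(axis=1)

V = np.zeros(S)
for _ in range(300):      # the operator is a gamma-contraction, so this converges
    V = robust_bellman(V)
```

The inner `min` is what becomes expensive for richer (e.g. ellipsoidal or KL) uncertainty sets, which is precisely the step the paper's first-order updates address.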
Robust optimization methods for chance constrained, simulation-based, and bilevel problems
The objective of robust optimization is to find solutions that are immune to the uncertainty of the parameters in a mathematical optimization problem. It requires that the constraints of a given problem be satisfied for all realizations of the uncertain parameters in a so-called uncertainty set. The robust version of a mathematical optimization problem is generally referred to as the robust counterpart problem. Robust optimization is popular because of the computational tractability of the robust counterpart for many classes of uncertainty sets, and its applicability to a wide range of practical problems. In this thesis, we propose robust optimization methodologies for different classes of optimization problems. In Chapter 2, we give a practical guide on robust optimization. In Chapter 3, we propose a new way to construct uncertainty sets for robust optimization using the available historical data. Chapter 4 proposes a robust optimization approach for simulation-based optimization problems. Finally, Chapter 5 proposes approximations of a specific class of robust and stochastic bilevel optimization problems by using modern robust optimization techniques.
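A concrete instance of a tractable robust counterpart (a textbook construction, not one of the thesis's own contributions): for a linear constraint with an ellipsoidal uncertainty set, the counterpart is a single second-order cone constraint. A sketch with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Robust linear constraint a^T x <= b with a in the ellipsoid
# {a_bar + rho * u : ||u||_2 <= 1}.  The robust counterpart is the single
# deterministic constraint  a_bar^T x + rho * ||x||_2 <= b.
a_bar = np.array([1.0, 2.0])
rho, b = 0.5, 10.0
x = np.array([2.0, 3.0])            # a candidate solution (made up)

robust_feasible = bool(a_bar @ x + rho * np.linalg.norm(x) <= b)

# Sanity check: the counterpart guarantees feasibility for every a in the
# uncertainty set, which we probe here by sampling the unit ball.
for _ in range(1000):
    u = rng.standard_normal(2)
    u /= max(np.linalg.norm(u), 1.0)   # keep u inside the unit ball
    assert (a_bar + rho * u) @ x <= b
```

The norm term is the price of robustness: the larger the ellipsoid radius `rho`, the more the nominal constraint is tightened.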