
    Conditional Risk Mappings

    We introduce an axiomatic definition of a conditional convex risk mapping. By employing the techniques of conjugate duality we derive properties of conditional risk mappings. In particular, we prove a representation theorem for conditional risk mappings in terms of conditional expectations. We also develop dynamic programming relations for multistage optimization problems involving conditional risk mappings.
    Keywords: Risk, Convex Analysis, Conjugate Duality, Stochastic Optimization, Dynamic Programming, Multi-Stage Programming
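    As a rough sketch of the kind of conditional-expectation representation the abstract refers to (the notation here is illustrative and not taken from the paper), a coherent conditional risk mapping typically admits a dual form of the shape:

    ```latex
    % Illustrative dual representation of a coherent conditional risk mapping.
    % \mathfrak{A} denotes an assumed set of admissible probability measures
    % (densities) and \mathcal{G} the conditioning sigma-algebra.
    \rho\bigl(Z \mid \mathcal{G}\bigr)
      \;=\;
      \operatorname*{ess\,sup}_{\mu \,\in\, \mathfrak{A}}
      \;\mathbb{E}_{\mu}\!\bigl[\, Z \mid \mathcal{G} \,\bigr]
    ```

    The conjugate-duality machinery mentioned in the abstract is what delivers representations of this general shape; the precise form of the admissible set is specific to the paper's axioms.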

    Markov risk mappings and risk-sensitive optimal stopping

    In contrast to the analytic approach to risk for Markov chains based on transition risk mappings, we introduce a probabilistic setting based on a novel concept of regular conditional risk mapping with Markov update rule. We confirm that the Markov property holds for the standard measures of risk used in practice such as Value at Risk and Average Value at Risk. We analyse the dual representation for convex Markovian risk mappings and a representation in terms of their acceptance sets. The Markov property is formulated in several equivalent versions including a strong version, opening up additional risk-sensitive optimisation problems such as optimal stopping with exercise lag and optimal prediction. We demonstrate how such problems can be reduced to a risk-sensitive optimal stopping problem with intermediate costs, and derive the dynamic programming equations for the latter. Finally, we show how our results can be extended to partially observable Markov processes.
    Comment: 29 pages. New: extension of one-step ahead Markov property to entire "future", Markov property in terms of acceptance sets, VaR and AVaR examples, convex Markov risk mappings, application to optimal stopping with exercise lag. Notable changes: Stopping cost in the partially observable optimal stopping problem can depend on the unobservable state
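    For concreteness, here is a minimal empirical sketch of the two standard risk measures the abstract mentions, Value at Risk and Average Value at Risk. The function names and estimators are my own illustration, not the paper's constructions:

    ```python
    import numpy as np

    def value_at_risk(losses, alpha):
        """Empirical Value at Risk at level alpha: the alpha-quantile of losses."""
        return np.quantile(losses, alpha)

    def average_value_at_risk(losses, alpha):
        """Empirical Average Value at Risk (a.k.a. CVaR): mean of the losses at
        or above VaR_alpha.  This tail average agrees with the usual AVaR
        definition when the alpha-quantile carries no probability atom."""
        losses = np.asarray(losses, dtype=float)
        var = value_at_risk(losses, alpha)
        return losses[losses >= var].mean()
    ```

    The paper's contribution concerns when such mappings, applied conditionally along a Markov chain, satisfy a Markov update rule; the static estimators above are only the base objects.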

    Semi-proximal Mirror-Prox for Nonsmooth Composite Minimization

    We propose a new first-order optimisation algorithm to solve high-dimensional non-smooth composite minimisation problems. Typical examples of such problems have an objective that decomposes into a non-smooth empirical risk part and a non-smooth regularisation penalty. The proposed algorithm, called Semi-Proximal Mirror-Prox, leverages the Fenchel-type representation of one part of the objective while handling the other part of the objective via linear minimization over the domain. The algorithm stands in contrast with more classical proximal gradient algorithms with smoothing, which require the computation of proximal operators at each iteration and can therefore be impractical for high-dimensional problems. We establish the theoretical convergence rate of Semi-Proximal Mirror-Prox, which exhibits the optimal complexity bounds, i.e. $O(1/\epsilon^2)$, for the number of calls to the linear minimization oracle. We present promising experimental results showing the merits of the approach in comparison with competing methods.
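    The semi-proximal variant itself is involved, but its ancestor, the classical Mirror-Prox (extragradient) scheme, can be sketched in its Euclidean form for a bilinear saddle-point problem min_x max_y x^T A y over box constraints. Everything below (problem instance, step size, function names) is an illustrative assumption, not the paper's algorithm:

    ```python
    import numpy as np

    def extragradient_saddle(A, steps=2000, eta=0.1):
        """Euclidean Mirror-Prox (extragradient) for min_x max_y x^T A y with
        x, y constrained to the unit box.  The monotone operator is
        F(x, y) = (A y, -A^T x); the ergodic (averaged) iterates converge."""
        n, m = A.shape
        x, y = np.full(n, 0.5), np.full(m, 0.5)
        clip = lambda z: np.clip(z, 0.0, 1.0)
        xs, ys = [], []
        for _ in range(steps):
            # extrapolation step: a gradient step from the current point
            xh = clip(x - eta * (A @ y))
            yh = clip(y + eta * (A.T @ x))
            # correction step: re-step from the ORIGINAL point using the
            # operator evaluated at the extrapolated point
            x = clip(x - eta * (A @ yh))
            y = clip(y + eta * (A.T @ xh))
            xs.append(x)
            ys.append(y)
        return np.mean(xs, axis=0), np.mean(ys, axis=0)
    ```

    The two-step structure (extrapolate, then correct using the extrapolated gradient) is what distinguishes Mirror-Prox from plain projected gradient; the semi-proximal variant replaces one of the projections by a call to a linear minimization oracle.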

    Optimization of Risk Measures

    We consider optimization problems involving coherent risk measures. We derive necessary and sufficient conditions of optimality for these problems, and we discuss the nature of the nonanticipativity constraints. Next, we introduce dynamic risk measures, and we formulate multistage optimization problems involving these measures. Conditions similar to dynamic programming equations are developed. The theoretical considerations are illustrated with many examples of mean-risk models applied in practice.
    Keywords: risk measures, mean-risk models, duality, optimization, dynamic programming
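    A dynamic risk measure of the kind the abstract describes is evaluated by backward recursion, nesting a one-step conditional risk mapping stage by stage. The sketch below uses the mean-upper-semideviation mapping as the one-step measure and assumes identical branching probabilities at every node; both choices are illustrative assumptions, not taken from the paper:

    ```python
    import numpy as np

    def one_step_risk(values, probs, kappa=0.5):
        """Mean-upper-semideviation mapping rho(Z) = E[Z] + kappa*E[(Z - E[Z])_+],
        a coherent risk measure for kappa in [0, 1]."""
        values = np.asarray(values, dtype=float)
        probs = np.asarray(probs, dtype=float)
        mean = probs @ values
        return mean + kappa * (probs @ np.maximum(values - mean, 0.0))

    def nested_risk(tree, probs, kappa=0.5):
        """Dynamic-programming evaluation of the nested risk measure
        rho_1(c_1 + rho_2(c_2 + ...)) on a scenario tree.  A leaf is a
        terminal cost; an internal node is (stage cost, list of subtrees)."""
        if isinstance(tree, (int, float)):
            return float(tree)
        cost, children = tree
        child_vals = [nested_risk(child, probs, kappa) for child in children]
        return cost + one_step_risk(child_vals, probs, kappa)
    ```

    The recursion mirrors the "conditions similar to dynamic programming equations" mentioned in the abstract: the value at each node is the stage cost plus the one-step risk of the children's values.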