10 research outputs found

    A gradient formula for linear chance constraints under Gaussian distribution

    We provide an explicit gradient formula for linear chance constraints under a (possibly singular) multivariate Gaussian distribution. This formula allows one to reduce the calculation of gradients to the calculation of values of the same type of chance constraints (in smaller dimension and with different distribution parameters). This is an important aspect for the numerical solution of stochastic optimization problems, because existing efficient codes, e.g., for calculating singular Gaussian distributions or regular Gaussian probabilities of polyhedra, can be employed to calculate gradients at the same time. Moreover, the precision of gradients can be controlled by that of function values, which is a great advantage over using finite difference approximations. Finally, higher-order derivatives are easily derived explicitly. The use of the obtained formula is illustrated for an example of a transportation network with stochastic demands.
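    The reduction described above is classical for Gaussian distribution functions: each partial derivative of an n-dimensional Gaussian CDF equals a one-dimensional density times an (n-1)-dimensional Gaussian CDF with conditional parameters. A minimal sketch of that dimension-reduction idea, using SciPy's Genz-based CDF routine for the nondegenerate case (the numerical values below are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def mvn_cdf_gradient(z, mean, cov):
    """Gradient of the Gaussian CDF Phi(z; mean, cov) with respect to z.

    Each partial derivative reduces to a 1-D Gaussian density times an
    (n-1)-dimensional Gaussian CDF with conditional parameters: the same
    "values in smaller dimension with different distribution parameters"
    reduction the abstract describes.  Assumes a nondegenerate covariance.
    """
    z = np.asarray(z, float)
    mean = np.asarray(mean, float)
    cov = np.asarray(cov, float)
    n = len(z)
    grad = np.empty(n)
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        # Conditional distribution of the remaining components given xi_i = z_i.
        s_ii = cov[i, i]
        s_ri = cov[np.ix_(rest, [i])].ravel()
        cond_mean = mean[rest] + s_ri * (z[i] - mean[i]) / s_ii
        cond_cov = cov[np.ix_(rest, rest)] - np.outer(s_ri, s_ri) / s_ii
        grad[i] = norm.pdf(z[i], mean[i], np.sqrt(s_ii)) * \
            multivariate_normal.cdf(z[rest], cond_mean, cond_cov)
    return grad
```

    Checking the result against finite differences of `multivariate_normal.cdf` illustrates the abstract's point: the precision of the gradient is controlled by the precision of the same function-value code.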

    A mixed-integer stochastic nonlinear optimization problem with joint probabilistic constraints

    We illustrate the solution of a mixed-integer stochastic nonlinear optimization problem in an application of power management. In this application, a coupled system consisting of a hydro power station and a wind farm is considered. The objective is to satisfy the local energy demand and sell any surplus energy on a spot market over a short time horizon. Generation of wind energy is assumed to be random, so that demand satisfaction is modeled by a joint probabilistic constraint taking into account the multivariate distribution. The turbine is forced either to operate between given positive limits or to be shut down, which introduces additional binary decisions. The numerical solution procedure is presented and results are illustrated.
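    A toy sketch of the two modeling ingredients mentioned above: the semicontinuous turbine rule (a binary on/off decision with positive operating limits) and a sample-based estimate of the joint probability of demand satisfaction. All data below (horizon, limits, demand and wind figures) are hypothetical placeholders, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 24-hour horizon, sinusoidal demand profile,
# and 1000 sampled wind-generation scenarios (all in MW).
T = 24
demand = 50 + 10 * np.sin(np.arange(T) / T * 2 * np.pi)
wind = rng.normal(15, 5, size=(1000, T))

def feasible_dispatch(p_hydro, y, p_min=20.0, p_max=60.0):
    """Semicontinuous turbine rule: in each hour, either shut down
    (y = 0, p = 0) or run between positive limits (y = 1, p_min <= p <= p_max).
    In a MILP this is encoded as p_min * y <= p <= p_max * y with y binary."""
    p, y = np.asarray(p_hydro), np.asarray(y)
    return bool(np.all((y == 0) & (p == 0) |
                       (y == 1) & (p >= p_min) & (p <= p_max)))

def joint_satisfaction_prob(p_hydro, wind_scenarios, demand):
    """Monte-Carlo estimate of the JOINT probabilistic constraint:
    the probability that demand is met in *every* hour simultaneously,
    not hour by hour."""
    ok = (wind_scenarios + p_hydro) >= demand   # shape (scenarios, T)
    return ok.all(axis=1).mean()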

    Gradient formulae for nonlinear probabilistic constraints with Gaussian and Gaussian-like distributions

    Get PDF

    Probabilistic constraints represent a major model of stochastic optimization. A possible approach for solving probabilistically constrained optimization problems consists in applying nonlinear programming methods. In order to do so, one has to provide sufficiently precise approximations for values and gradients of probability functions. For linear probabilistic constraints under Gaussian distribution this can be successfully done by analytically reducing these values and gradients to values of Gaussian distribution functions and computing the latter, for instance, by Genz' code. For nonlinear models one may fall back on the spherical-radial decomposition of Gaussian random vectors and apply, for instance, De'ak's sampling scheme for the uniform distribution on the sphere in order to compute values of corresponding probability functions. The present paper demonstrates how the same sampling scheme can be used in order to simultaneously compute gradients of these probability functions. More precisely, we prove a formula representing these gradients in the Gaussian case as a certain integral over the sphere again. Later, the result is extended to alternative distributions with an emphasis on the multivariate Student (or T-) distribution

    Probabilistic Solutions of Conditional Optimization Problems

    Full text link
    Optimization problems with random parameters are studied. The traditional approach to their solution consists in finding a deterministic solution satisfying a certain criterion: optimization of the expected value of the objective function, optimization of the probability of attaining a certain level, or optimization of the quantile. In this review paper, we consider a solution of a stochastic optimization problem in the form of a random vector (or a random set). This is a relatively new class of problems, which is called "probabilistic optimization problems." It is noted that the application of probabilistic solutions in problems with random parameters is justified in the cases of multiple decision makers. Probabilistic optimization problems arise, for example, in the analysis of multicriteria problems; in this case, the weight coefficients of the importance of criteria are regarded as a random vector. We consider important examples of economic-mathematical models, which are optimization problems with a large number of decision makers: the problem of optimal choice based on the consumer's preference function, the route selection problem based on the optimization of the generalized cost of the trip, and the securities portfolio problem with a distribution of the investors' risk tolerance. Mathematical statements of these problems are given in the form of problems of probabilistic optimization. Some properties of the constructed models are studied; in particular, the expected value of the probabilistic solution of an optimization problem is analyzed. © 2020 Krasovskii Institute of Mathematics and Mechanics. All rights reserved

    A gradient formula for linear chance constraints under Gaussian distribution

    Get PDF
    We provide an explicit gradient formula for linear chance constraints under a (possibly singular) multivariate Gaussian distribution. This formula allows one to reduce the calculus of gradients to the calculus of values of the same type of chance constraints (in smaller dimension and with different distribution parameters). This is an important aspect for the numerical solution of stochastic optimization problems because existing efficient codes for e.g., calculating singular Gaussian distributions or regular Gaussian probabilities of polyhedra can be employed to calculate gradients at thesame time. Moreover, the precision of gradients can be controlled by that of function values, which is a great advantage over using finite difference approximations. Finally, higher order derivatives are easily derived explicitly. The use of the obtained formula is illustrated for an example of a stochastic transportation network
    corecore