Non-asymptotic confidence bounds for the optimal value of a stochastic program
We discuss a general approach to building non-asymptotic confidence bounds
for stochastic optimization problems. Our principal contribution is the
observation that a Sample Average Approximation of a problem supplies upper and
lower bounds for the optimal value of the problem which are essentially better
than the quality of the corresponding optimal solutions. At the same time, such
bounds are more reliable than "standard" confidence bounds obtained through the
asymptotic approach. We also discuss bounding the optimal value of MinMax
Stochastic Optimization and stochastically constrained problems. We conclude
with a simulation study illustrating the numerical behavior of the proposed
bounds.
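The idea sketched above — that independent SAA replications yield statistical lower bounds on the true optimal value (since the expected SAA optimum underestimates it), while evaluating a fixed candidate solution on fresh data yields upper bounds — can be illustrated on a toy problem. This is a minimal sketch, not the authors' construction: the quadratic objective, the batch sizes, and the 3-sigma deviation margins are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stochastic program: minimize E[(x - xi)^2] over x, with xi ~ N(2, 1).
# The true optimal value is Var(xi) = 1, attained at x* = E[xi] = 2.

def saa_solve(sample):
    # SAA of min_x mean((x - xi)^2): the minimizer is the sample mean,
    # and the SAA optimal value is the (biased) sample variance.
    x_hat = sample.mean()
    return ((sample - x_hat) ** 2).mean(), x_hat

# Lower bound: E[SAA optimal value] <= true optimal value, so averaging
# the SAA optima over M independent batches and subtracting a deviation
# term gives a statistical lower bound.
M, n = 50, 200
vals = np.array([saa_solve(rng.normal(2.0, 1.0, n))[0] for _ in range(M)])
lower = vals.mean() - 3 * vals.std(ddof=1) / np.sqrt(M)

# Upper bound: the true cost of any fixed candidate solution is at least
# the optimal value, so evaluate one SAA solution on an independent sample.
_, x_cand = saa_solve(rng.normal(2.0, 1.0, n))
fresh = rng.normal(2.0, 1.0, 5000)
costs = (x_cand - fresh) ** 2
upper = costs.mean() + 3 * costs.std(ddof=1) / np.sqrt(fresh.size)

print(lower, upper)   # the interval [lower, upper] should cover 1.0
```

Note the two samples are independent: reusing the training sample to evaluate the candidate would bias the upper bound downward.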
Bounding Optimality Gap in Stochastic Optimization via Bagging: Statistical Efficiency and Stability
We study a statistical method to estimate the optimal value and the
optimality gap of a given solution for stochastic optimization as an assessment
of the solution quality. Our approach is based on bootstrap aggregating, or
bagging, resampled sample average approximation (SAA). We show how this
approach leads to valid statistical confidence bounds for non-smooth
optimization. We also demonstrate its statistical efficiency and stability that
are especially desirable in limited-data situations, and compare these
properties with some existing methods. We present our theory that views SAA as
a kernel in an infinite-order symmetric statistic, which can be approximated
via bagging. We substantiate our theoretical findings with numerical results.
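The bagging construction described in this abstract — treating the SAA optimal value as a kernel and averaging it over bootstrap resamples of a single limited data set — can be sketched as follows. This is an illustrative toy, not the paper's procedure: the quadratic objective, the resample size k, and the number of resamples B are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: min_x E[(x - xi)^2] with xi ~ N(2, 1); true optimal value = 1.
data = rng.normal(2.0, 1.0, 300)   # one limited data set of size N

def saa_value(sample):
    # SAA kernel: solve the sample problem and return its optimal value.
    return ((sample - sample.mean()) ** 2).mean()

# Bagging: draw B bootstrap resamples of size k < N with replacement,
# apply the SAA kernel to each, and aggregate by averaging.
B, k = 500, 100
bag_vals = np.array([saa_value(rng.choice(data, size=k, replace=True))
                     for _ in range(B)])
bagged_estimate = bag_vals.mean()
print(bagged_estimate)   # should be close to the true optimal value 1.0
```

Averaging the kernel over many overlapping resamples is what stabilizes the estimate relative to a single SAA solve, which is the property the abstract highlights for limited-data situations.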