Monte Carlo Bounding Techniques for Determining Solution Quality in Stochastic Programs
Operations Research Letters, 24, pp. 47-56
Assessing policy quality in multi-stage stochastic programming
Solving a multi-stage stochastic program with a large number of scenarios and a moderate-to-large number of stages can be computationally challenging. We develop two Monte Carlo-based methods that exploit special structures to generate feasible policies. To establish the quality of a given policy, we employ a Monte Carlo-based lower bound (for minimization problems) and use it to construct a confidence interval on the policy's optimality gap. The confidence interval can be formed in a number of ways depending on how the expected solution value of the policy is estimated and combined with the lower-bound estimator. Computational results suggest that a confidence interval formed by a tree-based gap estimator may be an effective method for assessing policy quality. Variance reduction is achieved by using common random numbers in the gap estimator.
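As a toy illustration of this kind of gap estimator (a single-stage newsvendor stand-in, not the paper's multi-stage setting; the costs, demand distribution, and candidate policy below are assumptions for the sketch), each batch solves an SAA problem for a lower-bound term and evaluates the candidate policy on the same scenarios, so common random numbers cancel much of the shared noise:

```python
import numpy as np

rng = np.random.default_rng(0)
h, b = 1.0, 4.0                       # holding / backorder costs (illustrative)
crit = b / (b + h)                    # newsvendor critical ratio

def cost(q, d):
    """Per-scenario cost of ordering q against demand d."""
    return h * np.maximum(q - d, 0.0) + b * np.maximum(d - q, 0.0)

x_hat = 80.0                          # candidate policy whose quality we assess
m, n = 30, 200                        # number of batches, scenarios per batch

gaps = np.empty(m)
for i in range(m):
    d = rng.exponential(scale=100.0, size=n)           # one scenario batch
    q_saa = np.sort(d)[int(np.ceil(crit * n)) - 1]     # SAA-optimal order quantity
    # Common random numbers: evaluate x_hat and the SAA optimum on the SAME
    # batch, so each batch gap is nonnegative and the estimator has low variance.
    gaps[i] = cost(x_hat, d).mean() - cost(q_saa, d).mean()

g_bar, se = gaps.mean(), gaps.std(ddof=1) / np.sqrt(m)
upper = g_bar + 1.699 * se            # one-sided 95% bound; t(0.95, 29) ~ 1.699
print(f"estimated optimality gap: {g_bar:.2f}, 95% upper bound: {upper:.2f}")
```

Because the SAA optimum minimizes the batch sample average, every batch gap is nonnegative by construction, which is what makes the one-sided interval on the optimality gap valid.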
Validating Sample Average Approximation Solutions with Negatively Dependent Batches
Sample-average approximations (SAA) are a practical means of finding
approximate solutions of stochastic programming problems involving an extremely
large (or infinite) number of scenarios. SAA can also be used to find estimates
of a lower bound on the optimal objective value of the true problem which, when
coupled with an upper bound, provides confidence intervals for the true optimal
objective value and valuable information about the quality of the approximate
solutions. Specifically, the lower bound can be estimated by solving multiple
SAA problems (each obtained using a particular sampling method) and averaging
the obtained objective values. State-of-the-art methods for lower-bound
estimation generate batches of scenarios for the SAA problems independently. In
this paper, we describe sampling methods that produce negatively dependent
batches, thus reducing the variance of the sample-averaged lower bound
estimator and increasing its usefulness in defining a confidence interval for
the optimal objective value. We provide conditions under which the new sampling
methods can reduce the variance of the lower bound estimator, and present
computational results to verify that our scheme can reduce the variance
significantly, by comparison with the traditional Latin hypercube approach.
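One simple way to induce negative dependence across batches (an illustration of the idea, not the paper's construction) is antithetic pairing: build consecutive batches from uniforms u and 1 - u. The sketch below, with an assumed newsvendor instance, compares the variance of the batch-averaged lower-bound estimator with and without pairing:

```python
import numpy as np

rng = np.random.default_rng(1)
h, b = 1.0, 4.0                       # illustrative newsvendor costs
crit = b / (b + h)

def cost(q, d):
    return h * np.maximum(q - d, 0.0) + b * np.maximum(d - q, 0.0)

def saa_optimal_value(u):
    """Optimal value of the newsvendor SAA built from a batch of uniforms u."""
    d = -100.0 * np.log(1.0 - u)                        # Exp(mean 100) via inverse CDF
    q = np.sort(d)[int(np.ceil(crit * len(d))) - 1]     # SAA-optimal order quantity
    return cost(q, d).mean()

def lower_bound(n_batches, n, antithetic):
    """Average of batch SAA optima -- a statistical lower bound on the true optimum."""
    vals = []
    for _ in range(n_batches // 2):
        u = rng.random(n)
        vals.append(saa_optimal_value(u))
        # Antithetic pairing makes consecutive batches negatively dependent.
        vals.append(saa_optimal_value(1.0 - u) if antithetic
                    else saa_optimal_value(rng.random(n)))
    return np.mean(vals)

reps = 200
indep = [lower_bound(10, 50, antithetic=False) for _ in range(reps)]
anti = [lower_bound(10, 50, antithetic=True) for _ in range(reps)]
print(f"var(independent) = {np.var(indep):.3f}, var(antithetic) = {np.var(anti):.3f}")
```

When the batch optimal value responds roughly monotonically to the underlying uniforms, the paired batches are negatively correlated and the averaged lower-bound estimator has smaller variance; the conditions in the paper make this precise.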
Bounding Optimality Gap in Stochastic Optimization via Bagging: Statistical Efficiency and Stability
We study a statistical method to estimate the optimal value, and the
optimality gap of a given solution for stochastic optimization as an assessment
of the solution quality. Our approach is based on bootstrap aggregating, or
bagging, resampled sample average approximation (SAA). We show how this
approach leads to valid statistical confidence bounds for non-smooth
optimization. We also demonstrate its statistical efficiency and stability that
are especially desirable in limited-data situations, and compare these
properties with some existing methods. We present our theory that views SAA as
a kernel in an infinite-order symmetric statistic, which can be approximated
via bagging. We substantiate our theoretical findings with numerical results.
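A minimal sketch of the bagging idea on a toy problem min_x E[(x - xi)^2] (the quadratic instance, data-generating distribution, and candidate solution are all illustrative assumptions): resample the fixed dataset with replacement, solve the SAA on each resample, and aggregate the resampled optimal values into estimates of the optimal value and of a given solution's gap:

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(loc=3.0, scale=2.0, size=60)    # limited data (the abstract's setting)

def saa_value(sample):
    """Optimal value of min_x mean((x - xi)^2) over the sample: attained at its mean."""
    return np.mean((sample - sample.mean()) ** 2)

x_hat = 2.0                                       # candidate solution to assess
B, n = 500, 40                                    # bootstrap replications, resample size

# Bagged (bootstrap-aggregated) estimate of the optimal value.
bagged = np.mean([saa_value(rng.choice(data, size=n, replace=True))
                  for _ in range(B)])

# Gap estimate: empirical cost of x_hat minus the bagged optimal-value estimate.
gap_est = np.mean((x_hat - data) ** 2) - bagged
print(f"bagged optimal-value estimate: {bagged:.3f}, gap estimate: {gap_est:.3f}")
```

Averaging over many resamples of the same small dataset is what gives the estimator its stability relative to splitting the data into a few independent batches.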
Gradient-Based Stochastic Optimization Methods in Bayesian Experimental Design
Optimal experimental design (OED) seeks experiments expected to yield the most useful data for some purpose. In practical circumstances where experiments are time-consuming or resource-intensive, OED can yield enormous savings. We pursue OED for nonlinear systems from a Bayesian perspective, with the goal of choosing experiments that are optimal for parameter inference. Our objective in this context is the expected information gain in model parameters, which in general can only be estimated using Monte Carlo methods. Maximizing this objective thus becomes a stochastic optimization problem. This paper develops gradient-based stochastic optimization methods for the design of experiments on a continuous parameter space. Given a Monte Carlo estimator of expected information gain, we use infinitesimal perturbation analysis to derive gradients of this estimator. We are then able to formulate two gradient-based stochastic optimization approaches: (i) Robbins-Monro stochastic approximation, and (ii) sample average approximation combined with a deterministic quasi-Newton method. A polynomial chaos approximation of the forward model accelerates objective and gradient evaluations in both cases. We discuss the implementation of these optimization methods, then conduct an empirical comparison of their performance. To demonstrate design in a nonlinear setting with partial differential equation forward models, we use the problem of sensor placement for source inversion. Numerical results yield useful guidelines on the choice of algorithm and sample sizes, assess the impact of estimator bias, and quantify tradeoffs of computational cost versus solution quality and robustness.
Funding: United States. Air Force Office of Scientific Research (Computational Mathematics Program); National Science Foundation (U.S.) (Award ECCS-1128147).
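The Robbins-Monro approach can be sketched on a deliberately simple surrogate: maximize U(d) = E[-(d - xi)^2] using one pathwise (IPA-style) gradient sample per iteration. The quadratic objective, noise model, and step sizes are assumptions for the sketch, not the paper's expected-information-gain objective:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy design problem: maximize U(d) = E[-(d - xi)^2], xi ~ N(1, 1); optimum d* = 1.
def pathwise_grad(d, xi):
    """Pathwise (IPA-style) derivative of -(d - xi)^2 with respect to d."""
    return -2.0 * (d - xi)

d = 5.0                                            # initial design
for k in range(2000):
    xi = rng.normal(1.0, 1.0)                      # one Monte Carlo draw per iteration
    d += (2.0 / (k + 10)) * pathwise_grad(d, xi)   # Robbins-Monro steps a_k ~ 1/k

print(f"Robbins-Monro estimate of the optimal design: {d:.3f}")
```

The step sizes satisfy the classical conditions (sum diverges, sum of squares converges), so the iterate converges to the maximizer despite each gradient being a single noisy sample; the SAA-plus-quasi-Newton alternative instead fixes the samples up front and optimizes the resulting deterministic average.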
The stochastic vehicle routing problem : a literature review, part II : solution methods
Building on the work of Gendreau et al. (Oper Res 44(3):469–477, 1996), and complementing the first part of this survey, we review the solution methods used for the past 20 years in the scientific literature on stochastic vehicle routing problems (SVRP). We describe the methods and indicate how they are used when dealing with stochastic vehicle routing problems. Keywords: vehicle routing (VRP), stochastic programming, SVRP
Sample Complexity of Sample Average Approximation for Conditional Stochastic Optimization
In this paper, we study a class of stochastic optimization problems, referred to as Conditional Stochastic Optimization (CSO), in the form of $\min_{x \in \mathcal{X}} \mathbb{E}_{\xi} f_\xi\big(\mathbb{E}_{\eta|\xi}[g_\eta(x,\xi)]\big)$, which finds a wide spectrum of applications including portfolio selection, reinforcement learning, robust learning, causal inference and so on. Assuming availability of samples from the distribution $\mathbb{P}(\xi)$ and samples from the conditional distribution $\mathbb{P}(\eta|\xi)$, we establish the sample complexity of the sample average approximation (SAA) for CSO, under a variety of structural assumptions, such as Lipschitz continuity, smoothness, and error bound conditions. We show that the total sample complexity improves from $\mathcal{O}(d/\epsilon^4)$ to $\mathcal{O}(d/\epsilon^3)$ when assuming smoothness of the outer function, and further to $\mathcal{O}(1/\epsilon^2)$ when the empirical function satisfies the quadratic growth condition. We also establish the sample complexity of a modified SAA, when $\xi$ and $\eta$ are independent. Several numerical experiments further support our theoretical findings.
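The nested structure of the CSO objective can be made concrete with a toy instance (f, g, the distributions, and the independence of eta from xi are all illustrative assumptions, corresponding to the paper's independent-case modified SAA). Because the outer function is nonlinear, averaging only m_inner conditional samples biases the estimator, which is exactly why the inner sample size enters the SAA sample complexity:

```python
import numpy as np

rng = np.random.default_rng(4)

def cso_saa_objective(x, n_outer, m_inner):
    """Nested SAA estimate of E_xi[ f( E_{eta|xi}[ g_eta(x, xi) ] ) ] for the toy
    CSO instance f(y) = y^2, g_eta(x, xi) = x - xi + eta, with xi, eta ~ N(0, 1)."""
    xi = rng.normal(size=n_outer)
    eta = rng.normal(size=(n_outer, m_inner))   # samples from P(eta | xi)
    inner = (x - xi)[:, None] + eta             # g evaluated for each (xi, eta) pair
    return np.mean(inner.mean(axis=1) ** 2)     # outer f applied to inner averages

# At x = 0 the true objective is E[xi^2] = 1; the nonlinear outer f makes the
# nested estimator biased upward by Var(eta)/m_inner = 1/m_inner.
ests = {}
for m in (1, 10, 100):
    ests[m] = np.mean([cso_saa_objective(0.0, 2000, m) for _ in range(5)])
    print(f"m_inner={m:>3}: estimate ~ {ests[m]:.3f} (true value 1, bias ~ {1/m:.2f})")
```

The printed estimates shrink toward the true value as m_inner grows, mirroring the smoothness-dependent trade-off between inner and outer sample sizes analyzed in the paper.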
Keywords: stochastic optimization, sample average approximation, large deviations theory
Sampling-Based Algorithms for Two-Stage Stochastic Programs and Applications
In this dissertation, we present novel sampling-based algorithms for solving two-stage stochastic programming problems. Sampling-based methods provide an efficient approach to solving large-scale stochastic programs where uncertainty is possibly defined on continuous support. When sampling-based methods are employed, the process is usually viewed in two steps: sampling and optimization. When these two steps are performed in sequence, the overall process can be computationally very expensive. In this dissertation, we utilize the framework of internal sampling, where sampling and optimization steps are performed concurrently. The dissertation comprises two parts. In the first part, we design a new sampling technique for solving two-stage stochastic linear programs with continuous recourse. We incorporate this technique within an internal-sampling framework of stochastic decomposition. In the second part of the dissertation, we design an internal-sampling-based algorithm for solving two-stage stochastic mixed-integer programs with continuous recourse. We design a new stochastic branch-and-cut procedure for solving this class of optimization problems. Finally, we show the efficiency of this method for solving large-scale practical problems arising in logistics and finance.