49 research outputs found

    Assessing Solution Quality in Stochastic Programs

    Get PDF
    Determining whether a solution is of high quality (optimal or near optimal) is a fundamental question in optimization theory and algorithms. In this paper, we develop Monte Carlo sampling-based procedures for assessing solution quality in stochastic programs. Quality is defined via the optimality gap, and our procedures' output is a confidence interval on this gap. We review a multiple-replications procedure that requires the solution of, say, 30 optimization problems, and we then present a result that justifies a computationally simplified single-replication procedure requiring the solution of only one optimization problem. Even though the single-replication procedure is significantly less demanding computationally, the resulting confidence interval can have low coverage probability for small sample sizes on some problems. We provide variants of this procedure that require two replications instead of one and that perform better empirically. We present computational results for a newsvendor problem and for two-stage stochastic linear programs from the literature. We also discuss when the procedures perform well and when they fail, and we provide preliminary guidelines for selecting a candidate solution.
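    A minimal sketch of the multiple-replications idea, using the newsvendor problem mentioned above (the demand distribution, cost parameters, and candidate solution are illustrative assumptions, not the paper's): each replication solves one SAA problem exactly via the critical-ratio quantile and contributes one nonnegative gap estimate, and a one-sided t-interval on the mean gap follows.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        c, p = 5.0, 9.0                        # assumed unit cost and selling price

        def cost(x, d):                        # negative profit of ordering x under demand d
            return c * x - p * np.minimum(x, d)

        def saa_optimal_value(d):
            """Solve the newsvendor SAA exactly: the optimizer is a demand quantile."""
            x = np.quantile(d, (p - c) / p)    # critical-ratio solution
            return cost(x, d).mean()

        x_hat = 80.0                           # candidate solution to be assessed
        M, n, alpha = 30, 500, 0.05            # replications, sample size, risk level
        gaps = np.empty(M)
        for k in range(M):
            d = rng.exponential(100.0, n)      # assumed demand distribution
            gaps[k] = cost(x_hat, d).mean() - saa_optimal_value(d)  # >= 0 by construction

        ub = gaps.mean() + stats.t.ppf(1 - alpha, M - 1) * gaps.std(ddof=1) / np.sqrt(M)
        print(f"approx. {100 * (1 - alpha):.0f}% one-sided CI on the optimality gap: [0, {ub:.3f}]")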

    Approximations of Semicontinuous Functions with Applications to Stochastic Optimization and Statistical Estimation

    Get PDF
    Upper semicontinuous (usc) functions arise in the analysis of maximization problems, distributionally robust optimization, and function identification, which includes many problems of nonparametric statistics. We establish that every usc function is the limit of a hypo-converging sequence of piecewise affine functions of the difference-of-max type and illustrate the resulting algorithmic possibilities in the context of approximate solution of infinite-dimensional optimization problems. In an effort to quantify the ease with which classes of usc functions can be approximated by finite collections, we provide upper and lower bounds on covering numbers for bounded sets of usc functions under the Attouch-Wets distance. This result is applied in the context of stochastic optimization problems defined over spaces of usc functions. We establish confidence regions for optimal solutions based on sample average approximations and examine the accompanying rates of convergence. Examples from nonparametric statistics illustrate the results.
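    For concreteness, a piecewise affine function of the difference-of-max type can be written in the following standard form (the notation here is ours, a sketch rather than the paper's exact definition):

        f(x) = \max_{i=1,\dots,p} \bigl( \langle a_i, x \rangle + \alpha_i \bigr)
               - \max_{j=1,\dots,q} \bigl( \langle b_j, x \rangle + \beta_j \bigr),
        \qquad a_i, b_j \in \mathbb{R}^n, \quad \alpha_i, \beta_j \in \mathbb{R}.

    The approximation result then states that any usc function is the limit, in the hypo-convergence sense, of a sequence of functions of this form.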

    Monte Carlo Estimation for Imprecise Probabilities: Basic Properties

    Get PDF
    We describe Monte Carlo methods for estimating lower envelopes of expectations of real random variables. We prove that the estimation bias is negative and that its absolute value shrinks with increasing sample size. We discuss fairly practical techniques for proving strong consistency of the estimators and use them to prove the consistency of an example from the literature. We also provide an example where consistency fails.
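    The core estimator is simple enough to sketch: take the minimum of sample means over the models in a (here finite) set of distributions. A minimal illustration, with an assumed integrand and three assumed models, showing empirically that the estimator is biased low and that the bias shrinks with the sample size:

        import numpy as np

        rng = np.random.default_rng(1)
        f = lambda x: np.sin(x) + 0.1 * x**2          # real random variable of interest
        samplers = [                                  # three assumed candidate models
            lambda n: rng.normal(0.0, 1.0, n),
            lambda n: rng.normal(0.5, 1.5, n),
            lambda n: rng.uniform(-2.0, 2.0, n),
        ]

        def lower_envelope_estimate(n):
            """Minimum over models of the sample mean of f; biased low for finite n."""
            return min(f(s(n)).mean() for s in samplers)

        for n in (10, 100, 10_000):                   # the negative bias shrinks with n
            avg = np.mean([lower_envelope_estimate(n) for _ in range(200)])
            print(f"n = {n:6d}: average estimate {avg:.4f}  (true lower envelope 0.1 here)")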

    Validating Sample Average Approximation Solutions with Negatively Dependent Batches

    Full text link
    Sample-average approximations (SAA) are a practical means of finding approximate solutions to stochastic programming problems involving an extremely large (or infinite) number of scenarios. SAA can also be used to estimate a lower bound on the optimal objective value of the true problem which, when coupled with an upper bound, provides a confidence interval for the true optimal objective value and valuable information about the quality of the approximate solutions. Specifically, the lower bound can be estimated by solving multiple SAA problems (each obtained using a particular sampling method) and averaging the resulting objective values. State-of-the-art methods for lower-bound estimation generate the batches of scenarios for the SAA problems independently. In this paper, we describe sampling methods that produce negatively dependent batches, thus reducing the variance of the sample-averaged lower bound estimator and increasing its usefulness in defining a confidence interval for the optimal objective value. We provide conditions under which the new sampling methods reduce the variance of the lower bound estimator, and we present computational results verifying that our scheme can reduce the variance significantly in comparison with the traditional Latin hypercube approach.
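    A minimal sketch of the lower-bound estimator with dependence across batches; the paper's sampling schemes are more elaborate, and this illustration simply uses antithetic batch pairs, one elementary way to induce negative dependence between batches (the newsvendor model and all parameters are assumptions):

        import numpy as np

        rng = np.random.default_rng(2)
        c, p = 5.0, 9.0                               # assumed newsvendor parameters

        def saa_optimal_value(u):
            """Newsvendor SAA built from uniforms u via the inverse transform."""
            d = -100.0 * np.log1p(-u)                 # Exp(mean 100) demand scenarios
            x = np.quantile(d, (p - c) / p)           # critical-ratio SAA solution
            return (c * x - p * np.minimum(x, d)).mean()

        B, n = 15, 200                                # batch pairs, scenarios per batch
        pair_means = np.empty(B)
        for b in range(B):
            u = rng.random(n)                         # a batch and its antithetic partner
            pair_means[b] = 0.5 * (saa_optimal_value(u) + saa_optimal_value(1.0 - u))

        se = pair_means.std(ddof=1) / np.sqrt(B)      # pairs are mutually independent
        print(f"lower-bound estimate {pair_means.mean():.3f} (std err {se:.3f} over pairs)")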

    Data-driven scenario generation for two-stage stochastic programming

    Get PDF
    Optimisation under uncertainty has always been a focal point within the Process Systems Engineering (PSE) research agenda. In particular, the efficient manipulation of large amounts of data for the uncertain parameters is a crucial condition for effectively tackling stochastic programming problems. In this context, this work proposes a new data-driven Mixed-Integer Linear Programming (MILP) model for the Distribution & Moment Matching Problem (DMP). For cases with multiple uncertain parameters, a copula-based simulation of initial scenarios is employed as a preliminary step. Moreover, integrating clustering methods with DMP in the proposed model is shown to enhance computational performance. Finally, we compare the proposed approach with state-of-the-art scenario generation methodologies. Through a number of case studies, we highlight the benefits in the quality of the generated scenario trees by evaluating the corresponding stochastic solutions.
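    As a simplified stand-in for the MILP, moment matching can be sketched as choosing probabilities on a fixed set of candidate scenarios so the tree reproduces the first few sample moments; the data, scenario grid, and weighting below are illustrative, and a continuous nonnegative least-squares relaxation replaces the paper's MILP:

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(3)
        data = rng.lognormal(0.0, 0.5, 5_000)                 # assumed uncertain-parameter data
        scen = np.quantile(data, np.linspace(0.02, 0.98, 9))  # 9 candidate scenarios

        orders = range(1, 5)                                  # match the first four moments
        target = np.array([np.mean(data**k) for k in orders])
        A = np.vstack([scen**k / t for k, t in zip(orders, target)])  # scaled moment rows
        A = np.vstack([A, 50.0 * np.ones_like(scen)])         # heavily weighted sum-to-one row
        b = np.append(np.ones(len(target)), 50.0)

        prob, _ = nnls(A, b)                                  # nonnegative least squares
        prob /= prob.sum()                                    # renormalise exactly

        ratios = [float((scen**k) @ prob / t) for k, t in zip(orders, target)]
        print("scenario probabilities:", np.round(prob, 3))
        print("tree/data moment ratios:", np.round(ratios, 3))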

    05031 Abstracts Collection -- Algorithms for Optimization with Incomplete Information

    Get PDF
    From 16.01.05 to 21.01.05, the Dagstuhl Seminar 05031 ``Algorithms for Optimization with Incomplete Information'' was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.

    Bounding Optimality Gap in Stochastic Optimization via Bagging: Statistical Efficiency and Stability

    Full text link
    We study a statistical method to estimate the optimal value and the optimality gap of a given solution for stochastic optimization, as an assessment of the solution quality. Our approach is based on bootstrap aggregating, or bagging, of resampled sample average approximations (SAA). We show how this approach leads to valid statistical confidence bounds for non-smooth optimization. We also demonstrate its statistical efficiency and stability, which are especially desirable in limited-data situations, and compare these properties with those of existing methods. We present a theory that views SAA as a kernel in an infinite-order symmetric statistic, which can be approximated via bagging. We substantiate our theoretical findings with numerical results.
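    A minimal sketch of the bagged lower-bound construction (newsvendor again; the resample size, bag count, and data are illustrative, and this is a simplified variant rather than the paper's exact procedure): solve the SAA on many resampled subsets, average the optimal values, and compare against the candidate's objective estimate.

        import numpy as np

        rng = np.random.default_rng(4)
        c, p = 5.0, 9.0                               # assumed newsvendor parameters
        cost = lambda x, d: c * x - p * np.minimum(x, d)
        data = rng.exponential(100.0, 300)            # a limited data set

        def saa_value(d):
            x = np.quantile(d, (p - c) / p)           # exact newsvendor SAA solution
            return cost(x, d).mean()

        B, k = 1_000, 150                             # bags and resample size
        bagged = np.mean([saa_value(rng.choice(data, k, replace=False))
                          for _ in range(B)])         # bagged lower-bound estimate

        x_hat = np.quantile(data, (p - c) / p)        # candidate from the full data
        gap = cost(x_hat, data).mean() - bagged       # estimated optimality gap
        print(f"bagged lower bound {bagged:.3f}, estimated gap {gap:.3f}")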