458 research outputs found

    Output analysis for approximated stochastic programs

    Because of incomplete information, and also for the sake of numerical tractability, one mostly solves an approximated stochastic program instead of the underlying ''true'' decision problem. However, without additional analysis, the obtained output (the optimal value and optimal solutions of the approximated stochastic program) should not be used in place of the sought solution of the ''true'' problem. Methods of output analysis have to be tailored to the structure of the problem, and they should also reflect the source, character and precision of the input data. The scope of various approaches, based on results of asymptotic and robust statistics, the moment problem, and general results of parametric programming, is discussed from the point of view of their applicability and possible extensions.

    Assessing policy quality in multi-stage stochastic programming

    Solving a multi-stage stochastic program with a large number of scenarios and a moderate-to-large number of stages can be computationally challenging. We develop two Monte Carlo-based methods that exploit special structures to generate feasible policies. To establish the quality of a given policy, we employ a Monte Carlo-based lower bound (for minimization problems) and use it to construct a confidence interval on the policy's optimality gap. The confidence interval can be formed in a number of ways depending on how the expected solution value of the policy is estimated and combined with the lower-bound estimator. Computational results suggest that a confidence interval formed by a tree-based gap estimator may be an effective method for assessing policy quality. Variance reduction is achieved by using common random numbers in the gap estimator.
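    As a minimal sketch of the gap-estimation idea (not the paper's multi-stage procedure), consider the toy problem min over x of E[(x - xi)^2] with xi ~ N(0, 1): each replication uses one common sample to estimate both the policy value and the SAA lower bound, and the replication mean yields a confidence interval on the optimality gap. The problem, sample sizes, and candidate policy below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gap_replication(x_hat, n):
    # One gap replication with common random numbers: the same sample
    # evaluates the policy (upper bound) and solves the SAA (lower bound).
    xi = rng.standard_normal(n)
    upper = np.mean((x_hat - xi) ** 2)   # estimated policy value
    x_saa = xi.mean()                    # SAA minimizer (closed form for this toy)
    lower = np.mean((x_saa - xi) ** 2)   # SAA optimal value: lower-bound estimate
    return upper - lower                 # nonnegative by construction

x_hat = 0.3   # hypothetical candidate policy; true optimality gap is 0.3**2 = 0.09
gaps = np.array([gap_replication(x_hat, 1000) for _ in range(30)])
half_width = 1.96 * gaps.std(ddof=1) / np.sqrt(len(gaps))
print(f"gap estimate: {gaps.mean():.4f} +/- {half_width:.4f}")
```

    Because both bounds share a sample, the per-replication gap estimate is nonnegative and has much lower variance than differencing two independent estimators.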

    Near-optimal loop tiling by means of cache miss equations and genetic algorithms

    The effectiveness of the memory hierarchy is critical for the performance of current processors. The performance of the memory hierarchy can be improved by means of program transformations such as loop tiling, which is a code transformation targeted to reduce capacity misses. This paper presents a novel systematic approach to perform near-optimal loop tiling based on an accurate data locality analysis (cache miss equations) and a powerful technique to search the solution space that is based on a genetic algorithm. The results show that this approach can remove practically all capacity misses for all considered benchmarks. The reduction of replacement misses results in a decrease of the miss ratio that can be as significant as a factor of 7 for the matrix multiply kernel.
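    The tiling transformation itself (without the paper's cache-miss-equation analysis or the genetic search over tile sizes) can be sketched as a blocked matrix multiply; the tile size of 32 below is an arbitrary illustrative choice, not a recommendation.

```python
import numpy as np

def matmul_tiled(A, B, tile=32):
    # Blocked (tiled) matrix multiply: operands are processed in
    # tile x tile blocks so each block stays resident in cache,
    # turning capacity misses into reuse of already-loaded data.
    n, m = A.shape
    m2, p = B.shape
    assert m == m2
    C = np.zeros((n, p))
    for ii in range(0, n, tile):
        for kk in range(0, m, tile):
            for jj in range(0, p, tile):
                C[ii:ii+tile, jj:jj+tile] += (
                    A[ii:ii+tile, kk:kk+tile] @ B[kk:kk+tile, jj:jj+tile]
                )
    return C

A = np.random.default_rng(1).random((100, 80))
B = np.random.default_rng(2).random((80, 60))
C_tiled = matmul_tiled(A, B, tile=32)
```

    Choosing the tile size so that three tiles fit in cache simultaneously is the hard part; that search is what the paper automates.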

    Mitigating Uncertainty via Compromise Decisions in Two-stage Stochastic Linear Programming

    Stochastic Programming (SP) has long been considered a well-justified yet computationally challenging paradigm for practical applications. Computational studies in the literature often involve approximating a large number of scenarios by using a small number of scenarios to be processed via deterministic solvers, or running Sample Average Approximation on some genre of high performance machines so that statistically acceptable bounds can be obtained. In this paper we show that for a class of stochastic linear programming problems, an alternative approach known as Stochastic Decomposition (SD) can provide solutions of similar quality, in far less computational time, using ordinary desktop or laptop machines of today. In addition to these compelling computational results, we also provide a stronger convergence result for SD, and introduce a new solution concept which we refer to as the compromise decision. This new concept is attractive for algorithms which call for multiple replications in sampling-based convex optimization algorithms. For such replicated optimization, we show that the difference between an average solution and a compromise decision provides a natural stopping rule. Finally, our computational results cover a variety of instances from the literature, including a detailed study of SSN, a network planning instance which is known to be more challenging than other test instances in the literature.
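    A simplified, hypothetical illustration of the compromise idea on a newsvendor problem (not the paper's SD algorithm): each replication solves its own SAA, the average solution averages the replicate minimizers, and the compromise decision minimizes the average of the replicate SAA objectives — which, for equal sample sizes here, is simply the pooled SAA. A small distance between the two supports stopping.

```python
import numpy as np

rng = np.random.default_rng(7)
c, p = 1.0, 3.0            # illustrative unit cost / price
q = 1.0 - c / p            # the newsvendor optimum is the q-quantile of demand

def saa_solve(xi):
    # SAA of min_x E[c*x - p*min(x, xi)]: the minimizer is the
    # q-quantile of the demand sample.
    return np.quantile(xi, q)

samples = [rng.exponential(scale=10.0, size=500) for _ in range(20)]
replicate_sols = np.array([saa_solve(xi) for xi in samples])

x_avg = replicate_sols.mean()                # average solution across replications
x_comp = saa_solve(np.concatenate(samples))  # compromise: minimize the averaged SAA objective
print(f"average={x_avg:.3f}  compromise={x_comp:.3f}  |diff|={abs(x_avg - x_comp):.4f}")
```

    When the replications disagree, the two decisions drift apart; agreement between them is the natural stopping signal the abstract describes.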

    A Multicut Approach to Compute Upper Bounds for Risk-Averse SDDP

    Stochastic Dual Dynamic Programming (SDDP) is a widely used and fundamental algorithm for solving multistage stochastic optimization problems. Although SDDP has been frequently applied to solve risk-averse models with the Conditional Value-at-Risk (CVaR), it is known that the estimation of upper bounds is a methodological challenge, and many methods are computationally intensive. In practice, this leaves most SDDP implementations without a practical and clear stopping criterion. In this paper, we propose using the information already contained in a multicut formulation of SDDP to solve this problem with a simple and computationally efficient methodology. The multicut version of SDDP, in contrast with the typical average cut, preserves the information about which scenarios give rise to the worst costs, thus contributing to the CVaR value. We use this fact to modify the standard sampling method on the forward step so the average of multiple paths approximates the nested CVaR cost. We highlight that minimal changes are required in the SDDP algorithm and there is no additional computational burden for a fixed number of iterations. We present multiple case studies to empirically demonstrate the effectiveness of the method. First, we use a small hydrothermal dispatch test case, in which we can write the deterministic equivalent of the entire scenario tree to show that the method perfectly computes the correct objective values. Then, we present results using a standard approximation of the Brazilian operation problem and a real hydrothermal dispatch case based on data from Colombia. Our numerical experiments showed that this method consistently calculates upper bounds higher than lower bounds for those risk-averse problems, and that lower bounds are improved thanks to the better exploration of the scenario tree.
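    The per-scenario cost information that a multicut formulation preserves is exactly what an empirical CVaR estimator needs. As a minimal illustration (the scenario costs and alpha below are invented, and this is only the one-stage estimator, not the nested SDDP computation), CVaR at level alpha is the average of the worst (1 - alpha) fraction of scenario costs:

```python
import numpy as np

def empirical_cvar(costs, alpha):
    # Empirical CVaR_alpha: average of the worst (1 - alpha) fraction of
    # scenario costs. A multicut formulation keeps per-scenario costs
    # separate, so the scenarios driving the CVaR remain identifiable.
    costs = np.sort(np.asarray(costs))[::-1]          # worst first
    k = max(1, int(np.ceil((1 - alpha) * len(costs))))
    return costs[:k].mean(), k

scenario_costs = [10., 12., 11., 30., 13., 14., 50., 12., 11., 10.]
value, k = empirical_cvar(scenario_costs, alpha=0.8)
print(f"CVaR_0.8 = {value} (average of the worst {k} scenarios)")
```

    Biasing the forward-step sampling toward those identified worst scenarios is what lets the average over sampled paths approximate the nested CVaR cost.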

    Sample Average Approximations of Strongly Convex Stochastic Programs in Hilbert Spaces

    We analyze the tail behavior of solutions to sample average approximations (SAAs) of stochastic programs posed in Hilbert spaces. We require that the integrand be strongly convex with the same convexity parameter for each realization. Combined with a standard condition from the literature on stochastic programming, we establish non-asymptotic exponential tail bounds for the distance between the SAA solutions and the stochastic program's solution, without assuming compactness of the feasible set. Our assumptions are verified on a class of infinite-dimensional optimization problems governed by affine-linear partial differential equations with random inputs. We present numerical results illustrating our theoretical findings.

    Power management in a hydro-thermal system under uncertainty by Lagrangian relaxation

    We present a dynamic multistage stochastic programming model for the cost-optimal generation of electric power in a hydro-thermal system under uncertainty in load, inflow to reservoirs and prices for fuel and delivery contracts. The stochastic load process is approximated by a scenario tree obtained by adapting a SARIMA model to historical data, using empirical means and variances of simulated scenarios to construct an initial tree, and reducing it by a scenario deletion procedure based on a suitable probability distance. Our model involves many mixed-integer variables and individual power unit constraints, but relatively few coupling constraints. Hence we employ stochastic Lagrangian relaxation that assigns stochastic multipliers to the coupling constraints. Solving the Lagrangian dual by a proximal bundle method leads to successive decomposition into single thermal and hydro unit subproblems that are solved by dynamic programming and a specialized descent algorithm, respectively. The optimal stochastic multipliers are used in Lagrangian heuristics to construct approximately optimal first stage decisions. Numerical results are presented for realistic data from a German power utility, with a time horizon of one week and scenario numbers ranging from 5 to 100. The corresponding optimization problems have up to 200,000 binary and 350,000 continuous variables, and more than 500,000 constraints.

    Sampling-Based Algorithms for Two-Stage Stochastic Programs and Applications

    In this dissertation, we present novel sampling-based algorithms for solving two-stage stochastic programming problems. Sampling-based methods provide an efficient approach to solving large-scale stochastic programs where uncertainty may be defined on continuous support. When sampling-based methods are employed, the process is usually viewed in two steps: sampling and optimization. When these two steps are performed in sequence, the overall process can be computationally very expensive. In this dissertation, we utilize the framework of internal sampling, where the sampling and optimization steps are performed concurrently. The dissertation comprises two parts. In the first part, we design a new sampling technique for solving two-stage stochastic linear programs with continuous recourse. We incorporate this technique within an internal-sampling framework of stochastic decomposition. In the second part of the dissertation, we design an internal-sampling-based algorithm for solving two-stage stochastic mixed-integer programs with continuous recourse. We design a new stochastic branch-and-cut procedure for solving this class of optimization problems. Finally, we show the efficiency of this method for solving large-scale practical problems arising in logistics and finance.

    Global Changes: Facets of Robust Decisions

    The aim of this paper is to provide an overview of existing concepts of robustness and to identify promising directions for coping with uncertainty and risks of global changes. Unlike statistical robustness, general decision problems may have rather different facets of robustness. In particular, a key issue is the sensitivity with respect to low-probability catastrophic events. That is, robust decisions in the presence of catastrophic events are fundamentally different from decisions ignoring them. Specifically, proper treatment of extreme catastrophic events requires new sets of feasible decisions, adjusted to risk performance indicators, and new spatial, social and temporal dimensions. The discussion is deliberately kept at a level comprehensible to a broad audience through the use of simple examples that can be extended to rather general models. In fact, these examples often illustrate fragments of models that are being developed at IIASA.