    The L-shaped method for large-scale mixed-integer waste management decision making problems

    Deciding on strategic issues undoubtedly requires anticipating and accounting for possible variations of the future. Unfortunately, when it comes to the actual modelling, the sheer size of the problems that accurately describe the uncertainty often makes them extremely hard to work with. This paper describes a way of dealing with large-scale mixed-integer models (in terms of the number of future scenarios they can handle) for the studied waste management decision making problem. The algorithm is based on the idea of decomposing the overall problem along the different scenarios and solving these smaller problems instead. Its use is demonstrated on a strategic waste management problem of choosing the optimal sites for new incineration plants while minimizing the expected cost of waste transport and processing. The uncertainty was modelled by 5,000 scenarios, and the problem was solved to high accuracy using relatively modest means (in terms of computational power and required software).
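
    The scenario decomposition can be made concrete with a minimal Python sketch of the single-cut L-shaped scheme. The toy data, the one-dimensional capacity variable, and the closed-form second stage are illustrative assumptions, not the paper's model; in particular, the first stage here is continuous rather than mixed-integer, and a real instance would solve the scenario subproblems with an LP/MIP solver.

        import numpy as np
        from scipy.optimize import linprog

        # Toy data: build capacity x at unit cost c; scenario s brings waste
        # amount d[s]; unprocessed waste costs q > c per unit downstream.
        rng = np.random.default_rng(0)
        c, q = 1.0, 4.0
        d = rng.uniform(50.0, 150.0, size=5000)   # 5,000 scenarios
        p = np.full_like(d, 1.0 / d.size)         # equal probabilities

        # Closed-form second stage: Q_s(x) = q * max(d_s - x, 0), with
        # subgradient -q where d_s > x and 0 elsewhere.
        def expected_recourse(x):
            Q = q * np.maximum(d - x, 0.0)
            g = np.where(d > x, -q, 0.0)
            return p @ Q, p @ g

        cuts = []                                 # theta >= a + b * x
        x, ub = 0.0, np.inf
        for it in range(200):
            EQ, Eg = expected_recourse(x)
            ub = min(ub, c * x + EQ)
            cuts.append((EQ - Eg * x, Eg))        # intercept a, slope b
            # Master LP over (x, theta): min c*x + theta, theta >= a + b*x
            A = [[b, -1.0] for a, b in cuts]
            rhs = [-a for a, b in cuts]
            res = linprog([c, 1.0], A_ub=A, b_ub=rhs,
                          bounds=[(0.0, 200.0), (None, None)])
            x, lb = res.x[0], res.fun
            if ub - lb <= 1e-6 * max(1.0, abs(ub)):
                break
        print(f"x = {x:.2f}, expected cost = {ub:.2f}, cuts = {it + 1}")

    The key point is that the master problem never sees the 5,000 scenarios directly: each iteration only adds one aggregated optimality cut built from the scenario-wise solutions.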

    Efficient solution selection for two-stage stochastic programs

    Sampling-based stochastic programs are extensively applied in practice; however, the resulting models tend to be computationally challenging. A reasonable number of samples must be chosen to represent the random data, and a group of approximate models can then be constructed from those samples. These approximate models produce a set of potential solutions for the original model. In this paper, we consider the problem of allocating a finite computational budget among the numerous potential solutions of a two-stage linear stochastic program, with the aim of identifying the best solution through simulation under the given budget. We propose a two-stage heuristic approach to this computational resource allocation problem. First, we utilise a Wasserstein-based screening rule to remove potentially inferior solutions from the simulation. Next, we use a ranking and selection technique to efficiently collect performance information about the remaining solutions. The performance of our approach is demonstrated on well-known benchmark problems. Results show that our method provides good trade-offs between computational effort and solution performance.
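
    The screen-then-select structure can be illustrated with a small Python sketch. The candidates, the noise model, the Wasserstein threshold, and the largest-standard-error allocation rule below are illustrative stand-ins for the paper's exact screening and ranking-and-selection procedures.

        import numpy as np
        from scipy.stats import wasserstein_distance

        rng = np.random.default_rng(1)

        # Hypothetical setup: six candidate first-stage solutions; simulating
        # candidate i draws noisy observations of its true cost (lower is
        # better).
        true_cost = np.array([10.0, 10.2, 10.5, 11.5, 13.0, 15.0])
        def simulate(i, n):
            return true_cost[i] + rng.normal(0.0, 2.0, size=n)

        budget, n0 = 2000, 30                     # total / initial samples
        samples = [simulate(i, n0) for i in range(len(true_cost))]
        spent = n0 * len(true_cost)

        # Stage 1 (screening): drop candidates that are both worse in mean
        # and far from the incumbent in 1-Wasserstein distance; the
        # threshold 1.5 is an illustrative tuning parameter.
        best = min(range(len(samples)), key=lambda i: samples[i].mean())
        keep = [i for i in range(len(samples))
                if samples[i].mean() <= samples[best].mean()
                or wasserstein_distance(samples[i], samples[best]) < 1.5]

        # Stage 2 (ranking and selection): spend the remaining budget on
        # the survivor whose mean estimate is currently the least certain.
        while spent < budget:
            i = max(keep, key=lambda j: samples[j].std(ddof=1)
                    / np.sqrt(len(samples[j])))
            samples[i] = np.concatenate([samples[i], simulate(i, 10)])
            spent += 10

        winner = min(keep, key=lambda i: samples[i].mean())
        print("selected", winner, "estimated cost", samples[winner].mean())

    The screening stage is what makes the budget go further: clearly inferior candidates are discarded cheaply, so the simulation effort concentrates on the few solutions that are genuinely hard to distinguish.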

    Incremental bundle methods using upper models

    We propose a family of proximal bundle methods for minimizing sum-structured convex nondifferentiable functions. The methods require two slightly uncommon assumptions that are satisfied in many relevant applications: Lipschitz continuity of the functions, and oracles that also produce upper estimates on the function values. In exchange, the methods: i) use upper models of the functions that allow function values to be estimated at points where the oracle has not been called; ii) give the oracles more information about when the function computation can be interrupted, possibly diminishing its cost; iii) allow oracle calls to be skipped entirely for some of the component functions, not only at "null steps" but also at "serious steps"; iv) provide explicit and reliable a-posteriori estimates of the quality of the obtained solutions; v) work with all possible combinations of different assumptions on the oracles. We also discuss the introduction of constraints (or, more generally, of easy components) and the use of (partly) aggregated models.
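
    The upper-model bookkeeping that enables skipped oracle calls can be illustrated in isolation. The Python sketch below assumes known component Lipschitz constants, an exact oracle, and a hypothetical gap test; the surrounding proximal bundle machinery (master problem, serious/null steps) is omitted.

        import numpy as np

        # Two components of f = f1 + f2 with known Lipschitz constants; the
        # oracle returns a value, a subgradient, and (as the paper assumes)
        # an upper estimate on the value.
        L = np.array([1.0, 3.0])
        def oracle(i, x):
            f = [abs(x - 1.0), 3.0 * abs(x + 2.0)][i]
            g = [np.sign(x - 1.0), 3.0 * np.sign(x + 2.0)][i]
            return f, g, f      # exact oracle: upper estimate equals value

        # Bundle information per component: last point, value, subgradient.
        info = {i: (0.0,) + oracle(i, 0.0)[:2] for i in range(2)}

        def lower_model(i, x):
            # cutting-plane (lower) model: f_i(x) >= f_i(y) + g * (x - y)
            y, f, g = info[i]
            return f + g * (x - y)

        def upper_model(i, x):
            # Lipschitz upper model: f_i(x) <= f_i(y) + L_i * |x - y|
            y, f, g = info[i]
            return f + L[i] * abs(x - y)

        # At a trial point, call the oracle only for components whose model
        # gap is too wide; tight components are read off the upper model.
        x_trial, tol = 0.5, 0.75
        f_est = 0.0
        for i in range(2):
            gap = upper_model(i, x_trial) - lower_model(i, x_trial)
            if gap <= tol:
                f_est += upper_model(i, x_trial)   # safe: skip the oracle
                print(f"component {i}: skipped (gap {gap:.2f})")
            else:
                f, g, ub = oracle(i, x_trial)
                info[i] = (x_trial, f, g)
                f_est += f
                print(f"component {i}: evaluated (gap {gap:.2f})")
        print("upper estimate of f(x_trial):", f_est)

    Because the upper model is a valid overestimate, substituting it for a skipped component never compromises the validity of the resulting bound on the function value.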

    Standard Bundle Methods: Untrusted Models and Duality

    We review the basic ideas underlying the vast family of algorithms for nonsmooth convex optimization known as "bundle methods". In a nutshell, these approaches construct models of the function, but the lack of continuity of first-order information means that these models cannot be trusted, not even close to an optimum. Therefore, many different forms of stabilization have been proposed to avoid being led into areas where the model is so inaccurate as to yield almost useless steps. In the development of these methods, duality arguments are useful, if not outright necessary, for analyzing the behaviour of the algorithms. Moreover, in many relevant applications the function at hand is itself a dual one, so that duality allows algorithmic concepts and results to be mapped back into a "primal space" where they can be exploited; in turn, structure in that space can be exploited to improve the algorithms' behaviour, e.g. by developing better models. We present an updated picture of the many developments around this basic idea along at least three axes: the form of the stabilization, the form of the model, and approximate evaluation of the function.
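
    The interplay of model, stabilization, and the serious/null-step test can be sketched in a few lines. The Python fragment below runs a textbook proximal bundle loop on a toy one-dimensional nonsmooth function; the objective, the parameters, and the use of a generic bounded scalar minimizer for the master problem are illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize_scalar

        # Nonsmooth convex objective and a subgradient oracle (illustrative).
        f  = lambda x: abs(x - 1.0) + 0.5 * abs(x + 2.0)
        df = lambda x: np.sign(x - 1.0) + 0.5 * np.sign(x + 2.0)

        cuts, xc, t, m_par = [], 5.0, 1.0, 0.5  # stability centre xc, step t
        cuts.append((f(xc) - df(xc) * xc, df(xc)))  # cut: f(x) >= a + b*x

        for it in range(30):
            model = lambda x: max(a + b * x for a, b in cuts)
            # Stabilized master: the proximal term keeps the trial point
            # near xc, where the cutting-plane model can still be trusted.
            master = lambda x: model(x) + (x - xc) ** 2 / (2.0 * t)
            x_new = minimize_scalar(master, bounds=(-10, 10),
                                    method='bounded').x
            pred = f(xc) - master(x_new)        # predicted decrease
            if pred < 1e-8:
                break                           # model certifies near-optimality
            if f(x_new) <= f(xc) - m_par * pred:
                xc = x_new                      # serious step: move the centre
            # Null step otherwise: keep xc but enrich the model; a new cut
            # is added in either case.
            cuts.append((f(x_new) - df(x_new) * x_new, df(x_new)))
        print(f"xc = {xc:.4f}, f(xc) = {f(xc):.4f} after {it + 1} iterations")

    Without the proximal term, this reduces to the unstabilized cutting-plane method, whose trial points can wander far from the region where the model is accurate; the serious/null-step test is exactly the mechanism that decides whether the model earned enough trust to move the centre.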