
    Modelling and solution methods for stochastic optimisation

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. In this thesis we consider two research problems, namely, (i) language constructs for modelling stochastic programming (SP) problems and (ii) solution methods for processing instances of different classes of SP problems. We first describe a new design of an SP modelling system which provides greater extensibility and reuse. We implement this enhanced system and develop solver connections. We also investigate in detail the following important classes of SP problems: single-stage SP with risk constraints, and two-stage linear and stochastic integer programming problems. We report improvements to solution methods for single-stage problems with second-order stochastic dominance constraints and for two-stage SP problems; in both cases we use the level method as a regularisation mechanism. We also develop novel heuristic methods for stochastic integer programming based on variable neighbourhood search. We describe an algorithmic framework for implementing decomposition methods, such as the L-shaped method, within our SP solver system. Based on this framework we implement a number of established solution algorithms as well as a new regularisation method for stochastic linear programming. We compare the performance of these methods and their scale-up properties on an extensive set of benchmark problems. We also implement several solution methods for stochastic integer programming and report a computational study comparing their performance. The three solution methods, (a) processing of single-stage problems with second-order stochastic dominance constraints, (b) regularisation by the level method for two-stage SP, and (c) a method for solving integer SP problems, are novel approaches, and each makes a contribution to knowledge. Financial support was obtained from OptiRisk Systems.
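The two-stage structure at the heart of this work can be sketched on a toy instance: a first-stage decision is taken before uncertainty is revealed, and the scenario-weighted recourse cost is added afterwards. The newsvendor-style data below is a hypothetical illustration, not an instance from the thesis, and the brute-force search merely stands in for the decomposition methods discussed.

```python
# Hypothetical newsvendor-style instance: order x now, observe demand,
# then pay a shortage penalty or a holding cost (the recourse).
scenarios = [(30, 0.3), (50, 0.5), (70, 0.2)]   # (demand, probability)
c, b, h = 1.0, 4.0, 0.5   # order cost, shortage penalty, holding cost

def expected_cost(x):
    """First-stage cost plus probability-weighted recourse cost."""
    recourse = sum(p * (b * max(d - x, 0) + h * max(x - d, 0))
                   for d, p in scenarios)
    return c * x + recourse

# Enumerate first-stage decisions (decomposition methods such as the
# L-shaped method avoid this enumeration on realistic instances).
best_x = min(range(0, 101), key=expected_cost)
print(best_x, expected_cost(best_x))  # → 50 69.0
```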

    Single-Source Multi-Period Problem Model with Active Constraints-Based Approach Algorithm

    In this paper, we introduce the multi-period single-sourcing problem (MPSSP) as an assignment problem: the problem of finding, from period to period, the assignments that minimise the total transportation and inventory costs of distributing goods to customers. The case considered here, in which inventory items are distributed to customers online, is NP-hard and therefore requires a solution algorithm; the algorithm we offer is a direct search algorithm for the multi-period single-sourcing problem. The direct search algorithm offered is the branch-and-price algorithm, which we extend from the Generalized Assignment Problem (GAP) to a much broader class of problems called Convex Assignment Problems (CAP). We offer this algorithm because it obtains better solutions, its computing time is superior, and it shows greater stability, that is, fewer outliers are observed. Specifically, we generalize the strategy of separating nonbasic variables from their constraints, combined with active constraint methods, from the Generalized Assignment Problem (GAP) to the Convex Assignment Problem. We then identify important subclasses of the problem, which contain many variants of the multi-period single-sourcing problem as well as variants of the GAP. The final result is an active-constraint-based single-source multi-period model that minimises the loss of optimality in the integer solution when solving the convex MPSSP.
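For illustration, the GAP that branch-and-price generalises from can be stated on a tiny instance and solved by exhaustive enumeration; the data is hypothetical, and the brute-force search is only a stand-in for the branch-and-price algorithm developed in the paper.

```python
from itertools import product

# Tiny generalized assignment problem (GAP), illustrative data only:
# assign each job to exactly one agent, respecting agent capacities,
# minimising total assignment cost.
cost = [[4, 1, 3],   # cost[agent][job]
        [2, 3, 2]]
use  = [[3, 1, 2],   # resource use[agent][job]
        [2, 2, 3]]
cap  = [4, 5]        # agent capacities

best = None
for assign in product(range(2), repeat=3):   # one agent chosen per job
    load = [0, 0]
    for j, a in enumerate(assign):
        load[a] += use[a][j]
    if all(load[a] <= cap[a] for a in range(2)):   # capacity check
        total = sum(cost[a][j] for j, a in enumerate(assign))
        if best is None or total < best[0]:
            best = (total, assign)
print(best)  # → (5, (1, 0, 1))
```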

    On modelling planning under uncertainty in manufacturing

    We present a modelling framework for two-stage and multi-stage mixed 0-1 problems under uncertainty for strategic Supply Chain Management, tactical production planning, and operations assignment and scheduling. A scenario-tree-based scheme is used to represent the uncertainty. We present the Deterministic Equivalent Model of the stochastic mixed 0-1 programs with complete recourse that we study. The constraints are modelled by compact and splitting-variable representations via scenarios.
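The splitting-variable representation mentioned above can be sketched as follows: every scenario carries its own copy of each decision variable, and nonanticipativity constraints equate copies across scenarios that share the same observed history up to a stage. The two-stage binary demand tree below is a hypothetical example, not taken from the paper.

```python
# Illustrative scenario set: demand is Low or High at each of two stages.
scenarios = ["LL", "LH", "HL", "HH"]

def nonanticipativity_groups(scenarios, stage):
    """Group scenarios that share the same observed history up to
    `stage`; variable copies within a group must be forced equal."""
    groups = {}
    for s in scenarios:
        groups.setdefault(s[:stage], []).append(s)
    return list(groups.values())

# Stage 0: nothing observed yet, so all copies of the first-stage
# decision must coincide (a single group).
print(nonanticipativity_groups(scenarios, 0))  # → [['LL', 'LH', 'HL', 'HH']]
# Stage 1: scenarios split according to the first observation.
print(nonanticipativity_groups(scenarios, 1))  # → [['LL', 'LH'], ['HL', 'HH']]
```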

    Charging NOx Emitters for Health Damages: An Exploratory Analysis

    We present a proof-of-concept analysis of the measurement of the health damage of ozone (O3) produced from nitrogen oxides (NOx = NO + NO2) emitted by individual large point sources in the eastern United States. We use a regional atmospheric model of the eastern United States, the Comprehensive Air Quality Model with Extensions (CAMx), to quantify the variable impact that a fixed quantity of NOx emitted from individual sources can have on the downwind concentration of surface O3, depending on temperature and local biogenic hydrocarbon emissions. We also examine the dependence of resulting ozone-related health damages on the size of the exposed population. The investigation is relevant to the increasingly widely used “cap and trade” approach to NOx regulation, which presumes that shifts of emissions over time and space, holding the total fixed over the course of the summer O3 season, will have minimal effect on the environmental outcome. By contrast, we show that a shift of a unit of NOx emissions from one place or time to another could result in large changes in the health effects due to ozone formation and exposure. We indicate how the type of modeling carried out here might be used to attach externality-correcting prices to emissions. Charging emitters fees that are commensurate with the damage caused by their NOx emissions would create an incentive for emitters to reduce emissions at times and in locations where they cause the largest damage.

    Keywords: surface ozone, NOx emissions, point sources, health impacts, mortality, morbidity, cap-and-trade
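The pricing idea can be illustrated with a toy computation: a damage-based fee per ton of NOx scales with the modelled ozone yield, the exposed population, and a dose-response factor, so identical emissions can attract very different charges. All numbers, units, and source names below are hypothetical illustrations, not CAMx outputs.

```python
# All numbers below are hypothetical illustrations, not CAMx results.
VALUE_PER_CASE = 50.0   # assumed monetary value per ozone-related health case

sources = {
    # name: (ozone formed per ton NOx, exposed population, cases per person per unit ozone)
    "rural_plant": (0.8, 1e5, 1e-6),
    "urban_plant": (1.5, 3e6, 1e-6),
}

def fee_per_ton(ozone_yield, population, risk):
    """Damage-based fee: modelled health damage caused by one extra ton of NOx."""
    return ozone_yield * population * risk * VALUE_PER_CASE

fees = {name: fee_per_ton(*args) for name, args in sources.items()}
print(fees)  # the urban source pays far more per ton than the rural one
```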

    Decomposition Algorithms in Stochastic Integer Programming: Applications and Computations.

    In this dissertation we focus on two main topics. Under the first topic, we develop a new framework for the stochastic network interdiction problem to address ambiguity in the defender's risk preferences. The second topic is dedicated to computational studies of two-stage stochastic integer programs. More specifically, we consider two cases: first, we develop solution methods for two-stage stochastic integer programs with continuous recourse; second, we study computational strategies for two-stage stochastic integer programs with integer recourse. We study a class of stochastic network interdiction problems where the defender has incomplete (ambiguous) preferences. Specifically, we focus on shortest path network interdiction modeled as a Stackelberg game, where the defender (leader) makes an interdiction decision first, then the attacker (follower) selects a shortest path after observing the random arc costs and interdiction effects in the network. We take a decision-analytic perspective in addressing probabilistic risk over network parameters, assuming that the defender's risk preferences over exogenously given probabilities can be summarized by expected utility theory. Although the exact form of the utility function is ambiguous to the defender, we assume that a set of historical data on some pairwise comparisons made by the defender is available, which can be used to restrict the shape of the utility function. We use two different approaches to tackle this problem. The first approach conducts utility estimation and optimization separately, by first finding the best fit for a piecewise linear concave utility function according to the available data, and then optimizing the expected utility. The second approach integrates utility estimation and optimization, by modeling the utility ambiguity under a robust optimization framework following Armbruster and Delage (2015) and Hu.
We conduct extensive computational experiments to evaluate the performance of these approaches on the stochastic shortest path network interdiction problem. In the third chapter, we propose partition-based decomposition algorithms for solving two-stage stochastic integer programs with continuous recourse. The partition-based decomposition method enhances classical decomposition methods (such as Benders decomposition) by utilizing inexact cuts (coarse cuts) induced by a scenario partition. Coarse cut generation can be much less expensive than standard Benders cut generation when the partition size is small relative to the total number of scenarios. We conduct an extensive computational study to illustrate the advantage of the proposed partition-based decomposition algorithms over state-of-the-art approaches. In the fourth chapter, we concentrate on computational methods for two-stage stochastic integer programs with integer recourse. We consider the partition-based relaxation framework integrated with a scenario decomposition algorithm in order to develop strategies which provide a better lower bound on the optimal objective value within a tight time limit.
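The coarse-cut idea can be sketched numerically: instead of adding one optimality cut per scenario, the scenario cuts within each partition cell are aggregated, by probability-weighted averaging of their intercepts and subgradients, into a single cut. The cut data below is randomly generated for illustration and does not come from the dissertation's test instances.

```python
import numpy as np

# Randomly generated per-scenario cut data, purely illustrative.
rng = np.random.default_rng(0)
n_scen = 8
prob = np.full(n_scen, 1.0 / n_scen)      # scenario probabilities
alpha = rng.normal(size=n_scen)           # per-scenario cut intercepts
beta = rng.normal(size=(n_scen, 3))       # per-scenario cut gradients

# A partition of the 8 scenarios into two cells.
partition = [np.array([0, 1, 2, 3]), np.array([4, 5, 6, 7])]

def coarse_cut(cell):
    """Aggregate the scenario cuts of one partition cell into a single
    cut on the cell's conditional expected recourse value."""
    w = prob[cell] / prob[cell].sum()     # conditional probabilities
    return w @ alpha[cell], w @ beta[cell]

cuts = [coarse_cut(cell) for cell in partition]
print(len(cuts))  # two coarse cuts instead of eight scenario cuts
```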

    Integrated Simulation and Optimization for Decision-Making under Uncertainty with Application to Healthcare

    Many real applications require decision-making under uncertainty. These decisions occur at discrete points in time, influence future decisions, and have uncertainties that evolve over time. Mean-risk stochastic integer programming (SIP) is one optimization tool for decision problems involving uncertainty. However, it may be challenging to develop a closed-form objective for some problems. Consequently, simulation of the system performance under a combination of conditions becomes necessary. Discrete event system specification (DEVS) is a useful tool for simulation and evaluation, but simulation models do not naturally include a decision-making component. This dissertation develops a novel approach whereby simulation and optimization models interact and exchange information, leading to solutions that adapt to changes in system data. The integrated simulation and optimization approach was applied to the scheduling of chemotherapy appointments in an outpatient oncology clinic. First, a simulation of oncology clinic operations, DEVS-CHEMO, was developed to evaluate system performance from the patients' and management's perspectives. Four scheduling algorithms were developed for DEVS-CHEMO. Computational results showed that assigning patients to both chairs and nurses improved system performance by reducing appointment duration by 3%, reducing waiting time by 34%, and reducing nurse overtime by 4%. Second, a set of mean-risk SIP models, SIP-CHEMO, was developed to determine the start date and resource assignments for each new patient's appointment schedule. SIP-CHEMO considers uncertainty in appointment duration, acuity levels, and resource availability. The SIP-CHEMO models utilize the expected excess and absolute semideviation mean-risk measures. The SIP-CHEMO models increased throughput by 1%, decreased waiting time by 41%, and decreased nurse overtime by 25% when compared to DEVS-CHEMO's scheduling algorithms.
Finally, a new framework integrating DEVS and SIP, DEVS-SIP, was developed. The DEVS-CHEMO and SIP-CHEMO models were combined using the DEVS-SIP framework to create DEVS-SIP-CHEMO. Appointment schedules were determined using SIP-CHEMO and implemented in DEVS-CHEMO. If the system performance failed to meet predetermined stopping criteria, DEVS-CHEMO revised SIP-CHEMO and determined a new appointment schedule. Computational results showed that DEVS-SIP-CHEMO is preferred to using simulation or optimization alone. DEVS-SIP-CHEMO held throughput within 1% and improved nurse overtime by 90% and waiting time by 36% when compared to SIP-CHEMO alone.
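The DEVS-SIP interaction can be caricatured as a simulate-optimise loop: an optimisation step proposes a schedule, a simulation step evaluates it under random service durations, and the result feeds back until a stopping criterion holds. The toy models below are hypothetical stand-ins for SIP-CHEMO and DEVS-CHEMO, not the dissertation's implementations.

```python
import random

random.seed(1)
TRUE_MEAN = 60.0   # true mean appointment duration (minutes), assumed

def optimise(slot_width):
    """Toy 'SIP' stand-in: book five appointments back-to-back
    using the current slot-width estimate."""
    return [i * slot_width for i in range(5)]

def simulate(schedule):
    """Toy 'DEVS' stand-in: realised durations overrun randomly;
    return the mean patient waiting time."""
    waiting, finish = 0.0, 0.0
    for start in schedule:
        begin = max(start, finish)       # wait if the chair is still busy
        waiting += begin - start
        finish = begin + TRUE_MEAN * random.uniform(0.9, 1.3)
    return waiting / len(schedule)

estimate, max_wait = 60.0, 5.0
for _ in range(10):
    schedule = optimise(estimate)
    avg_wait = simulate(schedule)
    if avg_wait <= max_wait:             # stopping criterion met
        break
    estimate *= 1.05                     # feed back: widen the slots
print(round(avg_wait, 2), round(estimate, 2))
```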

    Stochastic Optimization Models for Perishable Products

    For many years, researchers have focused on developing optimization models to design and manage supply chains. These models have helped companies in different industries to minimize costs and maximize performance while balancing their social and environmental impacts. There is an increasing interest in developing models which optimize the supply chain decisions of perishable products. This is mainly because many of the products we use today are perishable, managing their inventory is challenging due to their short shelf life, and outdated products become waste. Therefore, these supply chain decisions impact the profitability and sustainability of companies and the quality of the environment. Wastage of perishable products is inevitable when demand is not known beforehand. A number of models in the literature use simulation and probabilistic models to capture supply chain uncertainties. However, when the demand distribution cannot be described using standard distributions, probabilistic models are not effective. In this case, using stochastic optimization methods is preferred over obtaining approximate inventory management policies through simulation. This dissertation proposes models to help businesses and non-profit organizations make inventory replenishment, pricing, and transportation decisions that improve the performance of their systems. These models focus on perishable products which either deteriorate over time or have a fixed shelf life. The demand and/or supply for these products, and/or the remaining shelf life, are stochastic. Stochastic optimization models, including a two-stage stochastic mixed integer linear program, a two-stage stochastic mixed integer nonlinear program, and a chance-constrained program, are proposed to capture uncertainties. The objective is to minimize the total replenishment costs, which impact profits and service rates.
These models are motivated by applications in the vaccine distribution supply chain and other supply chains used to distribute perishable products. This dissertation also focuses on developing solution algorithms for the proposed optimization models. The computational complexity of these models motivated the development of extensions to standard methods for solving stochastic optimization problems. These algorithms use sample average approximation (SAA) to represent uncertainty. The algorithms proposed are extensions of the stochastic Benders decomposition algorithm, the L-shaped method (LS). These extensions use Gomory mixed integer cuts, mixed-integer rounding cuts, and piecewise linear relaxations of bilinear terms, and they yield linear approximations of the proposed models. Computational results reveal that the solution approach presented here outperforms the standard LS method. Finally, this dissertation develops case studies using real-life data from the Demographic Health Surveys in Niger and Bangladesh to build predictive models of the requirements for various childhood immunization vaccines. The results of this study provide support tools for policymakers to design vaccine distribution networks.
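Sample average approximation, the backbone of the algorithms above, can be sketched on a toy replenishment problem: the true demand distribution is replaced by N sampled scenarios, the sampled problem is solved, and replications indicate how stable the solution is. The costs and demand parameters below are hypothetical, not the dissertation's vaccine model.

```python
import random

random.seed(0)
c, b = 1.0, 3.0   # unit order cost, unit shortage penalty (illustrative)

def saa_solution(n_samples):
    """Solve one sampled (SAA) instance of a shortage-penalty
    replenishment problem by enumerating integer order levels."""
    demands = [random.gauss(100, 20) for _ in range(n_samples)]
    def cost(x):
        return c * x + b * sum(max(d - x, 0.0) for d in demands) / n_samples
    return min(range(50, 201), key=cost)

# Several replications indicate the stability of the SAA solution;
# the true optimum here is near the 2/3 quantile of N(100, 20).
solutions = [saa_solution(500) for _ in range(5)]
print(solutions)
```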

    Comparing Hydrological Postprocessors Including Ensemble Predictions Into Full Predictive Probability Distribution of Streamflow

    Although not matching the formal definition of the predictive probability distribution, meteorological and hydrological ensembles have frequently been interpreted and directly used to assess flood‐forecasting predictive uncertainty. With the objective of correctly assessing the predictive probability of floods, this paper introduces ways of taking into account the measures of uncertainty provided in the form of ensemble forecasts, by modifying a number of well‐established uncertainty postprocessors such as Bayesian Model Averaging and the Model Conditional Processor. These postprocessors were developed on the assumption that the future unknown quantity (the predictand) is uncertain while the model forecasts (the predictors) are given, which implies that they are perfectly known. With this in mind, we propose to relax this assumption by treating ensemble predictions, in analogy to measurement errors, as expressions of error in the model predictions, to be integrated into the estimation of the postprocessor coefficients. The methodologies proposed in this work are analysed on a real case study based on meteorological ensemble predictions for the Po River at Pontelagoscuro in Italy. After showing how inappropriate the direct use of ensemble predictions as a description of the predictive probability distribution can be, results from the modified postprocessors are compared and discussed.
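A minimal form of one of the postprocessors mentioned, Bayesian Model Averaging, treats each ensemble member's forecast as the centre of a Gaussian kernel and takes the predictive distribution to be their weighted mixture. In practice the weights and spread are fitted to past forecast-observation pairs (typically by EM); the fixed values below are purely illustrative.

```python
import math

def bma_pdf(y, forecasts, weights, sigma):
    """BMA predictive density of y: weighted mixture of Gaussian
    kernels centred on the ensemble members' forecasts."""
    return sum(w * math.exp(-(y - f) ** 2 / (2.0 * sigma ** 2))
               / (sigma * math.sqrt(2.0 * math.pi))
               for f, w in zip(forecasts, weights))

ensemble = [950.0, 1010.0, 1100.0]   # member streamflow forecasts (m3/s)
weights = [0.2, 0.5, 0.3]            # member weights (would be fitted, e.g. by EM)
sigma = 80.0                         # kernel spread (would be fitted)

# The predictive mean of the mixture is the weighted ensemble mean.
mean = sum(w * f for f, w in zip(ensemble, weights))
print(mean)  # → 1025.0
```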

    Contributions to Approximate Bayesian Inference for Machine Learning

    Unpublished thesis, Universidad Complutense de Madrid, Facultad de Ciencias Matemáticas, defended 18-01-2022. Machine learning (ML) methods can learn from data and then be used for making predictions on new data instances. However, some of the most popular ML methods cannot provide information about the uncertainty of their predictions, which may be crucial in many applications. The Bayesian framework for ML introduces a natural approach to formulating many ML methods, and it also has the advantage of easily incorporating and reflecting different sources of uncertainty in the final predictive distribution. These sources include uncertainty related to, for example, the data, the model chosen, and its parameters. Moreover, they can be automatically balanced and aggregated using information from the observed data. Nevertheless, in spite of this advantage, exact Bayesian inference is intractable in most ML methods, and approximate inference techniques have to be used in practice. In this thesis we propose a collection of methods for approximate inference, with specific applications to some popular approaches in supervised ML. First, we introduce neural networks (NNs), from their most basic concepts to some of their most popular architectures. Gaussian processes (GPs), a simple but important tool in Bayesian regression, are also reviewed. Sparse GPs are presented as a clever solution to improve GPs' scalability by introducing new parameters: the inducing points. In the second half of the introductory part we also describe Bayesian inference and extend the NN formulation using a Bayesian approach, which results in a NN model capable of outputting a predictive distribution. We will see why Bayesian inference is intractable in most ML approaches, and also describe sampling-based and optimization-based methods for approximate inference. The use of α-divergences is introduced next, leading to a generalization of certain methods for approximate inference.
Finally, we extend GPs to implicit processes (IPs), a more general class of stochastic processes which provide a flexible framework from which numerous models can be defined. Although promising, current IP-based ML methods fail to exploit all of their potential due to the limitations of the approximations required in their formulation. In the second part of the thesis we present our contributions to the field of approximate inference, with particular focus on Bayesian NNs and IPs...
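As a reference point for what sparse GPs and IPs approximate, the exact GP posterior predictive at a test input can be written in a few lines of linear algebra; the kernel, its hyperparameters, and the data below are illustrative, not from the thesis.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel between two 1-D input arrays."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

x = np.array([0.0, 1.0, 2.0])      # training inputs (illustrative)
y = np.sin(x)                      # training targets
noise = 1e-4                       # jitter / observation noise

K = rbf(x, x) + noise * np.eye(len(x))
x_star = np.array([1.5])           # test input
k_star = rbf(x_star, x)

mean = k_star @ np.linalg.solve(K, y)                            # posterior mean
var = rbf(x_star, x_star) - k_star @ np.linalg.solve(K, k_star.T)  # posterior variance
print(round(mean.item(), 3), round(var.item(), 4))
```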