7 research outputs found

    MODELOS Y MÉTODOS DE OPTIMIZACIÓN LINEAL CON INCERTIDUMBRE: UNA BREVE REVISIÓN DEL ESTADO DEL ARTE

    In the modeling of many linear optimization problems it is not possible to use the classical deterministic model, because the set of parameters is not fully known: the data vary significantly over time, or the values are not homogeneous. Such problems are known as problems with uncertainty, and there are different approaches to modeling them and different solution methods for resolving them. In this paper we review these approaches, focusing mainly on stochastic optimization, fuzzy optimization, interval optimization, and hybrid optimization. The approaches differ in the nature of the data, the notions of feasibility and optimality, and their computational requirements, among other aspects.
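    The contrast between the deterministic model and a scenario-based stochastic one can be sketched on a toy newsvendor-style instance (all numbers invented, not drawn from the paper): planning for the average demand is generally not the same as optimizing the average outcome.

```python
# Toy illustration (made-up prices, costs, and demand scenarios):
# choose an order quantity under demand uncertainty, by enumeration.
def expected_profit(order, scenarios, price=5.0, cost=3.0):
    """Average profit of an order quantity over equally likely demand scenarios."""
    return sum(price * min(order, d) - cost * order for d in scenarios) / len(scenarios)

scenarios = [20, 60, 160]                    # equally likely demand scenarios
det_order = sum(scenarios) / len(scenarios)  # deterministic model: plan for the mean (80)
best = max(range(0, 201), key=lambda x: expected_profit(x, scenarios))
print(det_order, best)  # the stochastic optimum (60) differs from the mean plan
```

    The stochastic optimum hedges against the low-demand scenario and achieves a higher expected profit than the solution of the deterministic mean-value model.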

    Codifferentials and Quasidifferentials of the Expectation of Nonsmooth Random Integrands and Two-Stage Stochastic Programming

    This work is devoted to an analysis of exact penalty functions and optimality conditions for nonsmooth two-stage stochastic programming problems. To this end, we first study the co-/quasi-differentiability of the expectation of nonsmooth random integrands and obtain explicit formulae for its co- and quasidifferential under some natural assumptions on the integrand. We then analyse exact penalty functions for a variational reformulation of two-stage stochastic programming problems and obtain sufficient conditions for the global exactness of these functions with two different penalty terms. At the end of the paper, we combine our results on the co-/quasi-differentiability of the expectation of nonsmooth random integrands with those on exact penalty functions to derive optimality conditions for nonsmooth two-stage stochastic programming problems in terms of codifferentials.

    Efficient solution selection for two-stage stochastic programs

    Sampling-based stochastic programs are extensively applied in practice. However, the resulting models tend to be computationally challenging. A reasonable number of samples must be chosen to represent the random data, and a group of approximate models can then be constructed from those samples. These approximate models produce a set of potential solutions for the original model. In this paper, we consider the problem of allocating a finite computational budget among numerous potential solutions of a two-stage linear stochastic program, with the aim of identifying the best solution among them by simulation under a given computational budget. We propose a two-stage heuristic approach to solve this computational resource allocation problem. First, we use a Wasserstein-based screening rule to remove potentially inferior solutions from the simulation. Next, we use a ranking and selection technique to efficiently collect performance information on the remaining solutions. The performance of our approach is demonstrated on well-known benchmark problems. Results show that our method provides good trade-offs between computational effort and solution performance.
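    The screening idea can be sketched roughly as follows (the performance samples and the distance threshold are invented, and the rule in the paper is more involved): a one-dimensional empirical Wasserstein distance discards candidates whose cost distribution lies far from the incumbent's.

```python
def wasserstein_1d(xs, ys):
    """Empirical 1-Wasserstein distance between two equal-size 1-D samples:
    the mean absolute difference of the sorted samples."""
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

# Toy performance (cost) samples for three candidate solutions.
samples = {
    "A": [9.0, 10.0, 11.0],
    "B": [9.5, 10.5, 11.5],
    "C": [13.0, 14.0, 15.0],
}
best = min(samples, key=lambda k: sum(samples[k]) / len(samples[k]))
# Keep only candidates whose cost distribution is close to the incumbent's;
# the rest are screened out before any further simulation effort is spent.
survivors = [k for k in samples if wasserstein_1d(samples[k], samples[best]) <= 1.0]
print(best, survivors)
```

    Here "C" is screened out immediately, so the remaining simulation budget is concentrated on distinguishing "A" from "B".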

    Quasi-Monte Carlo methods for linear two-stage stochastic programming problems

    Quasi-Monte Carlo algorithms are studied for generating scenarios to solve two-stage linear stochastic programming problems. Their integrands are piecewise linear-quadratic, but do not belong to the function spaces considered for QMC error analysis. We show that under some weak geometric condition on the two-stage model, all terms of their ANOVA decomposition, except the one of highest order, are continuously differentiable, and second-order mixed derivatives exist almost everywhere and belong to L_2. This implies that randomly shifted lattice rules may achieve the optimal rate of convergence O(n^{-1+δ}) with δ ∈ (0, 1/2] and a constant not depending on the dimension if the effective superposition dimension is less than or equal to two. The geometric condition is shown to be satisfied for almost all covariance matrices if the underlying probability distribution is normal. We discuss effective dimensions and techniques for dimension reduction. Numerical experiments for a production planning model with normal inputs show that convergence rates close to the optimal rate are indeed achieved when using randomly shifted lattice rules or scrambled Sobol' point sets accompanied by principal component analysis for dimension reduction.
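    A minimal sketch of a randomly shifted rank-1 lattice rule, using the classical two-dimensional Fibonacci lattice and a toy smooth integrand rather than the production planning model from the paper:

```python
import math
import random

def shifted_lattice_estimate(f, n, z, shift):
    """QMC estimate of the integral of f over [0,1]^d with a rank-1 lattice
    rule: points frac(i*z/n + shift) for i = 0..n-1."""
    d = len(z)
    total = 0.0
    for i in range(n):
        x = [math.modf(i * z[j] / n + shift[j])[0] for j in range(d)]
        total += f(x)
    return total / n

random.seed(1)
n, z = 987, [1, 610]        # Fibonacci lattice: n = F_16, z = (1, F_15)
f = lambda x: x[0] * x[1]   # true integral over [0,1]^2 is 1/4
# Averaging over independent random shifts gives an unbiased estimator
# and allows a practical error estimate from the spread of the replicates.
shifts = [[random.random(), random.random()] for _ in range(8)]
estimates = [shifted_lattice_estimate(f, n, z, s) for s in shifts]
qmc = sum(estimates) / len(estimates)
print(qmc)
```

    For smooth integrands like this one, the shift-averaged lattice estimate is far more accurate than plain Monte Carlo with the same number of points, which is the convergence behaviour the paper establishes for the (nonsmooth) two-stage integrands under its geometric condition.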

    Efficient information collection in stochastic optimisation

    This thesis focuses on a class of information collection problems in stochastic optimisation. Algorithms in this area often need to measure the performance of several potential solutions and use the collected information in their search for high-performance solutions, but have only a limited measurement budget. A simple approach that allocates simulation time equally over all potential solutions may waste time collecting additional data on alternatives that can quickly be identified as non-promising. Instead, algorithms should adapt their measurement strategy, iteratively examining the statistical evidence collected so far and focusing computational effort on the most promising alternatives. This thesis develops new efficient methods of collecting information for use in stochastic optimisation problems. First, we investigate an efficient measurement strategy for the solution selection procedure of two-stage linear stochastic programs. In this procedure, finite computational resources must be allocated among numerous potential solutions to estimate their performance and identify the best solution. We propose a two-stage sampling approach that exploits a Wasserstein-based screening rule and an optimal computing budget allocation technique to improve the efficiency of obtaining a high-quality solution. Numerical results show our method provides good trade-offs between computational effort and solution performance. We then address the information collection problems encountered in the search for robust solutions. Specifically, we use an evolutionary strategy to solve a class of simulation optimisation problems with computationally expensive black-box functions. We implement an archive sample approximation method to reduce the required number of evaluations.
The main challenge in applying this method is determining the locations of the additional samples drawn in each generation to enrich the information in the archive and minimise the approximation error. We propose novel sampling strategies that use the Wasserstein metric to estimate the possible benefit of a potential sample location on the approximation error. An empirical comparison with several previously proposed archive-based sample approximation methods demonstrates the superiority of our approaches. In the final part of this thesis, we propose an adaptive sampling strategy for the rollout algorithm applied to the clinical trial scheduling and resource allocation problem under uncertainty. The proposed strategy exploits the variance reduction technique of common random numbers and the empirical Bernstein inequality in a statistical racing procedure, which balances the exploration and exploitation of the rollout algorithm. Moreover, we present an augmented approach that uses a heuristic-based grouping rule to enhance simulation efficiency by breaking the overall action selection problem down into selection problems over small groups. Numerical results show that the proposed method provides competitive results within a reasonable amount of computational time.
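    The variance-reduction effect of common random numbers, which the thesis pairs with an empirical Bernstein racing procedure, can be sketched on an invented two-policy comparison (the cost model and its rates are made up for illustration):

```python
import random

def cost(policy_speedup, u):
    """Toy service cost driven by a single uniform random number u (made-up model)."""
    demand = 50 + 100 * u          # uniform demand on [50, 150]
    return demand / policy_speedup

def diff_estimates(n, crn, seed):
    """Samples of the cost difference between two policies, with or without
    common random numbers (CRN)."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n):
        u1 = rng.random()
        u2 = u1 if crn else rng.random()  # CRN: both policies see the same randomness
        diffs.append(cost(1.0, u1) - cost(1.2, u2))
    return diffs

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

var_crn = variance(diff_estimates(2000, True, 42))
var_ind = variance(diff_estimates(2000, False, 42))
print(var_crn, var_ind)  # the CRN difference estimator has far smaller variance
```

    Because both policies are evaluated on the same random demand, most of the noise cancels in the difference, so a racing procedure needs far fewer simulations to separate the policies.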

    Stochastic programming and agent-based simulation approaches for epidemics control and logistics planning

    This dissertation addresses the resource allocation challenges of fighting against infectious disease outbreaks. The goal of this dissertation is to formulate multi-stage stochastic programming and agent-based models to address the limitations of former literature in optimizing resource allocation for preventing and controlling epidemics and pandemics. In the first study, a multi-stage stochastic programming compartmental model is presented to integrate the uncertain disease progression and the logistics of resource allocation to control a highly contagious infectious disease. The proposed multi-stage stochastic program, which involves various disease growth scenarios, optimizes the distribution of treatment centers and resources while minimizing the total expected number of new infections and funerals due to an epidemic. Two new equity metrics are defined and formulated, namely infection and capacity equity, to explicitly consider equity for allocating treatment funds and facilities for fair resource allocation in epidemics control. The multi-stage value of the stochastic solution (VSS), demonstrating the superiority of the proposed stochastic programming model over its deterministic counterpart, is studied. The first model is applied to the Ebola Virus Disease (EVD) case in West Africa, including Guinea, Sierra Leone, and Liberia. In the following study, the previous model is extended to a mean-risk multi-stage vaccine allocation model to capture the influence of the outbreak scenarios with low probability but high impact. The Conditional Value at Risk (CVaR) measure used in the model enables a trade-off between the weighted expected loss due to the outbreak and expected risks associated with experiencing disastrous epidemic scenarios. A method is developed to estimate the migration rate between each infected region when limited migration data is available. The second study is applied to the case of EVD in the Democratic Republic of the Congo. 
In the third study, a new risk-averse multi-stage epidemics-ventilator-logistics compartmental stochastic programming model is developed to address the resource allocation challenges of mitigating COVID-19. This epidemiological logistics model accounts for the uncertainty of untested asymptomatic infections and incorporates short-term human migration. Disease transmission is also forecasted by deriving a new formulation of transmission rates that evolve over space and time under various non-pharmaceutical interventions, such as wearing masks, social distancing, and lockdowns. In the fourth study, a simulation-optimization approach is introduced to address the facility location and allocation challenges of COVID-19 vaccination. A detailed agent-based simulation model of COVID-19 is extended and integrated with a new vaccination center location and vaccine-allocation optimization model. The proposed agent-based simulation-optimization framework first simulates disease transmission and then minimizes the total number of infections over all the considered regions by choosing the optimal vaccine center locations and the vaccine allocation to those centers. Specifically, the simulation provides the number of susceptible and infected individuals in each geographical region for the current time period as input to the optimization model. The optimization model then minimizes the total number of estimated infections and provides the new vaccine center locations and vaccine allocation decisions for the following time period. Decisions are made on where to open vaccination centers and how many people should be vaccinated at each future stage in each region of the considered geographical area. These optimal decision values are then fed back into the simulation model to simulate the numbers of susceptible and infected individuals in the subsequent periods.
The agent-based simulation-optimization framework is applied to controlling COVID-19 in the state of New Jersey. The results provide insights into the optimal vaccine center location and vaccine allocation problem under varying budgets and vaccine types while anticipating potential epidemic growth over time and across spatial locations.
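    The compartmental logic behind these models can be sketched with a plain discrete-time SIR recursion (invented rates; the dissertation's models are far richer, with migration, testing, and logistics). A non-pharmaceutical intervention enters simply as a scaling of the transmission rate:

```python
def simulate_sir(beta, gamma, s0, i0, steps):
    """Discrete-time SIR recursion; s, i, r are fractions of the population.
    Returns the peak infected fraction and the final recovered fraction."""
    s, i, r = s0, i0, 1.0 - s0 - i0
    peak = i
    for _ in range(steps):
        new_inf = beta * s * i   # new infections this period
        new_rec = gamma * i      # new recoveries this period
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak, r

# Made-up rates; a lockdown-style intervention is modeled as halving beta.
peak_base, final_base = simulate_sir(0.4, 0.1, 0.99, 0.01, 300)
peak_npi,  final_npi  = simulate_sir(0.4 * 0.5, 0.1, 0.99, 0.01, 300)
print(peak_base, peak_npi)  # the intervention flattens the epidemic curve
```

    Optimization models of the kind described above embed such recursions as constraints and choose the intervention and resource variables (here collapsed into the single scaling factor) to minimize expected infections.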

    Approximation d’espérances conditionnelles guidée par le problème en optimisation stochastique multi-étapes

    ABSTRACT: In this thesis, we consider solution methods for general multistage stochastic optimization problems. Such problems arise in many fields of application, including finance, energy, logistics, transportation, and health care. They generally do not have closed-form solutions, since they feature mathematical expectations that cannot be computed exactly in most applications. For this reason, it is necessary to turn to numerical methods. One of them, which is the focus of this thesis, is the scenario-tree generation approach. Its aim is to substitute the underlying stochastic process with a finite subset of scenarios, so as to replace the conditional expectations with their finite-sum estimators. This reduces the size of the problem, which can then be solved with generic optimization solvers. The generation of scenario trees is subject to a trade-off between the approximation accuracy and the complexity of the resulting discretized problem: the former tends to increase the number of scenarios, whereas the latter tends to decrease it. This trade-off turns out to be fairly easy to satisfy for two-stage problems. However, it becomes much more difficult when problems are multistage, that is, when they have three stages or more. This stems from the fact that multistage problems require tree structures whose size (the number of nodes) grows exponentially as the number of stages increases. For this reason, much attention has been devoted to generating scenario trees in the multistage setting, and many methods have been developed on different theoretical or practical grounds. Most of them can be described as distribution-driven, as they aim at approximating the distribution of the stochastic process (or some features of it), according to their own criterion of what a good approximation is. The distribution-driven strategy yields consistent scenario-tree estimators under some weak conditions. For this reason, these methods have been successfully applied to many problems. However, this strategy does not capitalize on specific features of the multistage problem (e.g., the variability of its revenue function or the influence of its constraints), although these also play an important role in the scenario-tree approximation quality. Taking them into account would lead to scenario trees that are better suited to the problem and may therefore satisfy a better trade-off between accuracy and complexity. This, in turn, may make it possible to solve problems with more stages. In this thesis, we introduce a new problem-driven scenario-tree generation approach. It takes into account the whole structure of the optimization problem through its stochastic process, its revenue (or cost) function, and its sets of constraints. The approach is developed in a general multistage setting, hence it is not tied to a particular application or field of application. The conditions introduced throughout this thesis on the revenue function, the constraints, and the probability distribution essentially aim at ensuring that the problem is mathematically well-defined.
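    The basic mechanics of a scenario tree, namely replacing conditional expectations with finite, probability-weighted sums over child nodes, can be sketched on a toy three-stage tree (structure and numbers invented for illustration):

```python
# Toy 3-stage scenario tree: each node carries a stage cost and a list of
# (child, transition probability) pairs; leaves have no children.
tree = {
    "root": {"cost": 1.0, "children": [("u", 0.5), ("d", 0.5)]},
    "u":    {"cost": 2.0, "children": [("uu", 0.4), ("ud", 0.6)]},
    "d":    {"cost": 3.0, "children": [("du", 0.7), ("dd", 0.3)]},
    "uu":   {"cost": 5.0, "children": []},
    "ud":   {"cost": 1.0, "children": []},
    "du":   {"cost": 2.0, "children": []},
    "dd":   {"cost": 4.0, "children": []},
}

def expected_cost(node):
    """Backward recursion: node cost plus probability-weighted cost-to-go.
    The finite sum over children stands in for a conditional expectation."""
    n = tree[node]
    return n["cost"] + sum(p * expected_cost(c) for c, p in n["children"])

print(expected_cost("root"))
```

    With a branching factor of b over T stages the tree has on the order of b^T nodes, which is exactly the exponential growth that makes the accuracy-versus-complexity trade-off hard in the multistage setting.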