218 research outputs found

    Development of new scenario decomposition techniques for linear and nonlinear stochastic programming

    In the literature on optimization under uncertainty, a common approach to two- and multi-stage problems is scenario analysis: the uncertainty of some data in the problem is modeled by stage-specific random vectors with finite supports, and each realization is called a scenario. Using scenarios, it is possible to study smaller versions (subproblems) of the underlying problem. Among scenario decomposition techniques, the progressive hedging algorithm is one of the most popular methods for multi-stage stochastic programming problems. Despite its full decomposition over scenarios, the efficiency of progressive hedging is highly sensitive to some practical aspects, such as the choice of the penalty parameter and the handling of the quadratic term in the augmented Lagrangian objective function. For the choice of the penalty parameter, we review some of the popular methods and design a novel adaptive strategy that aims to better follow the progress of the algorithm. Numerical experiments on linear multistage stochastic test problems suggest that most existing techniques may exhibit premature convergence to a suboptimal solution, or converge to the optimal solution but at a very slow rate. In contrast, the new strategy appears robust and efficient: it converged to optimality in all our experiments and was the fastest in most of them. For the handling of the quadratic term, we review existing techniques and suggest replacing the quadratic term with a linear one. Although this method has yet to be tested, we expect it to reduce some numerical and theoretical difficulties of progressive hedging in linear problems.
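    The progressive hedging iteration can be sketched on a toy two-stage problem. The sketch below holds the penalty parameter fixed (it does not reproduce the adaptive strategy proposed above) and solves the quadratic scenario subproblems in closed form; all numbers are illustrative.

```python
# Progressive hedging on a toy problem: min_x E[(x - xi_s)^2] over scenarios.
# Each scenario subproblem carries the multiplier term w_s * x and the penalty
# (r/2) * (x - xbar)^2 from the augmented Lagrangian; here it solves in closed form.
probs = [0.2, 0.5, 0.3]          # scenario probabilities (illustrative)
xis   = [1.0, 2.0, 4.0]          # scenario realizations of the uncertain data
r     = 1.0                      # penalty parameter, held fixed in this sketch

w    = [0.0] * len(probs)        # scenario multipliers
xbar = 0.0                       # implementable (nonanticipative) first-stage value
for _ in range(300):
    # Scenario subproblems: argmin_x (x - xi_s)^2 + w_s * x + (r/2)(x - xbar)^2
    xs = [(2.0 * xi - ws + r * xbar) / (2.0 + r) for xi, ws in zip(xis, w)]
    xbar = sum(p * x for p, x in zip(probs, xs))         # aggregation step
    w = [ws + r * (x - xbar) for ws, x in zip(w, xs)]    # multiplier update

optimal = sum(p * xi for p, xi in zip(probs, xis))       # true minimizer: the mean
```

    Because the probability-weighted multipliers stay at zero throughout, the implementable value contracts geometrically toward the scenario mean, which is the true minimizer of this toy problem.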

    Nonlinear Programming Approaches for Efficient Large-Scale Parameter Estimation with Applications in Epidemiology

    The development of infectious disease models remains important to provide scientists with tools to better understand disease dynamics and to develop more effective control strategies. In this work we focus on estimating seasonally varying transmission parameters of infectious disease models from real measles case data. We formulate both discrete-time and continuous-time models and discuss the benefits and shortcomings of each. Additionally, this work demonstrates the flexibility inherent in large-scale nonlinear programming techniques and their ability to efficiently estimate transmission parameters even in very large problems. This computational efficiency and flexibility open the door to investigating many alternative model formulations and encourage the use of these techniques for estimating larger, more complex models, such as those with age-dependent dynamics, more complex compartment structures, and spatially distributed data. However, the size of these problems can become excessive even for these powerful estimation techniques, and parallel estimation strategies must be explored. Two parallel decomposition approaches are presented that exploit scenario-based decomposition and decomposition in time. These approaches show promise for certain types of estimation problems.
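    The estimation setting can be illustrated on a minimal discrete-time SIR model. The work itself uses much larger models and nonlinear programming solvers; here a coarse grid search stands in for the NLP step, and all parameter values are illustrative, not taken from the measles study.

```python
import numpy as np

def simulate_sir(beta, gamma=0.1, N=1_000.0, I0=1.0, steps=60):
    # Discrete-time SIR recursion: beta*S*I/N new infections per step,
    # gamma*I recoveries per step. Returns the infected trajectory.
    S, I = N - I0, I0
    traj = []
    for _ in range(steps):
        new_inf = beta * S * I / N
        S, I = S - new_inf, I + new_inf - gamma * I
        traj.append(I)
    return np.array(traj)

beta_true = 0.3
data = simulate_sir(beta_true)            # synthetic, noise-free "case counts"

# Least-squares estimation of the transmission parameter beta; a grid search
# stands in for the large-scale NLP solver used in the work.
grid = np.linspace(0.1, 0.5, 41)
sse = [np.sum((simulate_sir(b) - data) ** 2) for b in grid]
beta_hat = grid[int(np.argmin(sse))]
```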

    Spatial Decomposition for Differential Equation Constrained Stochastic Programs

    A wide variety of optimum design problems in engineering leads to optimization models constrained by ordinary or partial differential equations (ODEs or PDEs). Since analytical solutions exist only for the simplest problems, numerical methods based on discretizing the domain are required to obtain a non-differential description of the differential constraints. We chose the finite element method, which converts the differential-equation constraints into systems of linear equations. Real problems are often large scale and exceed the available computational capacity. Hence, we employ the progressive hedging algorithm (PHA), an efficient scenario decomposition method for solving scenario-based stochastic programs that can be implemented in parallel to reduce computing time. A modified PHA was used for an original concept of spatial decomposition based on the mesh created to approximate the differential-equation constraints. The algorithm consists of a few main steps: solve the problem with a coarse discretization, decompose the domain into overlapping parts, and solve it iteratively by the PHA with a finer discretization, using values from the coarse discretization as boundary conditions, until a given accuracy is reached. The spatial decomposition is applied to a basic test problem from civil engineering: the design of beam cross-section dimensions. The algorithms are implemented in the GAMS software, and the results are evaluated with respect to computational complexity and the length of the overlap.
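    The overlapping-subdomain idea can be illustrated in one dimension. The sketch below deviates from the work in several labeled ways: it uses finite differences instead of finite elements, plain alternating Schwarz sweeps instead of the PHA-driven iteration, and the model problem -u'' = 1 with zero boundary values instead of the beam design problem; grid sizes and overlap are illustrative.

```python
import numpy as np

def solve_sub(u, lo, hi, h, f=1.0):
    # Solve -u'' = f on interior nodes lo..hi of a uniform grid, taking the
    # current values u[lo-1] and u[hi+1] as Dirichlet boundary data.
    m = hi - lo + 1
    A = (np.diag(2.0 * np.ones(m))
         - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    b = f * np.ones(m)
    b[0]  += u[lo - 1] / h**2
    b[-1] += u[hi + 1] / h**2
    u[lo:hi + 1] = np.linalg.solve(A, b)

n = 41
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
u = np.zeros(n)                 # u(0) = u(1) = 0: endpoints are never updated
for _ in range(50):             # alternating Schwarz sweeps over two subdomains
    solve_sub(u, 1, 23, h)      # left part, boundary value taken at node 24
    solve_sub(u, 17, 39, h)     # right part, boundary value taken at node 16
exact = x * (1.0 - x) / 2.0     # analytic solution of -u'' = 1, u(0) = u(1) = 0
```

    The two subdomains overlap on nodes 16-24; each sweep uses the other subdomain's latest values as boundary conditions, mirroring the coarse-values-as-boundary-conditions step described above.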

    Sampling-Based Algorithms for Two-Stage Stochastic Programs and Applications

    In this dissertation, we present novel sampling-based algorithms for solving two-stage stochastic programming problems. Sampling-based methods provide an efficient approach to large-scale stochastic programs whose uncertainty may be defined on a continuous support. When sampling-based methods are employed, the process is usually viewed in two steps: sampling and optimization. When these two steps are performed in sequence, the overall process can be computationally very expensive. In this dissertation, we utilize the framework of internal sampling, where the sampling and optimization steps are performed concurrently. The dissertation comprises two parts. In the first part, we design a new sampling technique for solving two-stage stochastic linear programs with continuous recourse and incorporate it within the internal-sampling framework of stochastic decomposition. In the second part, we design an internal-sampling-based algorithm for solving two-stage stochastic mixed-integer programs with continuous recourse, built around a new stochastic branch-and-cut procedure. Finally, we show the efficiency of this method on large-scale practical problems arising in logistics and finance.
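    For contrast with the internal-sampling framework above, the simpler external approach (sample first, then optimize) can be sketched on the newsvendor model, the textbook two-stage program with continuous recourse. All costs and the demand distribution below are illustrative.

```python
import numpy as np

# Two-stage newsvendor: the first stage orders q at unit cost c; the second
# stage (recourse) sells min(q, D) at price p once demand D is revealed.
c, p = 1.0, 2.0
rng = np.random.default_rng(0)
demand = rng.uniform(0.0, 100.0, size=200_000)   # sampled demand scenarios

def saa_cost(q, demand):
    # Sample average of the two-stage cost  c*q - p*min(q, D).
    return c * q - p * np.mean(np.minimum(q, demand))

# For this model the sample-average-approximation minimizer is the empirical
# quantile at the critical ratio (p - c) / p, so the external-sampling
# "optimize after sampling" step collapses into a single quantile computation.
q_star = np.quantile(demand, (p - c) / p)
```

    With uniform demand on [0, 100] and critical ratio 0.5, the true optimal order is 50; the sampled solution lands nearby, and its sampled cost beats neighboring order quantities.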

    Multistage quadratic stochastic programming

    Multistage stochastic programming is an important tool in medium- to long-term planning where there are uncertainties in the data. In this thesis, we consider a special case of multistage stochastic programming in which each subprogram is a convex quadratic program. The results also apply if the quadratic objectives are replaced by convex piecewise quadratic functions, which have important applications in financial planning because they can serve as very flexible risk measures; the resulting stochastic programs can model multi-period portfolio planning problems tailored to the needs of individual investors. Using techniques from convex analysis and sensitivity analysis, we show that each subproblem of a multistage quadratic stochastic program is a polyhedral piecewise quadratic program with a convex Lipschitz objective. The objective of any subproblem is differentiable with Lipschitz gradient if all its descendant problems have unique dual variables, which can be guaranteed if the linear independence constraint qualification is satisfied. Expressions for arbitrary elements of the subdifferential and of the generalized Hessian at a point can be calculated from the quadratic pieces active at that point. Generalized Newton methods with linesearch are proposed for solving multistage quadratic stochastic programs. The algorithms converge globally, and if the piecewise quadratic objective is differentiable and strictly convex at the solution, convergence is also finite. A generalized Newton algorithm is implemented in Matlab, and numerical experiments demonstrate its effectiveness. The algorithm is tested on random data with 3, 4 and 5 stages and up to 315 scenarios, and it has also been successfully applied to two sets of test data from a capacity expansion problem and a portfolio management problem.
    Various strategies have been implemented to improve the efficiency of the proposed algorithm: trust-region variants with different parameters, warm-starting from the solution of a smaller version of the original problem, and sorting the stochastic right-hand sides to encourage faster convergence. The numerical results show that the proposed generalized Newton method is a highly accurate and effective method for multistage quadratic stochastic programs. For problems with the same number of stages, solution times increase linearly with the number of scenarios.
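    The generalized Newton step can be sketched in one dimension, far from the multistage setting of the thesis: the sketch below minimizes a convex C^1 function made of two quadratic pieces, using the curvature of the active piece as the generalized Hessian and an Armijo backtracking linesearch. On this example convergence is finite, as the abstract predicts for objectives that are differentiable and strictly convex at the solution; the function and constants are illustrative.

```python
def f(x):
    # Convex C^1 piecewise quadratic: two quadratic pieces glued at x = 1
    # (values and slopes match there). Minimum at x = 0.
    return x * x if x <= 1.0 else 2.0 * x * x - 2.0 * x + 1.0

def grad(x):
    return 2.0 * x if x <= 1.0 else 4.0 * x - 2.0

def hess(x):
    # Generalized Hessian: curvature of the piece active at x.
    return 2.0 if x <= 1.0 else 4.0

def generalized_newton(x, tol=1e-12, max_iter=50):
    for k in range(max_iter):
        g = grad(x)
        if abs(g) < tol:
            return x, k
        d = -g / hess(x)          # Newton direction from the active piece
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * g * d:   # Armijo backtracking
            t *= 0.5
        x += t * d
    return x, max_iter
```

    Starting from x = 3, one step lands inside the other piece and a second step hits the exact minimizer, so the iteration terminates after only a few iterations.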

    Stochastic optimization models for the reservoir management problem

    The problem of designing an optimal release schedule for a hydroelectric system is extremely challenging for companies like Rio Tinto and Hydro-Québec. It is essential to strike an adequate compromise between conflicting objectives such as riparian security, hydroelectric production, and navigation and irrigation needs. Operators must also consider the topology of the terrain, water delays, dependence between reservoirs, and several nonlinear physical phenomena. Even in a deterministic framework, it may be impossible to find a feasible solution under given hydrological conditions. Considering hydroelectric generation further complicates the problem: a realistic model must take into account the variable water head, which leads to an intractable bilinear, non-convex problem. In addition, various sources of uncertainty surround the elaboration of a production plan. The price of electricity on foreign markets, the availability of turbines, the load on the network, and the water inflows all remain uncertain at the time of fixing water releases and spills over the planning horizon. Neglecting this uncertainty and assuming perfect foresight leads to overly ambitious policies, which in turn generate disastrous consequences such as very rapid emptying or filling of reservoirs, and hence droughts or floods.
    This thesis considers the reservoir management problem with uncertain inflows. It aims at developing models and algorithms to improve the management of the Gatineau river, notably during the freshet. In this situation, it is essential to consider the randomness of inflows, since these drive the dynamics of the system and can lead to disastrous events such as floods. Flood management is particularly important for the Gatineau, since the river runs near the town of Maniwaki, which has witnessed several floods in the past and remains at significant risk. The river also represents a good case study because it comprises several reservoirs and dams; this high dimensionality makes it difficult to apply popular algorithms such as stochastic dynamic programming. To minimize the risk of floods, we initially propose a multi-stage stochastic program based on affine and lifted decision rules. We capture risk aversion by optimizing the conditional value-at-risk (CVaR). This work considers a simple polyhedral uncertainty representation based on the sample mean and variance. The second paper builds on this work by explicitly considering the serial correlation between inflows: it introduces ARIMA time-series models and details their incorporation into multi-stage stochastic programs with decision rules. The approach is then extended to take heteroscedasticity into account with GARCH models. The third work further refines the uncertainty representation by calibrating an ARMA model on the log of the inflows. This leads to a non-convex uncertainty set, which is approximated conservatively with a simple polyhedron. The model offers various advantages, such as increased forecasting skill and the ability to derive an analytical expression for the conditional expectation. To handle the variable water head, we propose a successive linear programming (SLP) algorithm that quickly yields good solutions. These works illustrate the value of using affine decision rules in conjunction with ARIMA models to obtain good-quality solutions to complex multi-stage stochastic problems.
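    The conditional value-at-risk used for risk aversion above admits the optimization-friendly form of Rockafellar and Uryasev, CVaR = min_t { t + E[(L - t)^+] / (1 - alpha) }, which is what lets it enter linear-programming and decision-rule models. A minimal sampled check, with an illustrative standard-normal loss:

```python
import numpy as np

rng = np.random.default_rng(1)
losses = rng.normal(0.0, 1.0, size=100_000)   # illustrative loss samples
alpha = 0.95

# Tail definition: average of the worst (1 - alpha) fraction of losses.
var = np.quantile(losses, alpha)
cvar_tail = losses[losses >= var].mean()

# Rockafellar-Uryasev form evaluated at its minimizer t* = VaR:
# CVaR = VaR + E[(L - VaR)^+] / (1 - alpha).
cvar_ru = var + np.mean(np.maximum(losses - var, 0.0)) / (1.0 - alpha)
```

    Both estimates agree up to sampling error, and for a standard normal loss CVaR at level 0.95 is about 2.06, comfortably above the VaR of about 1.64.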