    Regularized Decomposition of High-Dimensional Multistage Stochastic Programs with Markov Uncertainty

    We develop a quadratic regularization approach for the solution of high-dimensional multistage stochastic optimization problems characterized by a potentially large number of time periods/stages (e.g. hundreds), a high-dimensional resource state variable, and a Markov information process. The resulting algorithms are shown to converge to an optimal policy after a finite number of iterations under mild technical assumptions. Computational experiments are conducted in the setting of optimizing energy storage over a large transmission grid, which motivates both the spatial and temporal dimensions of our problem. Our numerical results indicate that the proposed methods converge significantly faster than their classical counterparts, with greater gains observed for higher-dimensional problems.
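    The general shape of such a regularization can be sketched as a proximal term added to each stage problem of a nested, SDDP-style decomposition; the notation below is illustrative rather than taken from the paper.

        % Illustrative regularized stage problem at iteration k (notation assumed):
        \begin{equation*}
          \min_{x_t \in \mathcal{X}_t(x_{t-1},\,\omega_t)}
            \; c_t^{\top} x_t
            \;+\; \widehat{Q}_{t+1}(x_t)
            \;+\; \frac{\rho_k}{2}\,\lVert x_t - \bar{x}_t^{\,k} \rVert_2^2
        \end{equation*}
        % \widehat{Q}_{t+1}: current cutting-plane model of the expected cost-to-go
        % \bar{x}_t^{k}: incumbent (prox-center) solution;  \rho_k: regularization weight

    The quadratic term damps the oscillation of trial solutions between iterations, which is one way faster convergence than with classical, unregularized cuts can arise.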

    Optimizing the management of multireservoir systems under shifting flow regimes

    Over the past few decades, significant research efforts have been devoted to the development of tools and techniques to improve the operational effectiveness of multireservoir systems. One of those efforts focuses on the incorporation of relevant hydrologic information into reservoir operation models. This is particularly relevant in regions characterized by low-frequency climate signals, where time series of river discharges exhibit regime-like behavior. Failure to properly capture such behavior yields suboptimal operating policies, especially in systems with large storage capacity such as large multireservoir systems. Hidden Markov models are a class of hydrological models that can accommodate both overdispersion and serial dependence in time series, two essential hydrological properties that must be captured when modeling a system whose climate switches between different states (e.g., dry, normal, and wet). In terms of reservoir operation, Stochastic Dual Dynamic Programming (SDDP) is one of the few optimization techniques that can accommodate both system and hydrologic complexity, that is, a large number of reservoirs and diverse hydrologic information. However, current SDDP formulations are unable to capture the long-term persistence of the streamflow process found in some regions. In this paper, we present an extension of the SDDP algorithm that handles long-term persistence and provides reservoir operating policies that explicitly capture regime shifts. Using the Senegal River Basin as a case study, we illustrate the potential gain associated with reservoir operating policies tailored to climate states.
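    As a rough illustration of the regime-like behaviour described above, the sketch below simulates annual flows from a three-state dry/normal/wet hidden Markov process with lognormal emissions; the transition probabilities and flow statistics are invented for illustration, not estimated from Senegal River data.

        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical three-state climate regime: 0 = dry, 1 = normal, 2 = wet.
        # Strong diagonal entries give the persistent, regime-like behaviour.
        P = np.array([[0.85, 0.10, 0.05],
                      [0.10, 0.80, 0.10],
                      [0.05, 0.10, 0.85]])
        mean_log_flow = np.array([5.5, 6.0, 6.5])   # invented log-scale mean annual flow per regime
        sd_log_flow = np.array([0.20, 0.25, 0.30])

        def simulate(n_years, state=1):
            """Simulate annual flows from a regime-switching (hidden Markov) process."""
            states, flows = [], []
            for _ in range(n_years):
                state = rng.choice(3, p=P[state])   # hidden climate state
                flows.append(rng.lognormal(mean_log_flow[state], sd_log_flow[state]))
                states.append(state)
            return np.array(states), np.array(flows)

        states, flows = simulate(200)
        print("fraction of years in each regime:", np.bincount(states, minlength=3) / len(states))
        print("mean annual flow by regime:", [round(flows[states == s].mean(), 1) for s in range(3)])

    A model of this kind reproduces both the overdispersion and the serial dependence mentioned in the abstract, because flows stay in a wet or dry regime for several consecutive years.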

    Benders decomposition method in reservoir management

    Stochastic programming for hydro-thermal unit commitment

    In recent years the deregulation of energy markets and the expansion of volatile renewable energy supplies have triggered an increased interest in stochastic optimization models for thermal and hydro-thermal scheduling. Several studies have modelled this as stochastic linear or mixed-integer optimization problems. Although a variety of efficient solution techniques have been developed for these models, little is published about the added value of stochastic models over deterministic ones. In the context of day-ahead and intraday unit commitment under wind uncertainty, we compare two-stage and multi-stage stochastic models to deterministic ones and quantify their added value. We show that stochastic optimization models achieve minimal operational cost without having to tune reserve margins in advance, and that their superiority over deterministic models grows with the amount of uncertainty in the relevant wind forecasts. We present a modification of the WILMAR scenario generation technique designed to match the properties of the errors in our wind forecasts, and show that this is needed to make the stochastic approach worthwhile. Our evaluation is done in a rolling-horizon fashion over the course of two years, using a 2020 central scheduling model of the British National Grid with transmission constraints and a detailed model of pumped storage operation and system-wide reserve and response provision. Solving stochastic problems directly is computationally intractable for large instances, and alternative approaches are required. In this study we use a Dantzig-Wolfe reformulation to decompose the problem by scenarios. We derive and implement a column generation method with dual stabilisation and novel primal and dual initialisation techniques. A fast, novel schedule combination heuristic is used to construct an optimal primal solution, and numerical results show that knowing this solution from the start also significantly improves the convergence of the lower bound in the column generation method. We test this method on instances of our British model and illustrate that convergence to within 0.1% of optimality can be achieved quickly.
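    The added-value comparison described above can be illustrated with a deliberately tiny dispatch example (all figures invented, not taken from the British model): a cheap slow unit must be committed before the wind outcome is known, and expensive fast recourse power covers any shortfall. Sizing the commitment against the mean forecast gives the deterministic plan; minimising expected cost over the wind scenarios gives the stochastic plan.

        import numpy as np

        demand = 100.0                                   # MW of firm demand in one delivery period
        wind_scenarios = np.array([10.0, 40.0, 70.0])    # equally likely wind outcomes (invented)
        probs = np.array([1 / 3, 1 / 3, 1 / 3])
        c_slow, c_fast = 20.0, 80.0                      # cost per MWh of committed vs. recourse power

        def expected_cost(slow_commit):
            """Commit `slow_commit` MW before wind is known; buy fast recourse power per scenario."""
            shortfall = np.maximum(demand - wind_scenarios - slow_commit, 0.0)
            return c_slow * slow_commit + probs @ (c_fast * shortfall)

        # Deterministic plan: size the commitment against the mean wind forecast only.
        det_commit = demand - wind_scenarios.mean()
        # Stochastic plan: minimise expected cost over the scenarios (grid search is enough here).
        grid = np.linspace(0.0, demand, 1001)
        sto_commit = grid[np.argmin([expected_cost(x) for x in grid])]

        print(f"deterministic commitment {det_commit:.0f} MW, expected cost {expected_cost(det_commit):.0f}")
        print(f"stochastic commitment    {sto_commit:.0f} MW, expected cost {expected_cost(sto_commit):.0f}")

    With these invented numbers the stochastic plan commits 90 MW rather than 60 MW and cuts the expected cost from 2000 to 1800; the gap between the two plans is the kind of added value the thesis quantifies at full scale.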

    Optimising supermarket promotions of fast moving consumer goods using disaggregated sales data: A case study of Tesco and their small and medium sized suppliers

    The use of price promotions for fast-moving consumer goods (FMCGs) by supermarkets has increased substantially over the last decade, with significant implications for all stakeholders (suppliers, service providers and retailers) in terms of profitability and waste. The overall impact of price promotions depends on the complex interplay of demand- and supply-side factors, which has received limited attention in the academic literature. There is anecdotal evidence that in many cases, and particularly for products supplied by small and medium-sized enterprises (SMEs), price promotions are implemented with limited understanding of these factors, resulting in missed sales opportunities and avoidable promotional waste. This is particularly dangerous for SMEs, which often operate with tight margins and limited resources. A better understanding of consumer demand, through the use of disaggregated sales data (by shopper segment and store type), can facilitate more accurate forecasting of promotional uplifts and more effective allocation of stock, to maximise promotional sales and minimise promotional waste. However, there is little evidence that disaggregated data is widely or routinely used by supermarkets or their suppliers, particularly for those products supplied by SMEs. Moreover, the bulk of the published research on the impact of price promotions is focussed either on modelling consumer response, using claimed behaviour or highly aggregated scanner data, or on replenishment processes (frameworks and models) that bear little resemblance to the way in which the majority of food SMEs operate. This thesis explores the scope for improving the planning and execution of supermarket promotions, in the specific context of products supplied by SMEs, through the use of disaggregated sales data to forecast promotional sales and allocate promotional stock. An innovative case study methodology is used, combining qualitative research to explore the promotional processes used by SMEs supplying the UK’s largest supermarket, Tesco, with simulation modelling, using supermarket loyalty card data and store-level sales data, to estimate short-term promotional impacts under different scenarios and to derive optimal stock allocations using mixed-integer linear programming (MILP). The results suggest that promotions are often designed, planned and executed with little formalised analysis or use of disaggregated sales data and with limited consideration of the interplay between supply and demand. The simulation modelling and MILP demonstrate the benefits of using supermarket loyalty card data and store-level sales data to forecast demand and allocate stock, through higher promotional uplifts and reduced levels of promotional waste.
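    To make the allocation step concrete, here is a minimal sketch of the kind of stock-allocation MILP the thesis describes, using scipy's mixed-integer interface; the store clusters, uplift and waste figures, and the choice of solver are all assumptions for illustration, not details of the Tesco case study.

        import numpy as np
        from scipy.optimize import Bounds, LinearConstraint, milp

        # Hypothetical forecasts per store cluster, derived in practice from disaggregated
        # loyalty-card and store-level sales data (all figures invented for illustration).
        uplift = np.array([12.0, 9.5, 7.0, 4.0])   # extra units sold per promotional case allocated
        waste = np.array([0.5, 0.8, 1.2, 2.0])     # units expected to be wasted per case allocated

        total_cases = 500       # promotional stock the SME supplier can make available
        max_per_cluster = 200   # shelf/backroom limit per store cluster

        # Decision: integer number of cases shipped to each cluster.
        # Objective: maximise (uplift - waste), i.e. minimise its negative.
        c = -(uplift - waste)
        stock_limit = LinearConstraint(np.ones((1, 4)), lb=0, ub=total_cases)

        res = milp(c,
                   constraints=[stock_limit],
                   integrality=np.ones(4),              # all variables integer
                   bounds=Bounds(lb=0, ub=max_per_cluster))

        allocation = res.x.round().astype(int)
        print("cases per store cluster:", allocation)
        print("expected net promotional benefit:", round(float(-(c @ res.x)), 1))

    With a purely linear objective the solver simply fills the highest-net-benefit clusters first; the substance lies in how well the disaggregated data forecasts the uplift and waste coefficients per store and shopper segment.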

    The value of hydrological information in multireservoir systems operation

    The optimal operation of a multireservoir hydroelectric system is a complex, multistage, stochastic decision-making problem involving, among others, (i) a trade-off between the immediate and future consequences of a decision, (ii) considerable risks and uncertainties, and (iii) multiple objectives and operational constraints. The reservoir operation problem is often formulated as an optimization problem, but no single reference optimization approach exists; dynamic programming (DP) has been the most popular technique applied to it. The stochastic formulation of DP (SDP) explicitly accounts for streamflow uncertainty in the DP recursive equation. Different approaches to incorporating additional hydrologic and climatic information have been developed and have revealed the potential to enhance SDP-derived policies. However, all these techniques are limited to small-scale systems due to the so-called curse of dimensionality. Stochastic Dual Dynamic Programming (SDDP), an extension of traditional SDP developed in the 1990s, is one of the few algorithmic solutions used to determine the operating policies of large-scale hydropower systems. In SDDP, hydrologic uncertainty is captured through a multi-site periodic autoregressive model with spatially correlated residuals; this analytical linear model is required to derive some of the parameters needed to implement the optimization technique. In practice, reservoir inflows can be affected by other observable variables, such as snow water equivalent and/or sea surface temperature. These variables, called exogenous variables, can better describe the hydrologic processes and therefore enhance reservoir operating policies. The main objective of this PhD is to assess the economic value of SDDP-derived operating policies in large-scale water systems using various hydro-climatic information. The first task focuses on incorporating the multi-lag autocorrelation of the hydrologic variables in the SDDP algorithm; the second task is devoted to incorporating different exogenous hydrologic variables. The hydroelectric system of Rio Tinto (RT), located in the Saguenay-Lac-Saint-Jean River Basin, is used as a case study. Since RT's hydropower system cannot produce the entire amount of energy demanded by the smelters to fully sustain aluminum production, a portfolio of energy contracts with Hydro-Québec is available, and the SDDP model was modified to include decisions on the management of these contracts. The end result is a decision support system for the management of a large portfolio of physical and financial assets using various hydro-climatic information. The overall results reveal the extent of the gains in energy production that operators can expect as more hydrologic variables are included in the state-space vector.
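    A generic form of the inflow model sketched above, with several autoregressive lags and an exogenous climate predictor, is written out below; the symbols are illustrative and the exact SDDP-compatible formulation developed in the thesis may differ.

        % Periodic AR(p) inflow model with an exogenous hydrologic predictor (illustrative):
        \begin{equation*}
          q_t \;=\; \mu_t \;+\; \sum_{i=1}^{p} \phi_{t,i}\,\bigl(q_{t-i} - \mu_{t-i}\bigr)
                \;+\; \gamma_t\, z_t \;+\; \varepsilon_t,
          \qquad \varepsilon_t \sim \mathcal{N}(0, \Sigma_t)
        \end{equation*}
        % q_t: vector of reservoir inflows in period t;  \mu_t: periodic mean
        % z_t: exogenous variable (e.g. snow water equivalent or a sea surface temperature index)
        % \Sigma_t: spatial correlation of the residuals across sites

    Keeping the model linear in the lagged inflows and in the exogenous variable is what allows these hydrologic quantities to be carried in the SDDP state vector without breaking the convexity the algorithm relies on.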

    Towards More Nuanced Patient Management: Decomposing Readmission Risk with Survival Models

    Unplanned hospital readmissions are costly and associated with poorer patient outcomes. Overall readmission rates have also come to be used as performance metrics for reimbursement in healthcare policy, further motivating hospitals to identify and manage high-risk patients. Many models predicting readmission risk have been developed to facilitate the equitable measurement of readmission rates and to support hospital decision-makers in prioritising patients for interventions. However, these models consider only the overall risk of readmission and are often restricted to a single time point. This work aims to develop the use of survival models to better support hospital decision-makers in managing readmission risk. First, semi-parametric statistical and nonparametric machine learning models are applied to adult patients admitted via the emergency department at Gold Coast University Hospital (n = 46,659) and Robina Hospital (n = 23,976) in Queensland, Australia. Overall model performance is assessed on discrimination and calibration, as measured by time-dependent concordance and D-calibration. Second, a framework based on iterative hypothesis development and model fitting is proposed for decomposing readmission risk into persistent, patient-specific baselines and transient, care-related components using a sum-of-exponential-hazards structure. Third, criteria for patient prioritisation based on the duration and magnitude of the care-related risk components are developed. The extensibility of the framework and the resulting prioritisation criteria is considered for alternative populations, such as outpatient admissions and specific diagnosis groups, and for different modelling techniques. Time-to-event models have rarely been applied to readmission modelling, but they can provide a rich description of the evolution of readmission risk post-discharge and support more nuanced patient management decisions than simple classification models.
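    One plausible way to write the sum-of-exponential-hazards decomposition referred to above (the notation and functional details are assumed here, not taken from the thesis) is:

        % Illustrative decomposition of patient i's post-discharge readmission hazard:
        \begin{equation*}
          h_i(t) \;=\; \underbrace{\lambda_i}_{\text{persistent, patient-specific baseline}}
                 \;+\; \sum_{j} \underbrace{a_{ij}\, e^{-t/\tau_j}}_{\text{transient, care-related components}},
          \qquad t \ge 0
        \end{equation*}

    so that patients can be prioritised by the magnitude (a_{ij}) and duration (\tau_j) of the transient components rather than by a single overall risk score.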