48 research outputs found

    Water-related modelling in electric power systems: WATERFLEX Exploratory Research Project: version 1

    Get PDF
    Water is needed for energy. For instance, hydropower generates more electricity worldwide than any other technology except fossil-fuelled power plants, and its production depends on water availability and variability. Additionally, thermal power plants need water for cooling in order to generate electricity. Conversely, energy is also needed for water. Given the additional hydropower potential expected worldwide in the coming years, the high dependence of electricity generation on fossil-fuelled power plants, and the implications of climate change, relevant international organisations have paid attention to the water-energy nexus (or, more explicitly in a power system context, the water-power nexus). The Joint Research Centre of the European Commission, the United States Department of Energy, the Institute for Advanced Sustainability Studies, the Midwest Energy Research Consortium and the Water Council, and the Organisation for Economic Co-operation and Development, among others, have raised awareness about this nexus and the need to analyse it as an integrated system. Properly analysing such linkages between the power and water sectors requires appropriate modelling frameworks and mathematical approaches. This report describes the water-constrained models in electric power systems developed within the WATERFLEX Exploratory Research Project of the European Commission's Joint Research Centre to analyse the water-power interactions. All these models are deemed modules of the Dispa-SET modelling tool. Version 1 of the medium-term hydrothermal coordination module is presented with some modelling extensions, namely the incorporation of transmission network constraints, water demands, and ecological flows. Another salient feature of this version of Dispa-SET is the modelling of the stochastic medium-term hydrothermal coordination problem.
The stochastic problem is solved using an efficient scenario-based decomposition technique, the so-called Progressive Hedging algorithm. This technique is an Augmented-Lagrangian-based decomposition method that decomposes the original problem into smaller subproblems, one per scenario. The Progressive Hedging algorithm has multiple advantages: it is easily parallelizable due to its inherent structure; it provides solution stability and better computational performance than Benders-like (node-based) decomposition techniques; it scales better for large-scale stochastic programming problems; and it has been widely used in the technical literature, demonstrating its efficiency. Its implementation has been carried out through the PySP software package, which is part of the Coopr open-source Python repository for optimisation. This report also describes the cooling-related constraints included in the unit commitment and dispatch module of Dispa-SET. These constraints encompass limitations on the allowable maximum water withdrawals of thermal power plants and modelling of the power produced as a function of the river water temperature at the power plant inlet. Limitations on thermal releases or water withdrawals could be imposed for physical or policy reasons. Finally, an offline, decoupled modelling framework is presented to link these modules with the rainfall-runoff hydrological LISFLOOD model. This modelling framework is able to accurately capture the water-power interactions. Some challenges and barriers to properly addressing the water-power nexus are also highlighted in the report. JRC.C.7 - Knowledge for the Energy Union
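The scenario decomposition described above can be sketched on a toy problem. The following is a minimal Progressive Hedging loop (illustrative only, not the PySP/Dispa-SET implementation) with a single first-stage variable and a closed-form quadratic subproblem per scenario:

```python
# Progressive Hedging on a toy two-stage problem: choose one first-stage
# decision x to minimize E_s[(x - d_s)^2] over scenarios s with
# probabilities p_s. Scenario names and numbers here are hypothetical.

def progressive_hedging(demands, probs, rho=1.0, iters=100):
    """Each scenario keeps its own copy of x; an augmented-Lagrangian
    penalty pulls the copies toward the consensus value xbar."""
    x = dict(demands)              # per-scenario first-stage copies
    w = {s: 0.0 for s in demands}  # dual (price) multipliers
    xbar = sum(probs[s] * x[s] for s in x)
    for _ in range(iters):
        for s in x:
            # closed-form minimizer of (x-d_s)^2 + w_s*x + rho/2*(x-xbar)^2
            x[s] = (2 * demands[s] - w[s] + rho * xbar) / (2 + rho)
        xbar = sum(probs[s] * x[s] for s in x)
        for s in x:
            w[s] += rho * (x[s] - xbar)   # price out non-anticipativity
    return x

sol = progressive_hedging({"wet": 10.0, "dry": 4.0}, {"wet": 0.5, "dry": 0.5})
```

The per-scenario copies converge to the expected-value solution (here 7.0), illustrating how the penalty term enforces non-anticipativity without ever forming the full extensive-form problem.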

    Modeling and Solving Large-scale Stochastic Mixed-Integer Problems in Transportation and Power Systems

    Get PDF
    In this dissertation, various optimization problems from the areas of transportation and power systems are investigated, with uncertainty considered in each problem. Specifically, a long-term electricity infrastructure investment problem is studied to address capacity expansion planning in electrical power systems with the integration of short-term operations. Future investment costs and real-time customer demands cannot be perfectly forecasted and are thus considered random. A maintenance scheduling problem is also studied for power systems, particularly for natural-gas-fueled power plants, taking into account gas contracting and the opportunity to purchase and sell gas in the spot market, with the maintenance schedule subject to the uncertainty of electricity and gas prices in the spot market. In addition, different vehicle routing problems are researched, seeking the route for each vehicle so that the total traveling cost is minimized subject to the constraints and uncertain parameters of the corresponding transportation systems. The investigation of each problem in this dissertation consists mainly of two parts: the formulation of its mathematical model and the development of a solution algorithm for solving the model. Stochastic programming is applied as the framework to model each problem and address the uncertainty, while the approach to dealing with the randomness varies according to the relationships between the uncertain elements and the objective functions or constraints. All the problems are modeled as stochastic mixed-integer programs, and the huge numbers of decision variables and constraints involved make each problem large-scale and very difficult to manage.
In this dissertation, efficient algorithms are developed for these problems using advanced methodologies of optimization and operations research, such as branch and cut, Benders decomposition, column generation, and Lagrangian methods. Computational experiments are implemented for each problem, and the results are presented and discussed. The research carried out in this dissertation should be beneficial to both researchers and practitioners seeking to model and solve similar optimization problems in transportation and power systems when uncertainty is involved.
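The Benders-type decomposition named above can be illustrated on a toy two-stage problem. The following L-shaped sketch is a generic illustration, not the dissertation's models; the capacity/recourse example and its numbers are hypothetical, chosen so subproblems have closed-form values and subgradients:

```python
# L-shaped (Benders) decomposition: choose capacity x in [0, 100] at unit
# cost c; scenario s with demand d_s incurs recourse Q_s(x) = q*max(d_s-x, 0).
# The master keeps x plus an epigraph variable theta supported by cuts.

c, q = 1.0, 3.0
scen = [(0.5, 30.0), (0.5, 90.0)]   # (probability, demand)

def recourse(x):
    """Expected recourse value and one subgradient at x."""
    val = sum(p * q * max(d - x, 0.0) for p, d in scen)
    sub = sum(p * -q for p, d in scen if d > x)
    return val, sub

def solve_master(cuts, xmax=100.0):
    """1-D master: min c*x + theta s.t. theta >= a + b*x for all cuts.
    A 1-D piecewise-linear minimum sits at a bound or a cut intersection,
    so enumerating those candidates solves the master exactly."""
    cand = {0.0, xmax}
    for i in range(len(cuts)):
        for j in range(i + 1, len(cuts)):
            (a1, b1), (a2, b2) = cuts[i], cuts[j]
            if b1 != b2:
                xi = (a2 - a1) / (b1 - b2)
                if 0.0 <= xi <= xmax:
                    cand.add(xi)
    return min(cand, key=lambda x: c * x + max(a + b * x for a, b in cuts))

cuts = [(0.0, 0.0)]                    # theta >= 0, since Q is nonnegative
for _ in range(20):
    x = solve_master(cuts)
    val, sub = recourse(x)
    theta = max(a + b * x for a, b in cuts)
    if val <= theta + 1e-9:
        break                          # master underestimate is tight: optimal
    cuts.append((val - sub * x, sub))  # optimality cut theta >= Q(x_k)+g_k(x-x_k)
```

For these numbers the method converges in three master solves to x = 90, the point where added capacity stops paying for avoided shortfall.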

    Large-scale unit commitment under uncertainty: an updated literature survey

    Get PDF
    The Unit Commitment problem in energy management aims at finding the optimal production schedule of a set of generation units while meeting various system-wide constraints. It has always been a large-scale, non-convex, difficult problem, especially in view of the fact that, due to operational requirements, it has to be solved in an unreasonably small time for its size. Recently, growing renewable energy shares have strongly increased the level of uncertainty in the system, making the (ideal) Unit Commitment model a large-scale, non-convex and uncertain (stochastic, robust, chance-constrained) program. We provide a survey of the literature on methods for the Uncertain Unit Commitment problem, in all its variants. We start with a review of the main contributions on solution methods for the deterministic versions of the problem, focussing on those based on mathematical programming techniques that are most relevant for the uncertain versions. We then present and categorize the approaches to the latter, while providing entry points to the relevant literature on optimization under uncertainty. This is an updated version of the paper "Large-scale Unit Commitment under uncertainty: a literature survey" that appeared in 4OR 13(2), 115-171 (2015); this version has over 170 more citations, most of which appeared in the last three years, showing how fast the literature on uncertain Unit Commitment evolves, and therefore the interest in this subject.
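As a concrete, if tiny, illustration of the deterministic problem surveyed above, the following sketch enumerates commitments for three hypothetical units in a single period; real unit commitment models add ramping, minimum up/down times, reserves, multiple periods, and far more units, which is why they need the mathematical programming techniques the survey covers:

```python
from itertools import chain, combinations

# Minimal single-period unit commitment by brute force (illustrative only).
# Each hypothetical unit: (start/fixed cost, marginal cost, p_min, p_max).
UNITS = {
    "coal": (100.0, 20.0, 50.0, 200.0),
    "gas":  (40.0,  35.0, 20.0, 100.0),
    "oil":  (10.0,  60.0, 10.0, 60.0),
}

def commit(demand):
    """Enumerate on/off combinations; dispatch committed units in merit
    order above their must-run minimums. Returns (cost, committed set)."""
    names = list(UNITS)
    subsets = chain.from_iterable(
        combinations(names, k) for k in range(len(names) + 1))
    best = (float("inf"), frozenset())
    for on in subsets:
        lo = sum(UNITS[u][2] for u in on)
        hi = sum(UNITS[u][3] for u in on)
        if not (lo <= demand <= hi):
            continue                      # commitment cannot meet demand
        cost = sum(UNITS[u][0] for u in on)
        rem = demand - lo                 # energy above the p_min block
        for u in sorted(on, key=lambda u: UNITS[u][1]):   # cheapest first
            take = min(rem, UNITS[u][3] - UNITS[u][2])
            cost += UNITS[u][1] * (UNITS[u][2] + take)
            rem -= take
        if cost < best[0]:
            best = (cost, frozenset(on))
    return best

cost, on = commit(220.0)
```

Even this toy shows the combinatorial core: the on/off choices interact with the continuous dispatch, so the problem cannot be solved by merit order alone.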

    Importance sampling for stochastic programming

    Get PDF
    Stochastic programming models are large-scale optimization problems that are used to facilitate decision-making under uncertainty. Optimization algorithms for such problems need to evaluate the expected future costs of current decisions, often referred to as the recourse function. In practice, this calculation is computationally difficult as it requires the evaluation of a multidimensional integral whose integrand is an optimization problem. Consequently, the recourse function has to be estimated using techniques such as scenario trees or Monte Carlo methods, both of which require numerous function evaluations to produce accurate results for large-scale problems with multiple periods and high-dimensional uncertainty. In this thesis, we introduce an importance sampling framework for stochastic programming that can produce accurate estimates of the recourse function using a small number of samples. Previous approaches to importance sampling in stochastic programming were limited to problems where the uncertainty was modelled using discrete random variables and the recourse function was additively separable in the uncertain dimensions. Our framework avoids these restrictions by pairing Markov chain Monte Carlo methods with kernel density estimation algorithms to build a non-parametric importance sampling distribution, which can then be used to produce a low-variance estimate of the recourse function. We demonstrate the increased accuracy and efficiency of our approach using variants of well-known multistage stochastic programming problems. Our numerical results show that our framework produces more accurate estimates of the optimal value of stochastic programming models, especially for problems with moderate-to-high variance distributions or rare-event distributions.
For example, in some applications we found that when the random variables are drawn from a rare-event distribution, our proposed algorithm achieves a fourfold reduction in the mean square error and variance given by existing methods (e.g., SDDP with crude Monte Carlo or SDDP with quasi-Monte Carlo sampling) for the same number of samples. When the random variables are drawn from a high-variance distribution, our proposed algorithm reduces the variance on average by a factor of two compared to the results obtained by other methods, for approximately the same mean square error and a fixed number of samples. We use our proposed algorithm to solve a capacity expansion planning problem in the electric power industry. The model includes the unit commitment problem and maintenance scheduling, and allows investors to make optimal decisions on the capacity and type of generators to build in order to minimize capital and operating costs over a long period of time. Our model computes the optimal schedule for each generator while meeting demand and respecting each generator's engineering constraints. We use an aggregation method to group generators with similar features in order to reduce the problem size. The numerical experiments show that by clustering generators of the same technology and similar size together and applying the SDDP algorithm with our proposed sampling framework to this simplified formulation, we are able to solve the problem in only one quarter of the time required to solve the original problem with conventional algorithms. The speed-up is achieved without a significant reduction in solution quality.
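The variance-reduction idea above can be seen on a textbook rare-event example. This is a generic importance-sampling sketch, not the thesis's MCMC/KDE framework: crude Monte Carlo almost never samples the tail event, while a shifted proposal with likelihood-ratio weights estimates it accurately from a few thousand draws:

```python
import math
import random

# Estimate the rare-event probability p = P(X > 4) for X ~ N(0,1).
# Proposal: N(4,1), centered on the tail. Weight = target/proposal density.

def is_estimate(n, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(4.0, 1.0)               # draw from the shifted proposal
        if x > 4.0:
            # phi(x)/phi(x-4) simplifies to exp(8 - 4x)
            total += math.exp(8.0 - 4.0 * x)
        # draws with x <= 4 contribute zero to the indicator
    return total / n

p_hat = is_estimate(2000)
p_true = 0.5 * math.erfc(4.0 / math.sqrt(2.0))   # about 3.17e-5
```

With 2000 crude Monte Carlo samples the expected number of tail hits is under 0.1, so the crude estimate is usually exactly zero; the weighted estimator concentrates every sample where the integrand is nonzero.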

    Importance sampling in stochastic programming: A Markov chain Monte Carlo approach

    No full text

    Multistage quadratic stochastic programming

    Full text link
    Multistage stochastic programming is an important tool in medium- to long-term planning where there are uncertainties in the data. In this thesis, we consider a special case of multistage stochastic programming in which each subprogram is a convex quadratic program. The results are also applicable if the quadratic objectives are replaced by convex piecewise quadratic functions. Convex piecewise quadratic functions have important applications in financial planning problems, as they can be used as very flexible risk measures, and the resulting stochastic programs can serve as multi-period portfolio planning problems tailored to the needs of individual investors. Using techniques from convex analysis and sensitivity analysis, we show that each subproblem of a multistage quadratic stochastic program is a polyhedral piecewise quadratic program with a convex Lipschitz objective. The objective of any subproblem is differentiable with Lipschitz gradient if all its descendant problems have unique dual variables, which can be guaranteed if the linear independence constraint qualification is satisfied. Expressions for arbitrary elements of the subdifferential and generalized Hessian at a point can be calculated for the quadratic pieces active at that point. Generalized Newton methods with linesearch are proposed for solving multistage quadratic stochastic programs. The algorithms converge globally. If the piecewise quadratic objective is differentiable and strictly convex at the solution, then convergence is also finite. A generalized Newton algorithm is implemented in Matlab. Numerical experiments have been carried out to demonstrate its effectiveness. The algorithm is tested on random data with 3, 4 and 5 stages, with a maximum of 315 scenarios. The algorithm has also been successfully applied to two sets of test data from a capacity expansion problem and a portfolio management problem.
Various strategies have been implemented to improve the efficiency of the proposed algorithm. We experimented with trust region methods with different parameters, with using an advanced starting solution obtained from a smaller version of the original problem, and with sorting the stochastic right-hand sides to encourage faster convergence. The numerical results show that the proposed generalized Newton method is a highly accurate and effective method for multistage quadratic stochastic programs. For problems with the same number of stages, solution times increase linearly with the number of scenarios.
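The finite-convergence behaviour described above can be seen on a one-dimensional toy. The following generalized Newton iteration is an illustrative sketch without the linesearch, not the thesis's multistage solver, applied to the hypothetical convex piecewise quadratic f(x) = 0.5*(x-3)^2 + max(0, x-1)^2:

```python
# On each quadratic piece the gradient is affine, so a Newton step using
# that piece's (generalized) Hessian jumps to the piece's minimizer --
# which is why convergence is finite once the right piece is found.

def grad(x):
    return (x - 3.0) + 2.0 * max(0.0, x - 1.0)

def gen_hessian(x):
    # one element of the generalized Hessian: second derivative per piece
    return 3.0 if x > 1.0 else 1.0

def generalized_newton(x, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) <= tol:
            break
        x -= g / gen_hessian(x)
    return x

x_star = generalized_newton(0.0)   # exact minimizer is x = 5/3
```

Starting from x = 0, one step lands on the right-hand piece and the next lands exactly on its minimizer, 5/3, matching the finite-termination property claimed for the differentiable strictly convex case.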

    An Analytics Approach To Designing Patient Centered Medical Home

    Get PDF
    Recently, the patient-centered medical home (PCMH) model has become a popular team-based approach focused on delivering more streamlined care to patients. In current medical home practice, a clinically based prediction framework is recommended because it can help match the portfolio capacity of PCMH teams with the actual load generated by a set of patients. Without such a balance between clinical supply and demand, issues such as excessive under- and over-utilization of physicians, long waiting times for appropriate treatment, and non-continuity of care will eliminate many advantages of the medical home strategy. In this research, we formulate the problem in two phases. In the first phase, we propose a multivariate version of multilevel structured additive regression (STAR) models, which involves a set of health care responses defined at the lowest level of the hierarchy, a set of patient factors to account for individual heterogeneity, and a set of higher-level effects to capture heterogeneity and dependence between patients within the same medical home team and facility. We show how a special class of such models can equivalently be represented and estimated in a structural equation modeling framework. A Bayesian variable selection with a spike-and-slab prior structure is then developed that allows including or dropping single effects as well as grouped coefficients representing particular model terms. We use a simple parameter expansion to improve the mixing and convergence properties of the Markov chain Monte Carlo simulation. A detailed analysis of the VHA medical home data is presented to demonstrate the performance and applicability of our method. In addition, by extending the hierarchical generalized linear model to include multivariate responses, we develop a clinical workload prediction model for care portfolio demands in a Bayesian framework.
The model allows for heterogeneous variances and unstructured covariance matrices for nested random effects that arise through complex hierarchical care systems. We show that a multivariate approach substantially enhances the precision of workload predictions at both primary and non-primary care levels. We also demonstrate that care demands depend not only on patient demographics but also on other utilization factors, such as length of stay. Our analyses of recent data from the Veterans Health Administration further indicate that risk adjustment for patient health conditions can considerably improve the predictive power of the model. In the second phase, with the help of the model developed in the first phase, we estimate the annual workload demand portfolio for each patient with given attributes. Together with healthcare service supply data, and based on the principle of balancing supply and demand, we develop stochastic optimization models that allocate patients to PCMH teams so as to balance supply and demand in the healthcare system. We propose several stochastic models and two solution approaches, Progressive Hedging and L-shaped Benders decomposition, describe the application of the two algorithms, and compare their performance.

    Decomposition techniques for large scale stochastic linear programs

    Get PDF
    Stochastic linear programming is an effective and often-used technique for incorporating uncertainty about future events into decision-making processes. Stochastic linear programs tend to be significantly larger than other types of linear programs and generally require sophisticated decomposition solution procedures. Detailed algorithms based upon Dantzig-Wolfe and L-shaped decomposition are developed and implemented. These algorithms allow solutions to within an arbitrary tolerance on the gap between the lower and upper bounds on a problem's objective function value. Special procedures and implementation strategies are presented that enable many multi-period stochastic linear programs to be solved with two-stage, instead of nested, decomposition techniques. Consequently, a broad class of large-scale problems, with tens of millions of constraints and variables, can be solved on a personal computer. Myopic decomposition algorithms, based upon a shortsighted view of the future, are also developed. Although unable to guarantee an arbitrary solution tolerance, myopic decomposition algorithms may yield very good solutions in a fraction of the time required by Dantzig-Wolfe/L-shaped decomposition algorithms. In addition, derivations are given for statistics, based upon Mahalanobis squared distances, that can be used to measure a random sample's effectiveness in approximating a parent distribution. Results and analyses are provided for the application of the decomposition procedures and sample effectiveness measures to a multi-period market investment model.
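The Mahalanobis-based sample-effectiveness idea above can be sketched in two dimensions. This is a minimal illustration with hypothetical numbers, not the dissertation's derived statistics: it scores how far a sample mean sits from the parent mean, scaled by the parent covariance of the sample mean:

```python
# Mahalanobis squared distance of a sample mean from its parent mean,
# using the covariance of the sample mean, Sigma/n (2-D, hand-inverted).

def mahalanobis_sq(sample_mean, mean, cov, n):
    dx = [sample_mean[0] - mean[0], sample_mean[1] - mean[1]]
    # covariance of the sample mean, then its 2x2 inverse
    a, b = cov[0][0] / n, cov[0][1] / n
    c, d = cov[1][0] / n, cov[1][1] / n
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    return (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
            + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))

# Hypothetical parent N(0, Sigma) and a sample of n = 25 draws whose mean
# came out at (0.3, -0.1).
d2 = mahalanobis_sq([0.3, -0.1], [0.0, 0.0],
                    [[1.0, 0.2], [0.2, 1.0]], 25)
```

Under normality this statistic is approximately chi-squared with 2 degrees of freedom, so values far above about 6 flag a sample that represents the parent distribution poorly.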

    A stochastic minimum principle and an adaptive pathwise algorithm for stochastic optimal control

    Get PDF
    We present a numerical method for finite-horizon stochastic optimal control models. We derive a stochastic minimum principle (SMP) and then develop a numerical method based on the direct solution of the SMP. The method combines Monte Carlo pathwise simulation and non-parametric interpolation methods. We present results from a standard linear quadratic control model, and a realistic case study that captures the stochastic dynamics of intermittent power generation in the context of optimal economic dispatch models. National Science Foundation (U.S.) (Grant 1128147); United States Dept. of Energy, Office of Science (Biological and Environmental Research Program Grants DE-SC0005171 and DE-SC0003906)
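The "standard linear quadratic control model" mentioned above has a textbook benchmark solution. The following scalar finite-horizon sketch solves it by backward Riccati recursion, the classical method rather than the paper's SMP-based pathwise algorithm, with hypothetical coefficients:

```python
# Scalar finite-horizon LQ control: dynamics x_{t+1} = a*x_t + b*u_t,
# stage cost q*x^2 + r*u^2, terminal cost q_T*x^2. The optimal control is
# the linear feedback u_t = -K_t * x_t from the backward Riccati sweep.

def lqr_gains(a, b, q, r, q_T, horizon):
    P = q_T
    gains = []
    for _ in range(horizon):                 # sweep backward in time
        K = a * b * P / (r + b * b * P)      # optimal feedback gain
        P = q + a * a * P - a * b * P * K    # Riccati update
        gains.append(K)
    gains.reverse()                          # gains[t] applies at time t
    return gains, P                          # P: cost-to-go coefficient at t=0

gains, P0 = lqr_gains(a=1.0, b=1.0, q=1.0, r=1.0, q_T=1.0, horizon=50)
```

For these coefficients the recursion converges quickly to its stationary values (P at the golden ratio, K at its reciprocal), which makes the model a convenient exact yardstick for validating simulation-based methods like the one described above.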