
    Heterogeneous Unit Clustering for Efficient Operational Flexibility Modeling for Strategic Models

    The increasing penetration of wind generation has led to significant improvements in unit commitment models. However, long-term capacity planning methods have not been similarly modified to address the challenges of a system with a large fraction of generation from variable sources. Designing future capacity mixes with adequate flexibility requires an embedded approximation of the unit commitment problem to capture operating constraints. Here we propose a method, based on clustering units, for a simplified unit commitment model with dramatic improvements in solution time that enable its use as a submodel within a capacity expansion framework. Heterogeneous clustering speeds computation by aggregating similar but non-identical units, thereby replacing large numbers of binary commitment variables with fewer integer variables that still capture individual unit decisions and constraints. We demonstrate the trade-off between accuracy and run-time for different levels of aggregation. A numeric example using an ERCOT-based 205-unit system illustrates that careful aggregation introduces errors of 0.05-0.9% across several metrics while providing solution times several orders of magnitude faster (400x) than traditional binary formulations; further aggregation increases errors slightly (~2x) with further speedup (2000x). We also compare other simplifications that can provide an additional order of magnitude speed-up for some problems.
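
    A minimal sketch of the clustering idea, not the paper's actual implementation: hypothetical unit parameters are grouped with k-means so that each cluster's commitment could be modeled as one integer variable (the number of committed units in the cluster) instead of one binary per unit. All unit data, the cluster count, and the feature choices below are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): group similar thermal units so that one
# integer "number of committed units" variable per cluster replaces many binaries.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical unit characteristics: [capacity MW, min output MW, ramp MW/h, min up time h]
units = np.array([
    [450, 200, 120, 8],
    [455, 210, 125, 8],
    [120,  40,  60, 4],
    [115,  35,  55, 4],
    [ 50,  10,  50, 1],
])

# Cluster on normalized characteristics; similar but non-identical units share a cluster.
features = StandardScaler().fit_transform(units)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

for c in np.unique(labels):
    members = units[labels == c]
    # In a clustered UC, commitment for cluster c would be an integer in [0, len(members)]
    # rather than len(members) separate binary variables.
    print(f"cluster {c}: {len(members)} units, avg capacity {members[:, 0].mean():.0f} MW")
```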

    Optimal Selection of Sample Weeks for Approximating the Net Load in Generation Planning Problems

    The increasing presence of variable energy resources (VER) in power systems, most notably wind and solar power, demands tools capable of evaluating the flexibility needed to compensate for the resulting variability in the system. Capacity expansion models are needed that embed unit commitment decisions and constraints to account for the interaction between hourly variability and realistic operating constraints. However, the dimensionality of this problem grows proportionally with the time horizon of the load profile used to characterize the system, requiring massive amounts of computing resources. One possible solution to this computational problem is to select a small number of representative weeks, but there is no consistent criterion for selecting these weeks or for assessing the validity of the approximation. This paper proposes a methodology to optimally select a given number of representative weeks that jointly characterize demand and VER output for capacity planning models aimed at evaluating flexibility needs. It also presents different measures to assess the error between the approximation and the complete time series. Finally, it demonstrates that the proposed methodology yields a valid approximation for unit commitment constraints embedded in long-term planning models.
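
    A rough illustration of the representative-week idea under assumed data, not the paper's selection method: a simple k-medoids-style greedy search picks the weeks whose hourly net-load profiles best cover a synthetic year. The net-load series, the distance metric, and the number of selected weeks are placeholder assumptions.

```python
# Illustrative sketch (not the paper's method): choose k representative weeks whose
# hourly net-load profiles best cover the full year, via a greedy k-medoids-style
# selection on Euclidean distance between weekly profiles.
import numpy as np

rng = np.random.default_rng(0)
hours_per_week = 168
net_load = rng.normal(50, 10, size=52 * hours_per_week)   # hypothetical net load (GW)
weeks = net_load.reshape(52, hours_per_week)

def select_weeks(weeks, k):
    """Greedily pick k medoid weeks minimizing total distance to the nearest medoid."""
    chosen = []
    for _ in range(k):
        best, best_cost = None, np.inf
        for cand in range(len(weeks)):
            if cand in chosen:
                continue
            medoids = weeks[chosen + [cand]]
            # Cost: each week of the year measured against its closest selected medoid.
            d = np.linalg.norm(weeks[:, None, :] - medoids[None, :, :], axis=2)
            cost = d.min(axis=1).sum()
            if cost < best_cost:
                best, best_cost = cand, cost
        chosen.append(best)
    return chosen

print("representative weeks:", select_weeks(weeks, k=4))
```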

    Airport Congestion Mitigation through Dynamic Control of Runway Configurations and of Arrival and Departure Service Rates under Stochastic Operating Conditions

    The high levels of flight delays require the implementation of airport congestion mitigation tools. In this paper, we optimize the utilization of airport capacity at the tactical level in the face of operational uncertainty. We formulate an original Dynamic Programming model that selects jointly and dynamically runway configurations and the balance of arrival and departure service rates at a busy airport to minimize congestion costs, under stochastic queue dynamics and stochastic operating conditions. The control is exercised as a function of flight schedules, of arrival and departure queue lengths, and of weather and wind conditions. We implement the model in a realistic setting at JFK Airport. The exact Dynamic Programming algorithm terminates within reasonable time frames. In addition, we implement an approximate one-step look-ahead algorithm that considerably accelerates the execution of the model and results in close-to-optimal policies. In combination, these solution algorithms enable the on-line implementation of the model using real-time information on flight schedules and meteorological conditions. The application of the model shows that the optimal policy is path-dependent, i.e., it depends on prior decisions and on the stochastic evolution of arrival and departure queues during the day. This underscores the theoretical and practical need for integrating operating stochasticity into the decision-making framework. From comparisons with an alternative model based on deterministic queue dynamics, we estimate the benefit of considering queue stochasticity at 5% to 20%. Finally, comparisons with advanced heuristics designed to imitate actual operating procedures suggest that the model can yield significant cost savings, estimated at 20% to 30%.
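
    A toy sketch of a one-step look-ahead rule of the kind described above, with invented configuration names, service rates, demand, and cost weights; it is not the JFK model, only an illustration of choosing the configuration that minimizes next-period congestion cost under a deterministic toy queue update.

```python
# Toy illustration (not the paper's JFK model): a one-step look-ahead rule that picks the
# runway configuration whose arrival/departure service rates minimize next-period
# congestion cost; a real implementation would add an approximate cost-to-go and
# stochastic weather and queue transitions.
CONFIGS = {                      # hypothetical (arrival, departure) service rates per period
    "arrivals_heavy":   (11, 8),
    "departures_heavy": (8, 11),
    "balanced":         (10, 9),
}
SCHEDULED_ARRIVALS, SCHEDULED_DEPARTURES = 12, 10   # assumed demand this period

def congestion_cost(arr_queue, dep_queue):
    return 1.0 * arr_queue + 0.8 * dep_queue        # assumed per-aircraft cost weights

def one_step_lookahead(arr_queue, dep_queue):
    best_cfg, best_cost = None, float("inf")
    for cfg, (arr_rate, dep_rate) in CONFIGS.items():
        next_arr = max(arr_queue + SCHEDULED_ARRIVALS - arr_rate, 0)
        next_dep = max(dep_queue + SCHEDULED_DEPARTURES - dep_rate, 0)
        cost = congestion_cost(next_arr, next_dep)
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost

print(one_step_lookahead(arr_queue=6, dep_queue=4))
```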

    Scenario analysis of carbon capture and sequestration generation dispatch in the western U.S. electricity system

    We present an analysis of the feasibility of dispatch of coal-fired generation with carbon capture and sequestration (CCS) as a function of location. Dispatch at each location is studied with regard to varying carbon dioxide (CO2) prices, demand levels, and natural gas prices. Using scenarios with a carbon price range of $0 to $100 per ton of CO2, we show that a hypothetical CCS generator would be dispatched on a marginal-cost basis given a high enough carbon price, but that the minimum carbon price required for dispatch varies widely by location and system demand.
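
    A back-of-the-envelope illustration (with assumed heat rates, emission rates, fuel prices, and O&M costs, not the study's data) of how a rising CO2 price changes the marginal-cost ordering of a CCS coal unit relative to an uncontrolled coal unit:

```python
# Illustrative arithmetic with assumed numbers: marginal cost of a CCS coal unit versus
# an uncontrolled coal unit as the CO2 price rises.
def marginal_cost(heat_rate, fuel_price, co2_rate, co2_price, vom):
    """$/MWh = fuel cost + carbon cost on uncaptured emissions + variable O&M.
    heat_rate in MMBtu/MWh, fuel_price in $/MMBtu, co2_rate in tons/MWh."""
    return heat_rate * fuel_price + co2_rate * co2_price + vom

for co2_price in (0, 25, 50, 100):                               # $/ton CO2
    ccs  = marginal_cost(10.5, 2.0, 0.10, co2_price, vom=8.0)    # assumed ~90% capture
    coal = marginal_cost( 9.0, 2.0, 0.95, co2_price, vom=4.0)    # assumed uncontrolled unit
    print(f"${co2_price:>3}/ton: CCS {ccs:5.1f} $/MWh, coal {coal:5.1f} $/MWh")
```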

    Rail Infrastructure Manager Problem: Analyzing Capacity Pricing and Allocation in Shared Railway System

    This paper proposes a train timetabling model for shared railway systems. The model is formulated as a mixed integer linear programming problem and solved both with commercial software and with a novel algorithm based on approximate dynamic programming. The results of the train timetabling model can be used to simulate and evaluate the behavior of the infrastructure manager in shared railway systems under different capacity pricing and allocation mechanisms. This would allow regulators and decision makers to identify the implications of these mechanisms for different stakeholders, taking into account the specific characteristics of the system.
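
    A minimal sketch of a train-timetabling MILP with headway constraints on a single shared segment, using PuLP as an assumed solver interface; the trains, requested slots, headway, and objective are hypothetical and far simpler than the model described above.

```python
# Illustrative sketch (not the paper's model): a tiny timetabling MILP that schedules
# departures on one shared track segment, minimizing deviation from requested slots
# subject to a minimum headway enforced with big-M ordering binaries.
import pulp

trains = ["intercity", "regional", "freight"]
requested = {"intercity": 0, "regional": 10, "freight": 15}   # preferred departures (min)
headway = 6                                                   # minimum separation (min)
M = 1_000                                                     # big-M for ordering

prob = pulp.LpProblem("shared_track_timetable", pulp.LpMinimize)
dep = {k: pulp.LpVariable(f"dep_{k}", lowBound=0) for k in trains}
dev = {k: pulp.LpVariable(f"dev_{k}", lowBound=0) for k in trains}

for i, a in enumerate(trains):
    # Deviation from each operator's requested slot (absolute value via two constraints).
    prob += dev[a] >= dep[a] - requested[a]
    prob += dev[a] >= requested[a] - dep[a]
    for b in trains[i + 1:]:
        # Binary: does a depart before b? Enforce the headway in either order.
        before = pulp.LpVariable(f"{a}_before_{b}", cat="Binary")
        prob += dep[b] >= dep[a] + headway - M * (1 - before)
        prob += dep[a] >= dep[b] + headway - M * before

prob += pulp.lpSum(dev.values())            # minimize total deviation from requests
prob.solve(pulp.PULP_CBC_CMD(msg=0))
print({k: dep[k].value() for k in trains})
```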

    Relative Roles of Climate Sensitivity and Forcing in Defining the Ocean Circulation Response to Climate Change

    The response of the ocean’s meridional overturning circulation (MOC) to increased greenhouse gas forcing is examined using a coupled model of intermediate complexity, including a dynamic 3D ocean subcomponent. Parameters are the increase in CO2 forcing (with stabilization after a specified time interval) and the model’s climate sensitivity. In this model, the cessation of deep sinking in the north “Atlantic” (hereinafter, a “collapse”), as indicated by changes in the MOC, behaves like a simple bifurcation. The final surface air temperature (SAT) change, which is closely predicted by the product of the radiative forcing and the climate sensitivity, determines whether a collapse occurs. The initial transient response in SAT is largely a function of the forcing increase, with higher-sensitivity runs exhibiting delayed behavior; accordingly, high-CO2, low-sensitivity scenarios can be assessed as recovering or collapsing shortly after stabilization, whereas low-CO2, high-sensitivity scenarios require several hundred additional years to make such a determination. We also systematically examine how the rate of forcing, for a given CO2 stabilization level, affects the ocean response. In contrast with previous studies based on results from simpler ocean models, we find that, except for a narrow range of marginally stable to marginally unstable scenarios, the forcing rate has little impact on whether the run collapses or recovers. In this narrow range, however, increasing the forcing on the time scale of slow ocean advective processes results in weaker declines in overturning strength and can permit a run to recover that would otherwise collapse. This research was supported in part by the Methods and Models for Integrated Assessments Program of the National Science Foundation, Grant ATM-9909139, by the Office of Science (BER), U.S. Department of Energy, Grant No. DE-FG02-93ER61677, and by the MIT Joint Program on the Science and Policy of Global Change (JPSPGC).

    Uncertainty in Greenhouse Emissions and Costs of Atmospheric Stabilization

    We explore the uncertainty in projections of emissions and in the costs of atmospheric stabilization by applying the MIT Emissions Prediction and Policy Analysis (EPPA) model, a computable general equilibrium model of the global economy. Monte Carlo simulation with Latin Hypercube Sampling is applied to draw 400 samples from probability distributions for 100 parameters in the EPPA model, including labor productivity growth rates, energy efficiency trends, elasticities of substitution, costs of advanced technologies, fossil fuel resource availability, and trends in emissions factors for urban pollutants. The resulting uncertainty in emissions and global costs is explored under a scenario assuming no climate policy and under four different targets for stabilization of atmospheric greenhouse gas concentrations. We find that most of the IPCC emissions scenarios are outside the 90% probability range of emissions in the absence of climate policy, and are consistent with atmospheric stabilization scenarios. We find considerable uncertainty in the emissions prices under stabilization. For example, the CO2 price in 2060 under an emissions constraint targeted to achieve stabilization at 650 ppm has a 90% range of $14 to $88 per ton CO2, and under a 450 ppm target the 2060 range is $241 to $758 per ton CO2. We also explore the relative contribution of uncertainty in different parameters to the resulting uncertainty in emissions and costs, and find that, despite the significant uncertainty in future energy supply technologies, the largest drivers of uncertainty in the costs of atmospheric stabilization are energy demand parameters, including elasticities of substitution and energy efficiency trends. The authors gratefully acknowledge the financial support for this work provided by the MIT Joint Program on the Science and Policy of Global Change through a consortium of industrial sponsors and Federal grants.
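
    A small sketch of Latin Hypercube Sampling over a few uncertain parameters, using SciPy's qmc module; the parameter names and marginal distributions below are placeholders, not the EPPA model's actual 100-parameter setup.

```python
# Illustrative sketch (not the EPPA setup): draw Latin Hypercube samples for a few
# uncertain parameters and map them onto assumed marginal distributions with SciPy.
import numpy as np
from scipy.stats import qmc, norm, lognorm

n_samples = 400
sampler = qmc.LatinHypercube(d=3, seed=0)
u = sampler.random(n_samples)               # stratified uniform [0, 1) samples per dimension

# Hypothetical marginals for three EPPA-like inputs (all values are placeholders):
labor_productivity = norm(loc=0.02, scale=0.005).ppf(u[:, 0])   # annual growth rate
energy_efficiency  = norm(loc=0.01, scale=0.004).ppf(u[:, 1])   # efficiency trend
advanced_tech_cost = lognorm(s=0.3, scale=1.0).ppf(u[:, 2])     # relative technology cost

print(labor_productivity.mean(), energy_efficiency.mean(), advanced_tech_cost.mean())
```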

    Analysis of Climate Policy Targets under Uncertainty

    Although policymaking in response to climate change is essentially a challenge of risk management, most studies of the relation of emissions targets to desired climate outcomes are either deterministic or subject to a limited representation of the underlying uncertainties. Monte Carlo simulation, applied to the MIT Integrated Global System Model (an integrated economic and earth system model of intermediate complexity), is used to analyze the uncertain outcomes that flow from a set of century-scale emissions targets developed originally for a study by the U.S. Climate Change Science Program. Results are shown for atmospheric concentrations, radiative forcing, sea ice cover, and temperature change, along with estimates of the odds of achieving particular target levels and of the global costs of the associated mitigation policy. Comparisons with other studies of climate targets are presented as evidence of the value, in understanding the climate challenge, of more complete analysis of uncertainties in human emissions and climate system response. This study received support from the MIT Joint Program on the Science and Policy of Global Change, which is funded by a consortium of government, industry and foundation sponsors.

    Probabilistic Forecast for 21st Century Climate Based on Uncertainties in Emissions (without Policy) and Climate Parameters

    The MIT Integrated Global System Model is used to make probabilistic projections of climate change from 1861 to 2100. Since the model's first projections were published in 2003, substantial improvements have been made to the model and improved estimates of the probability distributions of uncertain input parameters have become available. The new projections are considerably warmer than the 2003 projections; for example, the median surface warming in 2091 to 2100 is 5.1°C, compared to 2.4°C in the earlier study. Many changes contribute to the stronger warming; among the more important are taking into account the cooling in the second half of the 20th century due to volcanic eruptions when estimating input parameters, and a more sophisticated method for projecting GDP growth that eliminated many low-emission scenarios. However, if recently published data suggesting stronger 20th-century ocean warming are used to determine the input climate parameters, the median projected warming at the end of the 21st century is only 4.1°C. Nevertheless, all our simulations have a very small probability of warming less than 2.4°C, the lower bound of the IPCC AR4 projected likely range for the A1FI scenario, which has forcing very similar to our median projection. The probability distribution for the surface warming produced by our analysis is more symmetric than the distribution assumed by the IPCC, due to a different feedback between the climate and the carbon cycle resulting from a different treatment of the carbon-nitrogen interaction in the terrestrial ecosystem. This work was supported in part by the Office of Science (BER), U.S. Department of Energy, Grant No. DE-FG02-93ER61677, by NSF, and by the MIT Joint Program on the Science and Policy of Global Change.

    Changing the spatial location of electricity generation to increase water availability in areas with drought: a feasibility study and quantification of air quality impacts in Texas

    The feasibility, cost, and air quality impacts of using electrical grids to shift water use from drought-stricken regions to areas with more water availability were examined. Power plant cooling represents a large portion of freshwater withdrawals in the United States, and shifting where electricity generation occurs can allow the grid to act as a virtual water pipeline, increasing water availability in regions with drought by reducing water consumption and withdrawals for power generation. During a 2006 drought, shifting electricity generation out of the most impacted areas of South Texas (~10% of base-case generation) to other parts of the grid would have been feasible using the transmission and power generation available at the time, and some areas would have experienced changes in air quality. Although expensive, drought-based electricity dispatch is a potential parallel strategy that can be faster to implement than other infrastructure changes, such as air cooling or water pipelines. National Science Foundation (U.S.), Office of Emerging Frontiers in Research and Innovation (Grant 0835414); United States Dept. of Energy.
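
    Illustrative arithmetic (with assumed cooling-water rates and shifted generation, not the study's values) for the water freed up by moving generation away from a drought region:

```python
# Illustrative arithmetic with assumed numbers: freshwater withdrawal and consumption
# avoided by shifting generation away from plants with freshwater cooling in a drought
# region to plants elsewhere on the grid.
shifted_energy_mwh = 10_000                  # hypothetical generation shifted per day
withdrawal_rate_gal_per_mwh = 25_000         # assumed once-through cooling withdrawal rate
consumption_rate_gal_per_mwh = 300           # assumed evaporative consumption rate

withdrawal_avoided = shifted_energy_mwh * withdrawal_rate_gal_per_mwh
consumption_avoided = shifted_energy_mwh * consumption_rate_gal_per_mwh
print(f"withdrawal avoided: {withdrawal_avoided / 1e6:.0f} million gal/day")
print(f"consumption avoided: {consumption_avoided / 1e6:.1f} million gal/day")
```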