27,814 research outputs found
Strontium isotope stratigraphy in the Late Cretaceous: Numerical calibration of the Sr isotope curve and intercontinental correlation for the Campanian
The white Chalk exposed in quarries at Lägerdorf and Kronsmoor, northwestern Germany, provides a standard section for the European Upper Cretaceous. The Sr-87/Sr-86 values of nannofossil chalk and belemnite calcite increase upward through 330 m of section, from less than or equal to 0.70746 in the Upper Santonian to greater than or equal to 0.70777 in the Lower Maastrichtian. The data define three linear trends separated by major points of inflection at stratigraphic heights in the section of 162 m (75.5 Ma), in the Upper Campanian Galerites vulgaris zone, and of -6 m (82.9 Ma), just above the base of the Campanian in the Inoceramus lingua/Gonioteuthis quadrata zone. The temporal rate of change of Sr-87/Sr-86 was constant through each of the linear segments of our isotope "curve" when viewed at the resolution of our average sampling interval (0.15 m.y.). Fine structure, if real, may record brief (<100 kyr) excursions of Sr-87/Sr-86 from values expected from the overall trends. In Lägerdorf, the boundary between the Santonian and Campanian stages, taken here as the level of first occurrence of the belemnite Gonioteuthis granulataquadrata, has an Sr-87/Sr-86 of 0.707473 +/- 5. This is within error of the values of 0.707457 +/- 16 for this boundary in the U.S. western interior (base of the Scaphites leei III zone) and 0.707479 +/- 9 for this boundary in the English Chalk (top of the Marsupites testudinarius zone). In Kronsmoor, the boundary between the Campanian and Maastrichtian stages, taken here as the level of first occurrence of the belemnite Belemnella lanceolata, has an Sr-87/Sr-86 of 0.707723 +/- 4. This is within error of the values of 0.707725 +/- 20 for this boundary in the U.S. western interior (base of the Baculites eliasi zone) and 0.707728 +/- 5 for this boundary in the English Chalk (defined as in Germany).
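As a rough illustration of how linear Sr-87/Sr-86 trends like those described above support numerical age calibration, one can fit a straight line to calibration samples within a single linear segment and invert it for a new measurement. The calibration pairs below are hypothetical placeholders, not the measured German Chalk data.

```python
import numpy as np

# Hypothetical calibration samples within one linear segment of the Sr curve:
# numerical ages (Ma) and corresponding 87Sr/86Sr ratios (illustrative values).
calib_age_ma = np.array([83.0, 80.0, 77.0, 75.5])
calib_sr = np.array([0.70746, 0.70756, 0.70766, 0.70771])

# Fit a straight line Sr = a * age + b over the segment (least squares).
a, b = np.polyfit(calib_age_ma, calib_sr, 1)

def age_from_sr(sr):
    """Invert the linear trend to estimate a numerical age (Ma) from a ratio."""
    return (sr - b) / a

print(round(age_from_sr(0.70761), 1))  # estimated age (Ma) for a new sample
```

Within a segment where the rate of change of Sr-87/Sr-86 is constant, this inversion is exact up to measurement error; across an inflection point a separate line must be fitted.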
Are Targets for Renewable Portfolio Standards Too Low? The Impact of Market Structure on Energy Policy
In order to limit climate change from greenhouse gas emissions, governments have introduced renewable portfolio standards (RPS) to incentivise renewable energy production. While the response of industry to exogenous RPS targets has been addressed in the literature, setting RPS targets from a policymaker’s perspective has remained an open question. Using a bi-level model, we prove that the optimal RPS target for a perfectly competitive electricity industry is higher than that for a benchmark centrally planned one. Allowing for market power by the non-renewable energy sector within a deregulated industry lowers the RPS target vis-à-vis perfect competition. Moreover, to our surprise, social welfare under perfect competition with RPS is lower than that when the non-renewable energy sector exercises market power. In effect, by subsidising renewable energy and taxing the non-renewable sector, RPS represents an economic distortion that over-compensates damage from emissions. Thus, perfect competition with RPS results in “too much” renewable energy output, whereas the market power of the non-renewable energy sector mitigates this distortion, albeit at the cost of lower consumer surplus and higher emissions. Hence, ignoring the interaction between RPS requirements and the market structure could lead to sub-optimal RPS targets and substantial welfare losses.
Tradable Performance-Based CO2 Emissions Standards: Walking on Thin Ice?
Climate policy, like climate change itself, is subject to debate. Partially due to the political deadlock in Washington, DC, US climate policy has historically been driven mainly by state or regional efforts, until the recently introduced federal Clean Power Plan (CPP). Instead of a traditional mass-based standard, the US CPP stipulates a state-specific performance-based CO2 emission standard and delegates considerable flexibility to the states in achieving the standard. Typically, there are two sets of policy tools available: a tradable performance-based and a mass-based permit program. We analyze these two related but distinct standards when they are subject to imperfect competition in the product and/or permit markets. Stylized models are developed to produce general conclusions. Detailed models that account for heterogeneous technologies and the transmission network are developed to evaluate policy efficiency. Depending on the scenarios under consideration, the resulting problem could be either a complementarity problem or a Stackelberg leader-follower game, which is implemented as a mathematical program with equilibrium constraints (MPEC). We overcome the nonconvexity of MPECs by reformulating them as mixed-integer problems. We show that while the cross-subsidy inherent in the performance-based standard might effectively reduce power prices, it could inflate energy demand, thereby rendering permits scarce. When the leader in a Stackelberg formulation has a relatively clean endowment under the performance-based standard, its ability to manipulate the electricity market as well as to lower permit prices might worsen the market outcomes compared to its mass-based counterpart. On the other hand, when the leader has a relatively dirty endowment, the "cross-subsidy" could be the dominant force, leading to a higher social welfare compared to the mass-based program. This paper contributes to the current policy debates on regulating emissions from the US power sector and highlights the different incentives created by the mass- and performance-based standards.
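The reformulation of complementarity conditions as mixed-integer constraints, the general technique named in the abstract, is often done with a big-M disjunction. The toy problem, bound M, and numbers below are illustrative assumptions, not the paper's model: maximize x + y subject to x + y <= 4 and the complementarity condition x >= 0, y >= 0, x * y = 0.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

M = 10.0  # big-M bound, assumed to be a valid upper bound on x and y

# Variables [x, y, z]: binary z selects which side of the complementarity
# may be positive, via x <= M*z and y <= M*(1 - z); this enforces x*y = 0.
c = np.array([-1.0, -1.0, 0.0])  # minimize -(x + y)
A = np.array([
    [1.0, 0.0, -M],   # x - M*z <= 0
    [0.0, 1.0,  M],   # y + M*z <= M
    [1.0, 1.0, 0.0],  # x + y   <= 4
])
res = milp(c=c, integrality=[0, 0, 1],
           bounds=Bounds([0, 0, 0], [np.inf, np.inf, 1]),
           constraints=LinearConstraint(A, -np.inf, [0.0, M, 4.0]))

x, y, z = res.x
print(x + y, round(x * y, 9))  # one of x, y is forced to zero by the binary z
```

The nonconvex bilinear condition x * y = 0 becomes two linear constraints plus one binary variable, which standard MILP solvers handle; the same device extends to the stationarity conditions embedded in an MPEC.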
Regulatory jurisdiction and policy coordination: A bi-level modeling approach for performance-based environmental policy
This study discusses important aspects of policy modeling based on a leader-follower game of policymakers. We specifically investigate non-cooperation between policymakers and the jurisdictional scope of regulation via bi-level programming. Performance-based environmental policy under the Clean Power Plan in the United States is chosen for our analysis. We argue that cooperation among policymakers is welfare-enhancing. Somewhat counterintuitively, full coordination among policymakers renders performance-based environmental policy redundant. We also find that distinct state-by-state regulation yields higher social welfare than broader regional regulation. This is because power producers can participate in a single power market even under state-by-state environmental regulation and arbitrage away the CO2 price differences by adjusting their generation across states. Numerical examples implemented for a stylized test network illustrate the theoretical findings.
Flood Impact Assessment Literature Review
This report is a literature review of flood damage approaches and models, with suggestions for model adaptation, including the report on assessed damages in the case study cities. The work described in this publication was supported by the European Community’s Seventh Framework Programme through the grant to the budget of CORFU (Collaborative Research on Flood Resilience in Urban Areas), Contract 244047.
A comparison of three dual drainage models: Shallow Water vs Local Inertial vs Diffusive Wave
This is the author accepted manuscript. The final version is available from IWA Publishing via the DOI in this record. In this study we compared three overland flow models: a full dynamic model (SWE), a local inertial equations model (GWM), and a diffusive wave model (PDWAVE). The three models are coupled with the same full dynamic sewer network model (SIPSON). We used the volume exchanged between the sewer and overland flow models, and the hydraulic head and discharge rates at the linked manholes, to evaluate differences between the models. For that purpose we developed a novel methodology based on an RGB scale. The test results of a real case study show close agreement between the coupled models in terms of flood extent, depth and exchanged volume, despite highly complex flows and geometries. The diffusive wave model gives slightly higher maximum flood depths and a slower propagation of the flood front when compared to the other two models. The local inertial model shows slightly higher depths downstream, as its wave front is slower than that of the fully dynamic model. Overall, the simplified overland models can produce results comparable to fully dynamic models at a lower computational cost. This research is partially funded by the FCT (Portuguese Foundation for Science and Technology) through the Doctoral Grant SFRH/BD/81869/2011, financed through the POPH/FSE program (Programa Operacional Potencial Humano/Fundo Social Europeu). This study had the support of the Portuguese Foundation for Science and Technology (FCT) Project UID/MAR/04292/2013 and the UK’s Natural Environment Research Council (NERC) Project Susceptibility of catchments to INTense RAinfall and flooding (SINATRA, NE/K008765/1).
Flood Damage Model Guidelines
This report outlines the framework for the damage model that should be applied in the project.
The intended readership of the report is project partners who will be assessing flood damage in the
different case study cities.
The model outlined in the report deals with direct tangible damage; indirect tangible and
intangible damage will be described in detail in other deliverables.
This report outlines the general principles that should be adhered to in assessing flood damage.
Recommendations are provided on the appropriate scale of modelling that should be adopted. The
report then goes on to outline the categories of assets that should be considered in assessing direct
tangible damage. The report also provides recommendations on the data that should be sought to
estimate the value of the assets at risk and the damage functions, which relate damage to the
characteristics of the flooding.
Finally, the report concludes with recommendations on how to estimate the Expected Annual
Damage.
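For orientation, the Expected Annual Damage is commonly estimated by integrating damage over annual exceedance probability. This minimal sketch applies the trapezoidal rule to hypothetical return periods and damages; the figures are illustrative assumptions, not values from the report.

```python
import numpy as np

# Hypothetical flood scenarios: return periods (years) and associated damages.
return_periods = np.array([2.0, 10.0, 50.0, 100.0])   # assumed, illustrative
damages = np.array([0.0, 1.5, 4.0, 6.0])              # damage in M EUR, assumed

aep = 1.0 / return_periods          # annual exceedance probability per scenario
order = np.argsort(aep)             # integrate over increasing probability
p, d = aep[order], damages[order]

# Trapezoidal integration of damage over exceedance probability gives the EAD.
ead = float(np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(p)))
print(round(ead, 3))  # expected annual damage, M EUR per year
```

Adding scenarios at more return periods refines the probability-damage curve and hence the EAD estimate; the rarest and most frequent damaging events bound the integration range.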
An appendix is included, which provides the technical details of the modelling tool that has been
developed on a trial site in Dhaka in Bangladesh. This tool can be applied on a GIS software
platform. The details of the algorithms have been provided so that they can be applied in different
software packages, if necessary. The tool will be updated as further progress is made. The work described in this publication was supported by the European Community’s Seventh Framework Programme through the grant to the budget of CORFU
Collaborative Research on Flood Resilience in Urban Areas, Contract 244047
A novel approach to flood risk assessment: the Exposure-Vulnerability matrices
The classical approach to flood defence, focused on reducing the probability of flooding through hard defences, has been gradually replaced by the flood risk management approach, which accepts the idea of coping with floods and aims at reducing both the probability and the consequences of flooding. In this view, the concept of vulnerability becomes central, as do the (non-structural) measures addressing it. However, the effectiveness and methods of non-structural measures, and vulnerability itself, are less studied than structural solutions. In this paper, we adopted the Longano catchment in Sicily, Italy, as the case study. The methodology developed in the work enabled a qualitative evaluation of the consequences of floods, based on a cross-analysis of vulnerability curves and classes of exposure for assets at risk. A GIS-based tool was used to evaluate each element at risk inside an Exposure-Vulnerability matrix. The construction of an E-V matrix allowed a better understanding of the actual situation within a catchment and of the effectiveness of non-structural measures at a site. Referring directly to vulnerability also makes it possible to estimate the consequences of an event even in catchments where damage data are absent. The proposed instrument can be useful for authorities responsible for the development and periodical review of adaptive flood risk management plans.
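An Exposure-Vulnerability matrix of the kind described above can be sketched as a simple class lookup; the class labels and qualitative consequence ratings below are illustrative assumptions, not the paper's calibrated matrix.

```python
# Hypothetical E-V matrix: consequence[exposure][vulnerability] maps each
# element at risk to a qualitative flood-consequence rating (assumed labels).
CONSEQUENCE = {
    "low":    {"low": "negligible", "medium": "minor",    "high": "moderate"},
    "medium": {"low": "minor",      "medium": "moderate", "high": "severe"},
    "high":   {"low": "moderate",   "medium": "severe",   "high": "severe"},
}

def rate_element(exposure, vulnerability):
    """Look up the qualitative consequence for one element at risk."""
    return CONSEQUENCE[exposure][vulnerability]

# Classify a few hypothetical elements at risk within the catchment.
elements = [("school", "high", "medium"), ("field", "low", "low")]
for name, exp_cls, vul_cls in elements:
    print(name, rate_element(exp_cls, vul_cls))
```

A non-structural measure that lowers an element's vulnerability class moves it to a milder cell of the matrix, which is how the effectiveness of such measures can be read off qualitatively.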