
    New priorities for climate science and climate economics in the 2020s

    Climate science and climate economics are critical sources of expertise in our pursuit of the Sustainable Development Goals. Effective use of this expertise requires a strengthening of its epistemic foundations and a renewed focus on more practical policy problems

    On the physics of three integrated assessment models

    Differing physical assumptions are embedded in an important class of integrated assessment models. Reverse-engineering a common description of their underlying physics facilitates inter-comparisons that separate economic and physical uncertainties. Integrated assessment models (IAMs) are the main tools for combining physical and economic analyses to develop and assess climate change policy. Policy makers have relied heavily on three IAMs in particular—DICE, FUND, and PAGE—when trying to balance the benefits and costs of climate action. Unpacking the physics of these IAMs accomplishes four things. Firstly, it reveals how the physics of these IAMs differ, and the extent to which those differences give rise to different visions of the human and economic costs of climate change. Secondly, it makes these IAMs more accessible to the scientific community and thereby invites further physical expertise into the IAM community so that economic assessments of climate change can better reflect the latest physical understanding of the climate system. Thirdly, it increases the visibility of the link between the physical sciences and the outcomes of policy assessments so that the scientific community can focus more sharply on those unresolved questions that loom largest in policy assessments. And finally, in making explicit the link between these IAMs and the underlying physical models, one gains the ability to translate between IAMs using a common physical language. This translation-key will allow multi-model policy assessments to run all three models with physically comparable baseline scenarios, enabling the economic sources of uncertainty to be isolated and facilitating a more informed debate about the most appropriate mitigation pathway
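
    As a purely illustrative sketch of the kind of reduced physics embedded in such IAMs (a generic two-box energy-balance model, not the calibration of DICE, FUND, or PAGE), the following Python snippet steps global temperature forward under a prescribed CO2 pathway; all parameter values are assumptions chosen for illustration.

```python
import numpy as np

# Illustrative two-box energy-balance model of the kind embedded in IAMs.
# All parameter values below are assumptions for illustration, not the
# calibration of DICE, FUND, or PAGE.
F2X = 3.7            # forcing from doubling CO2 (W m^-2), assumed
LAMBDA = F2X / 3.0   # feedback parameter for an assumed 3 K equilibrium sensitivity
C_UPPER = 8.2        # heat capacity of surface/upper-ocean box (W yr m^-2 K^-1), assumed
C_LOWER = 100.0      # heat capacity of deep-ocean box, assumed
GAMMA = 0.7          # heat exchange between boxes (W m^-2 K^-1), assumed
C_PRE = 278.0        # pre-industrial CO2 concentration (ppm)

def forcing(co2_ppm):
    """Radiative forcing (W m^-2) from a CO2 concentration in ppm."""
    return F2X * np.log2(co2_ppm / C_PRE)

def run_two_box(co2_path, dt=1.0):
    """Step the two-box model through a CO2 concentration pathway (ppm per year)."""
    t_upper, t_lower = 0.0, 0.0
    temps = []
    for co2 in co2_path:
        f = forcing(co2)
        # Energy balance of the surface/upper-ocean box
        d_upper = (f - LAMBDA * t_upper - GAMMA * (t_upper - t_lower)) / C_UPPER
        # Slow uptake of heat by the deep-ocean box
        d_lower = GAMMA * (t_upper - t_lower) / C_LOWER
        t_upper += dt * d_upper
        t_lower += dt * d_lower
        temps.append(t_upper)
    return np.array(temps)

# Example: 1% per year CO2 growth for 100 years
co2_path = C_PRE * 1.01 ** np.arange(100)
print(run_two_box(co2_path)[-1])  # surface warming (K) at year 100
```

    Running each IAM's physics from a physically comparable baseline scenario in this way is what allows the economic sources of uncertainty to be isolated from the physical ones.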

    Assessing pricing assumptions for weather index insurance in a changing climate

    Weather index insurance is being offered to low-income farmers in developing countries as an alternative to traditional multi-peril crop insurance. There is widespread support for index insurance as a means of climate change adaptation, but whether or not these products are themselves resilient to climate change has not been well studied. Given climate variability and climate change, an over-reliance on historical climate observations to guide the design of such products can result in premiums which mislead policyholders and insurers alike about the magnitude of underlying risks. Here, a method to incorporate different sources of climate data into the product design phase is presented. Bayesian Networks are constructed to demonstrate how insurers can assess product viability from a climate perspective, using past observations and simulations of future climate. Sensitivity analyses illustrate the dependence of pricing decisions on both the choice of information and the method for incorporating such data. The methods and their sensitivities are illustrated using a case study analysing the provision of index-based crop insurance in Kolhapur, India. We expose the benefits and limitations of the Bayesian Network approach, of weather index insurance as an adaptation measure, and of climate simulations as a source of quantitative predictive information. Current climate model output is shown to be of limited value and difficult to use by index insurance practitioners. The method presented, however, is shown to be an effective tool for testing pricing assumptions and could feasibly be employed in the future to incorporate multiple sources of climate data
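
    As a hedged illustration of why the choice of climate information matters for pricing (a simplified pure-premium calculation, not the paper's Bayesian Network method), the sketch below prices a hypothetical rainfall-index contract first from historical observations and then from an assumed, drier model ensemble; the strike, exit, and rainfall numbers are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical contract: the payout is triggered when seasonal rainfall falls
# below a strike, rising linearly to the full sum insured at an exit point.
STRIKE = 500.0        # mm, assumed trigger
EXIT = 300.0          # mm, assumed point of maximum payout
MAX_PAYOUT = 10000.0  # assumed sum insured (currency units)

def payout(rainfall_mm):
    """Linear payout between strike and exit, capped at the sum insured."""
    shortfall = np.clip((STRIKE - rainfall_mm) / (STRIKE - EXIT), 0.0, 1.0)
    return MAX_PAYOUT * shortfall

def pure_premium(rainfall_samples):
    """Expected payout under a given view of seasonal rainfall."""
    return payout(np.asarray(rainfall_samples)).mean()

# Two views of the same risk: historical observations versus an assumed
# climate-model ensemble with a drying trend. Both are synthetic here.
historical = rng.normal(loc=650.0, scale=120.0, size=30)    # 30 past seasons
projected = rng.normal(loc=580.0, scale=150.0, size=1000)   # model ensemble

print("Premium from observations only:", round(pure_premium(historical), 2))
print("Premium including projections: ", round(pure_premium(projected), 2))
```

    The gap between the two premiums is the kind of pricing sensitivity to the choice and treatment of climate data that the sensitivity analyses in the paper are designed to expose.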

    An assessment of the foundational assumptions in high-resolution climate projections: the case of UKCP09

    The United Kingdom Climate Impacts Programme’s UKCP09 project makes high-resolution projections of the climate out to 2100 by post-processing the outputs of a large-scale global climate model. The aim of this paper is to describe and analyse the methodology used and then urge some caution. Given the acknowledged systematic, shared shortcomings in all current climate models, treating model outputs as decision-relevant projections can be significantly misleading. In extrapolatory situations, such as projections of future climate change impacts, there is little reason to expect that post-processing of model outputs can correct for the consequences of such errors. This casts doubt on our ability, today, to make trustworthy, high-resolution probabilistic projections out to the end of this century

    The evolution of a non-autonomous chaotic system under non-periodic forcing: a climate change example

    Complex Earth System Models are widely utilised to make conditional statements about the future climate under some assumptions about changes in future atmospheric greenhouse gas concentrations; these statements are often referred to as climate projections. The models themselves are high-dimensional nonlinear systems and it is common to discuss their behaviour in terms of attractors and low-dimensional nonlinear systems such as the canonical Lorenz '63 system. In a non-autonomous situation, for instance due to anthropogenic climate change, the relevant object is sometimes considered to be the pullback or snapshot attractor. The pullback attractor, however, is a collection of all plausible states of the system at a given time; it therefore does not take into consideration our knowledge of the current state of the Earth System when making climate projections, and is consequently not very informative regarding annual to multi-decadal climate projections. In this article, we approach the problem of measuring and interpreting the mid-term climate of a model by using a low-dimensional, climate-like, nonlinear system with three timescales of variability and non-periodic forcing. We introduce the concept of an evolution set, which is dependent on the starting state of the system, and explore its links to different types of initial condition uncertainty and the rate of external forcing. We define the convergence time as the time that it takes for the distribution of one of the dependent variables to lose memory of its initial conditions. We suspect a connection between convergence times and the classical concept of mixing times, but the precise nature of this connection needs to be explored. These results have implications for the design of influential climate and Earth System Model ensembles, and raise a number of issues of mathematical interest. The model output data used in this study is freely available on Zenodo: https://doi.org/10.5281/zenodo.836802
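
    A minimal numerical sketch of the evolution-set idea (using the standard Lorenz '63 equations with an assumed linear ramp in rho standing in for non-periodic forcing, not the paper's three-timescale model) integrates an ensemble of tightly clustered initial conditions and reads off a crude convergence time from when the ensemble spread saturates.

```python
import numpy as np

def lorenz_rhs(states, t, sigma=10.0, beta=8.0 / 3.0):
    """Lorenz '63 right-hand side for an (n, 3) ensemble, with a slowly
    drifting rho as an assumed stand-in for non-periodic external forcing."""
    x, y, z = states[:, 0], states[:, 1], states[:, 2]
    rho = 28.0 + 0.01 * t  # assumed linear, non-periodic forcing ramp
    return np.column_stack([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(states, t, dt):
    """One classical fourth-order Runge-Kutta step for the whole ensemble."""
    k1 = lorenz_rhs(states, t)
    k2 = lorenz_rhs(states + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = lorenz_rhs(states + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = lorenz_rhs(states + dt * k3, t + dt)
    return states + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# Ensemble of initial conditions tightly clustered around one state,
# mimicking (imperfect) knowledge of the current state of the system.
rng = np.random.default_rng(0)
n_members, dt, n_steps = 200, 0.01, 3000
states = np.array([1.0, 1.0, 1.0]) + 1e-3 * rng.standard_normal((n_members, 3))

spread = np.empty(n_steps)
for i in range(n_steps):
    states = rk4_step(states, i * dt, dt)
    spread[i] = states[:, 0].std()  # ensemble spread of the x variable

# Crude convergence-time estimate: once the spread reaches most of its
# saturated value, the x distribution has effectively lost memory of the
# initial condition.
saturation = spread[-500:].mean()
print("approximate convergence time:", dt * np.argmax(spread > 0.9 * saturation))
```

    The spread of a single variable is only a rough proxy here; the convergence time defined in the article refers to the full distribution of a dependent variable losing memory of its initial conditions.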

    The missing risks of climate change

    The risks of climate change are enormous, threatening the lives and livelihoods of millions to billions. The economic consequences of many of the complex risks associated with climate change cannot, however, currently be quantified. We argue that these unquantified, poorly understood, and often deeply uncertain risks can and should be included in economic evaluations and decision-making processes. We present an overview of these unquantified risks and an ontology of them founded on the reasons behind their lack of robust evaluation. These consist of risks missing due to (a) delays in sharing knowledge and expertise across disciplines, (b) spatial and temporal variations of climate impacts, (c) feedbacks and interactions between risks, (d) deep uncertainty in our knowledge, and (e) currently unidentified risks. We highlight collaboration needs within and between the natural and social science communities to address these gaps. We also provide an approach for integrating assessments or speculations of these risks in a way which accounts for interdependencies, avoids double counting, and makes assumptions clear. Multiple paths exist for engaging with these missing risks, with both model-based quantification and non-model-based qualitative assessments playing crucial roles

    Limits to the quantification of local climate change

    We demonstrate how the fundamental timescales of anthropogenic climate change limit the identification of societally relevant aspects of changes in precipitation. We show that it is nevertheless possible to extract, solely from observations, some confident quantified assessments of change at certain thresholds and locations. Maps of such changes, for a variety of hydrologically-relevant, threshold-dependent metrics, are presented. In places in Scotland, for instance, the total precipitation on heavy rainfall days in winter has increased by more than 50%, but only in some locations has this been accompanied by a substantial increase in total seasonal precipitation; an important distinction for water and land management. These results are important for the presentation of scientific data by climate services, as a benchmark requirement for models which are used to provide projections on local scales, and for process-based climate and impacts research to understand local modulation of synoptic and global scale climate. They are a critical foundation for adaptation planning and for the scientific provision of locally relevant information about future climate
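
    As a hedged sketch of one such threshold-dependent metric (not the paper's exact procedure or data), the snippet below compares the mean total precipitation falling on winter days above an assumed "heavy rainfall" threshold between two thirty-year periods of a synthetic daily series.

```python
import numpy as np
import pandas as pd

def winter_heavy_day_total(daily, heavy_threshold_mm=10.0):
    """Total precipitation on winter (DJF) days exceeding an assumed threshold,
    per calendar year (DJF grouped by calendar year for simplicity)."""
    winter = daily[daily.index.month.isin([12, 1, 2])]
    heavy = winter[winter >= heavy_threshold_mm]
    return heavy.groupby(heavy.index.year).sum()

# Synthetic daily precipitation series standing in for station observations.
rng = np.random.default_rng(1)
dates = pd.date_range("1961-01-01", "2020-12-31", freq="D")
precip = pd.Series(rng.gamma(shape=0.6, scale=6.0, size=len(dates)), index=dates)

early = winter_heavy_day_total(precip["1961":"1990"]).mean()
late = winter_heavy_day_total(precip["1991":"2020"]).mean()
print(f"Change in winter heavy-day total: {100 * (late - early) / early:+.1f}%")
```

    Applied to station observations rather than synthetic data, the same comparison separates a change in heavy-day totals from a change in total seasonal precipitation, the distinction the abstract flags as important for water and land management.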

    Assessing the quality of regional climate information

    There are now a plethora of data, models, and approaches available to produce regional and local climate information intended to inform adaptation to a changing climate. There is, however, no framework to assess the quality of these data, models, and approaches that takes into account the issues that arise when this information is produced. An evaluation of the quality of regional climate information is a fundamental requirement for its appropriate application in societal decision-making. Here, an analytical framework is constructed for the quality assessment of science-based statements and estimates about future climate. This framework targets statements that project local and regional climate at decadal and longer time scales. After identifying the main issues with evaluating and presenting regional climate information, it is argued that it is helpful to consider the quality of statements about future climate in terms of 1) the type of evidence and 2) the relationship between the evidence and the statement. This distinction not only provides a more targeted framework for quality, but also shows how certain evidential standards can change as a function of the statement under consideration. The key dimensions to assess regional climate information quality are diversity, completeness, theory, adequacy for purpose, and transparency. This framework is exemplified using two research papers that provide regional climate information and the implications of the framework are explored

    Assessing the quality of state-of-the-art regional climate information: the case of the UK Climate Projections 2018

    In this paper, we assess the quality of state-of-the-art regional climate information intended to support climate adaptation decision-making. We use the UK Climate Projections 2018 as an example of such information. Their probabilistic, global, and regional land projections exemplify some of the key methodologies that are at the forefront of constructing regional climate information for decision support in adapting to a changing climate. We assess the quality of the evidence and the methodology used to support their statements about future regional climate along six quality dimensions: transparency; theory; independence, number, and comprehensiveness of evidence; and historical empirical adequacy. The assessment produced two major insights. First, a major issue that taints the quality of UKCP18 is the lack of transparency, which is particularly problematic since the information is directed towards non-expert users who would need to develop technical skills to evaluate the quality and epistemic reliability of this information. Second, the probabilistic projections are of lower quality than the global projections because the former lack both transparency and a theory underpinning the method used to produce quantified uncertainty estimates about future climate. The assessment also shows how different dimensions are satisfied depending on the evidence used, the methodology chosen to analyze the evidence, and the type of statements that are constructed in the different strands of UKCP18. This research highlights the importance of knowledge quality assessment of regional climate information that intends to support climate change adaptation decisions