Quantifying the effect of interannual ocean variability on the attribution of extreme climate events to human influence
In recent years, the climate change research community has become highly
interested in describing the anthropogenic influence on extreme weather events,
commonly termed "event attribution." Limitations in the observational record
and in computational resources motivate the use of uncoupled,
atmosphere/land-only climate models with prescribed ocean conditions run over a
short period, leading up to and including an event of interest. In this
approach, large ensembles of high-resolution simulations can be generated under
factual observed conditions and counterfactual conditions that might have been
observed in the absence of human interference; these can be used to estimate
the change in probability of the given event due to anthropogenic influence.
However, using a prescribed ocean state ignores the possibility that estimates
of attributable risk might be a function of the ocean state. Thus, the
uncertainty in attributable risk is likely underestimated, implying an
over-confidence in anthropogenic influence.
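The factual/counterfactual comparison described above can be sketched with synthetic ensembles. Everything here is a hypothetical stand-in for real model output: the Gaussian distributions, the +1.0 anthropogenic shift, the event threshold, and the ensemble size are illustrative choices, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical ensembles of an event-relevant variable (e.g., a seasonal-mean
# temperature anomaly) under factual and counterfactual conditions.
factual = rng.normal(loc=1.0, scale=1.0, size=400)         # with human influence
counterfactual = rng.normal(loc=0.0, scale=1.0, size=400)  # world without it

threshold = 2.0  # magnitude of the observed extreme event

# Probability of exceeding the event threshold in each world.
p1 = np.mean(factual > threshold)
p0 = np.mean(counterfactual > threshold)

# Risk ratio (probability ratio) and fraction of attributable risk (FAR).
rr = p1 / p0
far = 1.0 - p0 / p1
print(f"p1={p1:.3f}  p0={p0:.3f}  RR={rr:.2f}  FAR={far:.2f}")
```

With the factual distribution shifted warm, the factual exceedance probability is larger, the risk ratio exceeds one, and the FAR is positive.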
In this work, we estimate the year-to-year variability in calculations of the
anthropogenic contribution to extreme weather based on large ensembles of
atmospheric model simulations. Our results both quantify the magnitude of
year-to-year variability and categorize the degree to which conclusions of
attributable risk are qualitatively affected. The methodology is illustrated by
exploring extreme temperature and precipitation events for the northwest coast
of South America and northern-central Siberia; we also provide results for
regions around the globe. While it remains preferable to perform a full
multi-year analysis, the results presented here can serve as an indication of
where and when attribution researchers should be concerned about the use of
atmosphere-only simulations.
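One way to picture the year-to-year sensitivity the work quantifies is to let a hypothetical ocean-state (SST) anomaly shift both worlds each year and recompute the risk ratio. The distributions, shift magnitudes, and ensemble sizes below are illustrative assumptions, not results from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
threshold = 2.0   # magnitude of the hypothetical extreme event
n_members = 500   # ensemble size per world, per year

risk_ratios = []
# A made-up SST anomaly shifts both worlds each "year"; the anthropogenic
# signal is a fixed +1.0 on top of it.
for sst_shift in rng.normal(0.0, 0.5, size=15):
    factual = rng.normal(1.0 + sst_shift, 1.0, n_members)
    counterfactual = rng.normal(sst_shift, 1.0, n_members)
    p1 = np.mean(factual > threshold)
    p0 = np.mean(counterfactual > threshold)
    if p0 > 0:  # risk ratio undefined when no counterfactual exceedances
        risk_ratios.append(p1 / p0)

rr = np.array(risk_ratios)
print(f"risk ratio across years: min {rr.min():.1f}, "
      f"median {np.median(rr):.1f}, max {rr.max():.1f}")
```

Even with an identical anthropogenic shift every year, the estimated risk ratio varies substantially with the prescribed ocean state, which is the source of uncertainty the abstract highlights.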
The challenge to detect and attribute effects of climate change on human and natural systems
© The Author(s), 2013. This article is distributed under the terms of the Creative Commons Attribution License. The definitive version was published in Climatic Change 121 (2013): 381-395, doi:10.1007/s10584-013-0873-6.
Anthropogenic climate change has triggered impacts on natural and human systems world-wide, yet the formal scientific method of detection and attribution of such impacts has been only insufficiently described. Detection and attribution of impacts of climate change is a fundamentally cross-disciplinary issue, involving concepts, terms, and standards that span the requirements of the disciplines involved. Key problems for current assessments include the limited availability of long-term observations, limited knowledge of the processes and mechanisms involved in changing environmental systems, and the widely differing concepts applied in the scientific literature. To facilitate current and future assessments, this paper describes the current conceptual framework of the field and outlines a number of conceptual challenges. Based on this, it proposes workable cross-disciplinary definitions, concepts, and standards. The paper is specifically intended to serve as a baseline for continued development of a consistent cross-disciplinary framework that will facilitate integrated assessment of the detection and attribution of climate change impacts.
...Modeling Program of the Office of Biological and Environmental Research in the Department of Energy Office of Science under contract number DE-AC02-05CH11231. GH was supported by a grant from the German Ministry for Education and Research.
The Detection and Attribution Model Intercomparison Project (DAMIP v1.0) contribution to CMIP6
Detection and attribution (D&A) simulations were important components of CMIP5 and underpinned the climate change detection and attribution assessments of the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. The primary goals of the Detection and Attribution Model Intercomparison Project (DAMIP) are to facilitate improved estimation of the contributions of anthropogenic and natural forcing changes to observed global warming, as well as to observed global and regional changes in other climate variables; to contribute to the estimation of how historical emissions have altered, and are altering, contemporary climate risk; and to facilitate improved observationally constrained projections of future climate change. D&A studies typically require unforced control simulations and historical simulations including all major anthropogenic and natural forcings. Such simulations will be carried out as part of the DECK and the CMIP6 historical simulation. In addition, D&A studies require simulations covering the historical period driven by individual forcings or subsets of forcings only; such simulations are proposed here. Key novel features of the experimental design presented here are, firstly, new historical simulations with aerosols-only, stratospheric-ozone-only, CO2-only, solar-only, and volcanic-only forcing, facilitating improved estimation of the climate response to individual forcings; secondly, future single-forcing experiments, allowing observationally constrained projections of future climate change; and thirdly, an experimental design that allows models with and without coupled atmospheric chemistry to be compared on an equal footing.
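D&A studies of this kind commonly regress observations onto model-derived response patterns ("fingerprints") from single-forcing runs and read off scaling factors for each forcing. The sketch below uses ordinary least squares with made-up fingerprint time histories; real studies use optimal fingerprinting, which additionally weights by an estimate of internal-variability covariance. All shapes and amplitudes here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_years = 100
t = np.linspace(0.0, 1.0, n_years)

# Made-up "fingerprints": time histories of the response to each forcing,
# as if taken from hypothetical single-forcing simulations.
ghg = 1.2 * t**2                       # greenhouse gases: accelerating warming
aer = -0.4 * t                         # aerosols: steady cooling
nat = 0.1 * np.sin(2 * np.pi * 5 * t)  # solar/volcanic: quasi-periodic wiggle

# Pseudo-observations: each true scaling factor is 1, plus internal variability.
obs = ghg + aer + nat + rng.normal(0.0, 0.02, n_years)

# Ordinary least squares for the scaling factors beta.
X = np.column_stack([ghg, aer, nat])
beta, *_ = np.linalg.lstsq(X, obs, rcond=None)
print("estimated scaling factors:", np.round(beta, 2))
```

A scaling factor consistent with one means the observed change matches the modelled response to that forcing; single-forcing experiments like those DAMIP proposes are what make the columns of the regression separable.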
Statistical Projections for Multi-resolution, Multi-dimensional Visual Data Exploration and Analysis
An ongoing challenge in the visual exploration and analysis of large,
multi-dimensional datasets is how to present useful, concise
information to a user for specific visualization tasks. Typical
approaches to this problem produce reduced-resolution versions of the
data, projections of the data, or both. These approaches still have
limitations, such as high computational cost or loss of accuracy. In
this work, we explore the use of a statistical metric as the basis for
both projections and reduced-resolution versions of data, with a
particular focus on preserving one key trait in the data, namely
variation. We use two different case studies to explore this idea: one
that uses a synthetic dataset, and another that uses a large ensemble
collection produced by an atmospheric modeling code to study long-term
changes in global precipitation. The primary finding of our work is
that a statistical measure preserves the variation signal inherent in
the data more faithfully, across both multi-dimensional projections
and multi-resolution representations, than a methodology based upon
averaging.
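The averaging-versus-statistical-measure contrast can be illustrated on a synthetic 1-D field. Here the per-block standard deviation stands in for the paper's statistical metric, which is an assumption on our part; the field, block size, and noise amplitude are likewise made up:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic 1-D field: a smooth large-scale signal plus fine-scale
# variation of known amplitude (0.3).
n, block = 4096, 16
x = np.linspace(0.0, 1.0, n)
data = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=n)

blocks = data.reshape(-1, block)

# Two reduced-resolution representations, one value per block:
means = blocks.mean(axis=1)   # averaging-based reduction
stds = blocks.std(axis=1)     # per-block variation statistic

# The block means alone say nothing about variation inside a block;
# the variation statistic recovers the fine-scale amplitude.
print("fine-scale amplitude by construction: 0.300")
print(f"recovered from the variation statistic: {stds.mean():.3f}")
```

The averaging-based reduction keeps the large-scale signal but discards within-block variation entirely, whereas carrying a variation statistic alongside (or instead) retains exactly the trait the paper aims to preserve.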