There is increasingly clear evidence that human influence has contributed significantly to the large-scale climatic changes that have occurred over the past few decades. Attention is now turning to the physical implications of the emerging anthropogenic signal. Of particular interest is the question of whether current climate models may be over- or under-estimating the amplitude of the climate system's response to external forcing, including anthropogenic forcing. Evidence of a significant error in a model-simulated response amplitude would indicate the existence of amplifying or damping mechanisms that are inadequately represented in the model. The range of uncertainty in the factor by which we can scale model-simulated changes while remaining consistent with observed change provides an estimate of uncertainty in model-based predictions. With any model that displays a realistic level of internal variability, the problem of estimating this factor is complicated by the fact that it represents a ratio between two incompletely known quantities: both observed and simulated responses are subject to sampling uncertainty, primarily due to internal chaotic variability. Sampling uncertainty in the simulated response can be reduced, but not eliminated, through ensemble simulations. Accurate estimation of these scaling factors requires a modification of the standard "optimal fingerprinting" algorithm for climate change detection, drawing on the conventional "total least squares" approach discussed in the statistical literature. Code for both variants of optimal fingerprinting can be found at http://www.climateprediction.net/detection.
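The core statistical point above is that because both the observed and the model-simulated responses are noisy, an ordinary-least-squares regression of observations on the simulated pattern is biased low, whereas total least squares (TLS) accounts for noise in both variables. The following is a minimal sketch of a TLS estimate of a single scaling factor, written against synthetic data; the function name, variable names, and noise model are illustrative assumptions, not taken from the code at the URL above.

```python
import numpy as np

def tls_scaling_factor(x, y):
    """Total-least-squares estimate of beta in y ~ beta * x, where both
    x (e.g. a model-simulated response pattern) and y (e.g. observed
    change) carry sampling noise of comparable variance.

    The right singular vector of the augmented matrix [x | y] associated
    with the smallest singular value is orthogonal to the best-fit line
    through the noisy (x, y) cloud; its components give the TLS slope.
    """
    a = np.column_stack([x, y])
    _, _, vt = np.linalg.svd(a, full_matrices=False)
    v = vt[-1]                 # singular vector for the smallest singular value
    return -v[0] / v[1]        # slope of the TLS line

# Synthetic check: true scaling factor 1.5, equal noise on both variables.
rng = np.random.default_rng(0)
signal = rng.normal(size=500)
x = signal + 0.3 * rng.normal(size=500)          # noisy "simulated" response
y = 1.5 * signal + 0.3 * rng.normal(size=500)    # noisy "observed" change
print(tls_scaling_factor(x, y))                  # close to 1.5; OLS is biased low
```

The SVD-based formula above assumes equal error variances in the two variables; in the optimal-fingerprinting setting this is arranged by prewhitening both series with an estimate of the internal-variability covariance before regression.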