14 research outputs found

    Towards a more reliable historical reanalysis: improvements for version 3 of the Twentieth Century Reanalysis system

    Historical reanalyses that span more than a century are needed for a wide range of studies, from understanding large‐scale climate trends to diagnosing the impacts of individual historical extreme weather events. The Twentieth Century Reanalysis (20CR) Project is an effort to fill this need. It is supported by the National Oceanic and Atmospheric Administration (NOAA), the Cooperative Institute for Research in Environmental Sciences (CIRES), and the U.S. Department of Energy (DOE), and is facilitated by collaboration with the international Atmospheric Circulation Reconstructions over the Earth initiative. 20CR is the first ensemble of sub‐daily global atmospheric conditions spanning over 100 years. This provides a best estimate of the weather at any given place and time as well as an estimate of its confidence and uncertainty. While extremely useful, version 2c of this dataset (20CRv2c) has several significant issues, including inaccurate estimates of confidence and a global sea level pressure bias in the mid‐19th century. These and other issues can reduce its effectiveness for studies at many spatial and temporal scales. Therefore, the 20CR system underwent a series of developments to generate a significant new version of the reanalysis. The version 3 system (NOAA‐CIRES‐DOE 20CRv3) uses upgraded data assimilation methods including an adaptive inflation algorithm; has a newer, higher‐resolution forecast model that specifies dry air mass; and assimilates a larger set of pressure observations. These changes have improved the ensemble‐based estimates of confidence, removed spin‐up effects in the precipitation fields, and diminished the sea‐level pressure bias. Other improvements include more accurate representations of storm intensity, smaller errors, and large‐scale reductions in model bias. The 20CRv3 system is comprehensively reviewed, focusing on the aspects that have ameliorated issues in 20CRv2c. Despite the many improvements, some challenges remain, including a systematic bias in tropical precipitation and time‐varying biases in southern high‐latitude pressure fields.
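The "best estimate plus confidence" structure of an ensemble product like 20CR can be illustrated in a few lines. This is a toy sketch with synthetic numbers, not the 20CR system itself: the point is only that the ensemble mean supplies the estimate and the ensemble spread supplies the uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for an ensemble of sea level pressure (hPa) at one
# grid point and time; the member count and values are synthetic.
n_members = 80
ensemble = 1013.0 + rng.normal(0.0, 2.5, size=n_members)

# The "best estimate" is the ensemble mean; the ensemble standard deviation
# (spread) serves as the confidence/uncertainty estimate.
best_estimate = ensemble.mean()
spread = ensemble.std(ddof=1)

print(f"best estimate: {best_estimate:.1f} hPa, spread: {spread:.1f} hPa")
```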

    Accounting for changing temperature patterns increases historical estimates of climate sensitivity

    Eight atmospheric general circulation models (AGCMs) are forced with observed historical (1871–2010) monthly sea surface temperature and sea ice variations using the Atmospheric Model Intercomparison Project II data set. The AGCMs therefore have a similar temperature pattern and trend to that of observed historical climate change. The AGCMs simulate a spread in climate feedback similar to that seen in coupled simulations of the response to CO2 quadrupling. However, the feedbacks are robustly more stabilizing and the effective climate sensitivity (EffCS) smaller. This is due to a pattern effect, whereby the pattern of observed historical sea surface temperature change gives rise to more negative cloud and longwave clear‐sky feedbacks. Assuming the patterns of long‐term temperature change simulated by models, and the radiative response to them, are credible, this implies that existing constraints on EffCS from historical energy budget variations give values that are too low and overly constrained, particularly at the upper end. For example, the pattern effect increases the long‐term Otto et al. (2013, https://doi.org/10.1038/ngeo1836) EffCS median and 5–95% confidence interval from 1.9 K (0.9–5.0 K) to 3.2 K (1.5–8.1 K).
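The energy-budget logic behind the quoted shift can be made concrete with the Otto et al. (2013) estimator, EffCS = F_2x·ΔT/(ΔF − ΔN). In the sketch below the budget terms are rounded illustrative values, and `dlam` is a hypothetical pattern-effect feedback correction chosen purely for demonstration, not a number from the paper.

```python
# Illustrative only: F_2x and the historical energy-budget terms are rounded
# textbook-style values; dlam is a hypothetical pattern-effect correction.
F_2x = 3.7    # forcing from CO2 doubling, W m^-2
dT = 0.75     # historical warming, K
dF = 1.95     # historical forcing change, W m^-2
dN = 0.50     # change in top-of-atmosphere imbalance, W m^-2

# Energy-budget estimate: EffCS = F_2x * dT / (dF - dN)
lam_hist = (dF - dN) / dT          # historical feedback, W m^-2 K^-1
effcs_hist = F_2x / lam_hist

# A pattern effect makes historical feedbacks more stabilizing than the
# long-term response; subtracting a correction dlam raises the implied EffCS.
dlam = 0.78                        # hypothetical correction, W m^-2 K^-1
effcs_longterm = F_2x / (lam_hist - dlam)

print(f"historical EffCS: {effcs_hist:.1f} K, "
      f"pattern-adjusted: {effcs_longterm:.1f} K")
```

With these illustrative inputs the two results land near the medians quoted in the abstract, showing how a modest feedback correction translates into a large sensitivity change because EffCS is inversely proportional to the feedback.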

    A quantification of uncertainties in historical tropical tropospheric temperature trends from radiosondes

    The consistency of tropical tropospheric temperature trends with climate model expectations remains contentious. A key limitation is that the uncertainties in observations from radiosondes are both substantial and poorly constrained. We present a thorough uncertainty analysis of radiosonde‐based temperature records. This uses an automated homogenization procedure and a previously developed set of complex error models where the answer is known a priori. We perform a number of homogenization experiments in which error models are used to provide uncertainty estimates of real‐world trends. These estimates are relatively insensitive to a variety of processing choices. Over 1979–2003, the satellite‐equivalent tropical lower tropospheric temperature trend has likely (5–95% confidence range) been between −0.01 K/decade and 0.19 K/decade (0.05–0.23 K/decade over 1958–2003) with a best estimate of 0.08 K/decade (0.14 K/decade). This range includes both available satellite data sets and estimates from models (based upon scaling their tropical amplification behavior by observed surface trends). On an individual pressure level basis, agreement between models, theory, and observations within the troposphere is uncertain over 1979 to 2003 and nonexistent above 300 hPa. Analysis of 1958–2003, however, shows consistent model‐data agreement in tropical lapse rate trends at all levels up to the tropical tropopause, so the disagreement in the more recent period is not necessarily evidence of a general problem in simulating long‐term global warming. Other possible reasons for the discrepancy since 1979 are: observational errors beyond those accounted for here, end‐point effects, inadequate decadal variability in model lapse rates, or neglected climate forcings.
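A trend-with-confidence-range statement of this kind (a best estimate plus a 5–95% range) can be sketched with an ordinary least-squares fit. The series below is synthetic, seeded with the paper's 0.08 K/decade best estimate for illustration, and the normal approximation to the 5–95% interval is a simplification of a full uncertainty analysis.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for a tropical lower-tropospheric temperature series,
# 1979-2003: a 0.08 K/decade trend plus white noise.
years = np.arange(1979, 2004)
true_trend = 0.008                       # K/yr = 0.08 K/decade
temps = true_trend * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

# Ordinary least-squares trend and its standard error.
x = years - years.mean()
slope = (x * (temps - temps.mean())).sum() / (x ** 2).sum()
resid = temps - temps.mean() - slope * x
se = np.sqrt((resid ** 2).sum() / (years.size - 2) / (x ** 2).sum())

# Approximate 5-95% range (normal approximation, z = 1.645), in K/decade.
lo, hi = 10 * (slope - 1.645 * se), 10 * (slope + 1.645 * se)
print(f"trend: {10 * slope:.2f} K/decade, 5-95% range: [{lo:.2f}, {hi:.2f}]")
```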

    Short Communication How do we tell which estimates of past climate change are correct?

    Estimates of past climate change often involve teasing small signals from imperfect instrumental or proxy records. Success is often evaluated on the basis of the spatial or temporal consistency of the resulting reconstruction, or on the apparent prediction error on small space and time scales. However, inherent methodological trade-offs illustrated here can cause climate signal accuracy to be unrelated, or even inversely related, to such performance measures. This is a form of the classic conflict in statistics between minimum variance and unbiased estimators. Comprehensive statistical simulations based on climate model output are probably the best way to reliably assess whether methods of reconstructing climate from sparse records, such as radiosondes or paleoclimate proxies, actually work on longer time scales.
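The minimum-variance versus unbiased-estimator conflict the abstract invokes can be demonstrated with a toy simulation (not from the paper): a shrinkage estimator scores better on variance-based measures while systematically misestimating the long-term signal.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy illustration: recover a true signal of 1.0 from short noisy records,
# comparing an unbiased estimator with a lower-variance shrinkage estimator.
true_signal = 1.0
n_trials, n_obs = 20000, 5
data = true_signal + rng.normal(0.0, 2.0, size=(n_trials, n_obs))

unbiased = data.mean(axis=1)          # sample mean: unbiased, high variance
shrunk = 0.5 * unbiased               # shrinkage: lower variance, biased low

bias_u, var_u = unbiased.mean() - true_signal, unbiased.var()
bias_s, var_s = shrunk.mean() - true_signal, shrunk.var()

# The shrinkage estimator "looks better" on variance-based performance
# measures even though its long-term signal estimate is systematically wrong.
print(f"unbiased: bias={bias_u:+.2f}, var={var_u:.2f}")
print(f"shrunk:   bias={bias_s:+.2f}, var={var_s:.2f}")
```

This is the trade-off the abstract warns about: evaluating a reconstruction only by its small-scale consistency or variance rewards the biased estimator.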

    An Analysis of Tropospheric Humidity Trends from Radiosondes

    A new analysis of historical radiosonde humidity observations is described. An assessment of both known and unknown instrument and observing practice changes has been conducted to assess their impact on bias and uncertainty in long-term trends. The processing of the data includes interpolation of data to address known sampling bias from missing dry day and cold temperature events, a first-guess adjustment for known radiosonde model changes, and a more sophisticated ensemble of estimates based on 100 neighbor-based homogenizations. At each stage the impact and uncertainty of the process has been quantified. The adjustments remove an apparent drying over Europe and parts of Asia and introduce greater consistency between temperature and specific humidity trends from day and night observations. Interannual variability and trends at the surface are shown to be in good agreement with independent in situ datasets, although some steplike discrepancies are apparent between the time series of relative humidity at the surface. Adjusted trends, accounting for documented and undocumented break points and their uncertainty, across the extratropical Northern Hemisphere lower and midtroposphere show warming of 0.1–0.4 K decade−1 and moistening on the order of 1%–5% decade−1 since 1970. There is little or no change in the observed relative humidity in the same period, consistent with climate model expectation of a positive water vapor feedback in the extratropics with near-constant relative humidity.
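A single neighbor-based adjustment of the kind such an ensemble builds on can be sketched as follows. The series, break size, and break location here are hypothetical, and the real procedure handles undocumented breaks, many neighbors, and 100 ensemble realizations.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical sketch: a candidate series has a -1.0 step at a documented
# instrument change; a neighbor composite shares the climate signal.
n = 120                                         # months
climate = np.cumsum(rng.normal(0.0, 0.05, n))   # shared climate variability
candidate = climate + rng.normal(0.0, 0.1, n)
break_idx = 60
candidate[break_idx:] -= 1.0                    # instrument-change step
neighbors = climate + rng.normal(0.0, 0.05, n)  # homogeneous composite

# The candidate-minus-neighbors difference series isolates the break from
# shared climate variability; the adjustment is the shift in its mean
# across the documented break point.
diff = candidate - neighbors
adjustment = diff[:break_idx].mean() - diff[break_idx:].mean()
adjusted = candidate.copy()
adjusted[break_idx:] += adjustment

print(f"estimated step: {adjustment:.2f}")
```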

    Critically Reassessing Tropospheric Temperature Trends from Radiosondes Using Realistic Validation Experiments

    Biases and uncertainties in large-scale radiosonde temperature trends in the troposphere are critically reassessed. Realistic validation experiments are performed on an automatic radiosonde homogenization system by applying it to climate model data with four distinct sets of simulated breakpoint profiles. Knowledge of the “truth” permits a critical assessment of the ability of the system to recover the large-scale trends and a reinterpretation of the results when applied to the real observations. The homogenization system consistently reduces the bias in the daytime tropical, global, and Northern Hemisphere (NH) extratropical trends but underestimates the full magnitude of the bias. Southern Hemisphere (SH) extratropical and all nighttime trends were less well adjusted owing to the sparsity of stations. The ability to recover the trends is dependent on the underlying error structure, and the true trend does not necessarily lie within the range of estimates. The implications are that tropical tropospheric trends in the unadjusted daytime radiosonde observations, and in many current upper-air datasets, are biased cold, but the degree of this bias cannot be robustly quantified. Therefore, remaining biases in the radiosonde temperature record may account for the apparent tropical lapse rate discrepancy between radiosonde data and climate models. Furthermore, the authors find that the unadjusted global and NH extratropical tropospheric trends are biased cold in the daytime radiosonde observations. Finally, observing system experiments show that, if the Global Climate Observing System (GCOS) Upper Air Network (GUAN) were to make climate quality observations adhering to the GCOS monitoring principles, then one would be able to constrain the uncertainties in trends at a more comprehensive set of stations. This reaffirms the importance of running GUAN under the GCOS monitoring principles.
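The validation idea, applying the system to data where the truth is known, can be illustrated with the simplest possible error model: insert step changes of known size into a series with a known trend and measure the bias they induce. All numbers below are illustrative, not the paper's simulated breakpoint profiles.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy validation experiment: "model" data with a known trend, plus
# simulated breakpoints of known size and timing.
n = 45                                      # years, e.g. 1958-2003
t = np.arange(n)
true_trend = 0.02                           # K/yr
truth = true_trend * t + rng.normal(0.0, 0.15, n)

observed = truth.copy()
for yr, step in [(15, -0.3), (30, -0.25)]:  # simulated instrument changes
    observed[yr:] += step                   # cumulative cold-biased steps

def ols_trend(y):
    """Ordinary least-squares trend of y against t, in units per year."""
    x = t - t.mean()
    return (x * (y - y.mean())).sum() / (x ** 2).sum()

# Negative steps systematically bias the recovered trend cold relative to
# the known truth, which is what an adjustment system must undo.
print(f"true-series trend:   {ols_trend(truth):.3f} K/yr")
print(f"broken-series trend: {ols_trend(observed):.3f} K/yr")
```

Because the truth is known, the shortfall between the adjusted and true trends can be measured directly, which is how the paper concludes that the homogenization system underestimates the full magnitude of the bias.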
