
    Observed and Projected Snowmelt Runoff in the Upper Rio Grande in a Changing Climate

    As climate has warmed over the past half century, the strength of the covariance between interannual snowpack and streamflow anomalies in the Rio Grande headwaters has decreased. This change has amplified errors in seasonal streamflow forecasts made with traditional statistical methods, which rely on the now-diminishing correlation between peak snow water equivalent (SWE) and subsequent snowmelt runoff. Therefore, at a time when water resources in southwestern North America are becoming scarcer, water supply forecasters need prediction schemes that account for the dynamic nature of the relationship between precipitation, temperature, snowpack, and streamflow. We quantify temporal changes in statistical predictive models of streamflow in the upper Rio Grande basin using observed data, and interpret the results in terms of the processes that control runoff-season discharge. We then compare these observed changes to corresponding statistics in downscaled global climate models (GCMs) to gain insight into which GCMs most appropriately replicate the dynamics of interannual streamflow variability represented by the hydro-climate parameters in the headwaters of the Rio Grande. We quantify how the correlations among temperature, precipitation, SWE, and streamflow have changed over the last half century within the local climatic and hydrological system. We then assess different long-term GCM-based streamflow projections by their ability to reproduce observed relationships between climate and streamflow, and thereby better constrain projections of future flows as climate warms in the 21st century. In the Rio Grande system, we find that spring-season precipitation increasingly contributes to the variability of runoff generation as the contribution of snowpack declines.
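    The changing SWE-streamflow relationship described above is essentially a moving-window correlation between annual peak SWE and runoff-season flow. Below is a minimal sketch of that diagnostic using synthetic stand-in series and an assumed 30-year window; the study itself uses observed records and downscaled GCM output, not the hypothetical data generated here.

```python
import numpy as np
import pandas as pd

# Hypothetical annual series (assumption): peak SWE (mm) and runoff-season flow.
# A real analysis would use observed SNOTEL SWE and naturalized streamflow.
rng = np.random.default_rng(0)
years = np.arange(1960, 2021)
swe = pd.Series(rng.gamma(shape=4.0, scale=100.0, size=years.size), index=years)

# Runoff loosely tied to SWE, with noise that grows over time to mimic a
# weakening snowpack-streamflow relationship (illustrative assumption only).
noise_sd = np.linspace(30.0, 120.0, years.size)
runoff = 0.8 * swe + rng.normal(0.0, noise_sd)

# 30-year moving-window correlation between peak SWE and runoff-season flow;
# a downward trend signals degrading skill of SWE-based statistical forecasts.
window = 30
rolling_r = swe.rolling(window).corr(runoff)
print(rolling_r.dropna().round(2).tail())
```

    The same rolling statistic can be computed for GCM-derived SWE and streamflow to check whether a model reproduces the observed weakening of the relationship.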

    Evaluating Large-Storm Dominance in High-Resolution GCMs and Observations Across the Western Contiguous United States

    Extreme precipitation events are projected to increase in frequency across much of the land surface as the global climate warms, but such projections have typically relied on coarse-resolution (100–250 km) general circulation models (GCMs). The ensemble of HighResMIP GCMs presents an opportunity to evaluate how a more finely resolved atmosphere and land surface might enhance the fidelity of the simulated contribution of large-magnitude storms to total precipitation, particularly across topographically complex terrain. Here, the simulation of large-storm dominance, that is, the number of wettest days needed to reach half of the total annual precipitation, is quantified across the western United States (WUS) using four GCMs within the HighResMIP ensemble and their coarse-resolution counterparts. Historical GCM simulations (1950–2014) are evaluated against a baseline generated from station-observed daily precipitation (4,803 GHCN-D stations) and from three gridded, observationally based precipitation data sets that are coarsened to match the resolution of the GCMs. All coarse-resolution simulations produce less large-storm dominance than in observations across the WUS. For two of the four GCMs, bias in the median large-storm dominance is reduced in the HighResMIP simulation, decreasing by as much as 62% in the intermountain west region. However, the other GCMs show little change or even an increase (+28%) in bias of median large-storm dominance across multiple sub-regions. The spread in differences with resolution amongst GCMs suggests that, in addition to resolution, model structure and parameterization of precipitation-generating processes also contribute to bias in simulated large-storm dominance.
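    The large-storm dominance metric is straightforward to compute from a daily precipitation series: sort days from wettest to driest and count how many are needed to accumulate half of the annual total. A minimal sketch follows, using a synthetic daily series rather than the GHCN-D or gridded data of the study; the tie-breaking and wet-day screening here are assumptions, not the paper's exact procedure.

```python
import numpy as np

def large_storm_dominance(daily_precip):
    """Number of wettest days needed to accumulate half of the annual total.

    Smaller values mean a few large storms dominate the annual water budget.
    """
    p = np.sort(np.asarray(daily_precip, dtype=float))[::-1]  # wettest first
    target = 0.5 * p.sum()
    cumulative = np.cumsum(p)
    # First index where the running sum reaches the target, converted to a count.
    return int(np.searchsorted(cumulative, target) + 1)

# Hypothetical year of daily precipitation (mm): mostly dry days plus a few storms.
rng = np.random.default_rng(1)
wet = rng.random(365) < 0.15                       # roughly 15% wet days
daily = np.where(wet, rng.gamma(0.7, 12.0, 365), 0.0)
print(large_storm_dominance(daily))
```

    Applied year by year at each station or grid cell, the median of this count is the quantity whose bias is compared between the coarse- and high-resolution simulations.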