    The distributed model intercomparison project – Phase 2: Motivation and design of the Oklahoma experiments

    The Office of Hydrologic Development (OHD) of the National Oceanic and Atmospheric Administration's (NOAA) National Weather Service (NWS) conducted the second phase of the Distributed Model Intercomparison Project (DMIP 2). After DMIP 1, the NWS recognized the need for additional science experiments to guide its research-to-operations path towards advanced hydrologic models for river and water resources forecasting. This need was accentuated by the push to develop a broader spectrum of water resources forecasting products (such as soil moisture) in addition to the more traditional river, flash flood, and water supply forecasts. As it did for DMIP 1, the NWS sought input and contributions from the hydrologic research community. DMIP 1 showed that, using operational precipitation data, some distributed models could perform as well as lumped models in several basins and better than lumped models in one basin. In general, however, the improvements were more limited than the scientific community had anticipated. Models combining so-called conceptual rainfall-runoff mechanisms with physically-based routing schemes achieved the best overall performance. Clear gains were achieved through calibration of model parameters, with calibrated models performing better on average than uncalibrated ones. The DMIP 1 experiments were hampered by temporally inconsistent precipitation data and by too few runoff events in the verification period for some basins. Greater uncertainty in modeling small basins was noted, pointing to the need for additional tests of nested basins of various sizes. The DMIP 2 experiments in the Oklahoma (OK) region were more comprehensive than those in DMIP 1 and were designed to extend what was learned there. Many more stream gauges were identified, allowing more rigorous testing of simulations at interior points; these included two newly gauged interior basins with drainage areas smaller than the smallest in DMIP 1.
Soil moisture and routing experiments were added to further assess whether distributed models could accurately represent basin-interior processes. A longer period of higher-quality precipitation data was available, facilitating a test of the impact of data quality on model calibration. Moreover, the DMIP 2 calibration and verification periods contained more runoff events for analysis. Two lumped models were used to define a robust benchmark for evaluating the improvement of distributed models over lumped models. Fourteen groups participated in DMIP 2 using a total of sixteen models, ten of which were not in DMIP 1. This paper presents the motivation for the DMIP 2 Oklahoma experiments, discusses the major project elements, and describes the data and models used. In addition, it introduces the findings, which are covered in a companion results paper (Smith et al., this issue). Lastly, it summarizes the DMIP 1 and 2 experiments with commentary from the NWS perspective. Future papers will cover the DMIP 2 experiments in the mountainous basins of the western USA.

    Results of the DMIP 2 Oklahoma experiments

    Phase 2 of the Distributed Model Intercomparison Project (DMIP 2) was formulated primarily as a mechanism to help guide the US National Weather Service (NWS) as it expands its use of spatially distributed watershed models for operational river, flash flood, and water resources forecasting. The overall purpose of DMIP 2 was to test many distributed models with operational-quality data with a view towards meeting NWS operational forecasting needs. At the same time, DMIP 2 was formulated as an experiment that the broader scientific community could leverage as a platform for testing, evaluating, and improving the science of spatially distributed models. This paper presents the key results of the DMIP 2 experiments conducted for the Oklahoma region, which included comparisons of lumped and distributed model simulations generated with uncalibrated and calibrated parameters, water balance tests, routing and soil moisture tests, and simulations at interior locations. Simulations from 14 independent groups and 16 models are analyzed. As in DMIP 1, participant simulations were evaluated against observed hourly streamflow data and compared with simulations generated by the NWS operational lumped model. A wide range of statistical measures is used to evaluate model performance on both a run-period and an event basis. A noteworthy improvement in DMIP 2 was the combined use of two lumped models to form the benchmark for event improvement statistics, where improvement was measured in terms of runoff volume, peak flow, and peak timing for between 20 and 40 events in each basin. Results indicate that spatially distributed models calibrated to perform well for basin outlet simulations generally also perform well at interior points whose drainage areas span a wide range of scales.
Two of the models provided reasonable estimates of soil moisture versus depth over a wide geographic domain and through a period containing two severe droughts. In several parent and interior basins, a few uncalibrated spatially distributed models achieved better goodness-of-fit statistics than other calibrated distributed models, highlighting the strength of those model structures combined with their a priori parameters. In general, calibration at basin outlets alone did not greatly improve relative model performance beyond that established with uncalibrated a priori parameters. Further, results from the experiment for returning DMIP 1 participants reinforce the need for temporally consistent data for model calibration: in some cases, the improvements gained by distributed models over lumped models were not realized when the models were calibrated using the inconsistent precipitation data from DMIP 1. Event-average improvement of distributed models over the combined lumped benchmark was measured in terms of runoff volume, peak flow, and peak timing for between 20 and 40 events. The percentage of model-basin pairs with positive distributed-model improvement at basin outlets and interior points was 18%, 24%, and 28% for these three quantities, respectively; the corresponding DMIP 1 values were 14%, 33%, and 22%. While these gains may appear modest relative to DMIP 1, the DMIP 2 values were based on more precipitation–runoff events, more model-basin combinations (148 versus 51), more interior ungauged points (9 versus 3), and a benchmark comprising two lumped-model simulations. In addition, we propose a set of statistical measures that can be used to guide the calibration of distributed and lumped models for operational forecasting.
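The "distributed-model improvement" idea above (benchmark error minus distributed-model error, per event, for runoff volume, peak flow, and peak timing) can be sketched in a few lines of Python. This is an illustrative formulation only, assuming hourly hydrograph lists for one event; the function names and the simple error definitions are assumptions, not the exact statistics defined in the DMIP 2 papers.

```python
def event_metrics(flows, times):
    """Event runoff volume (sum of hourly flows), peak flow, and peak time.

    For hourly data, the sum of flows is proportional to runoff volume.
    """
    peak = max(flows)
    return sum(flows), peak, times[flows.index(peak)]

def distributed_improvement(obs, lumped, dist, times):
    """Improvement = lumped-benchmark error minus distributed-model error
    for each of volume, peak flow, and peak timing, so a positive value
    means the distributed model beats the lumped benchmark for that event.
    """
    ov, opk, opt = event_metrics(obs, times)      # observed
    lv, lpk, lpt = event_metrics(lumped, times)   # lumped benchmark
    dv, dpk, dpt = event_metrics(dist, times)     # distributed model
    vol_impr = abs(lv - ov) - abs(dv - ov)
    peak_impr = abs(lpk - opk) - abs(dpk - opk)
    time_impr = abs(lpt - opt) - abs(dpt - opt)
    return vol_impr, peak_impr, time_impr

# Toy single-event hydrographs (hourly):
obs = [1, 5, 10, 6, 2]
lumped = [1, 4, 7, 5, 2]
dist = [1, 5, 9, 6, 2]
times = [0, 1, 2, 3, 4]
v, p, t = distributed_improvement(obs, lumped, dist, times)
# Here the distributed model is closer to the observed volume and peak
# (v > 0, p > 0) and both models peak at the right hour (t == 0).
```

Aggregating such per-event values over the 20–40 events per basin, and counting the model-basin pairs with positive averages, yields figures of the same kind as the 18%/24%/28% reported above.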
