
    Explainable Physics-informed Deep Learning for Rainfall-runoff Modeling and Uncertainty Assessment across the Continental United States

    Hydrologic models provide a comprehensive tool to calibrate streamflow response to environmental variables. Various hydrologic modeling approaches, ranging from physically based to conceptual to entirely data-driven models, have been widely used for hydrologic simulation. In recent years, however, Deep Learning (DL), a new generation of Machine Learning (ML), has taken hydrologic simulation research in a new direction. DL methods have recently been proposed for rainfall-runoff modeling that complement both distributed and conceptual hydrologic models, particularly in catchments where the data needed to support a process-based model are scarce and limited. This dissertation investigated the applicability of two advanced probabilistic physics-informed DL algorithms, i.e., the deep autoregressive network (DeepAR) and the temporal fusion transformer (TFT), for daily rainfall-runoff modeling across the continental United States (CONUS). We benchmarked our proposed models against several physics-based hydrologic approaches, including the Sacramento Soil Moisture Accounting Model (SAC-SMA), Variable Infiltration Capacity (VIC), Framework for Understanding Structural Errors (FUSE), Hydrologiska Byråns Vattenbalansavdelning (HBV), and the mesoscale hydrologic model (mHM). These benchmark models fall into two groups. The first group comprises models calibrated for each basin individually (e.g., SAC-SMA, VIC, FUSE, mHM, and HBV), while the second group, which includes our physics-informed approaches, comprises regionally calibrated models. Models in this group share one parameter set for all basins in the dataset. All approaches were implemented and tested using the Catchment Attributes and Meteorology for Large-sample Studies (CAMELS) Maurer datasets. We developed TFT and DeepAR in two configurations: with static attributes (the physics-informed model) and without them (the original model). Various static and dynamic catchment physical attributes with differing spatiotemporal variability were incorporated into the pipeline to simulate how a drainage system responds to rainfall-runoff processes. To demonstrate how the model learned to differentiate between rainfall-runoff behaviors across catchments and to identify the dominant processes, sensitivity and explainability analyses of the modeling outcomes were also performed. Despite recent advancements, deep networks are perceived as challenging to parameterize; their simulations may therefore propagate error and uncertainty. To address uncertainty, a quantile likelihood function was incorporated as the TFT loss function. The results suggest that the physics-informed TFT model was superior in predicting high- and low-flow fluctuations compared with the original TFT and DeepAR models (without static attributes) and even the physics-informed DeepAR. The physics-informed TFT model correctly recognized which static attributes contribute most to streamflow generation in each specific catchment, given its climate, topography, land cover, soil, and geological conditions. The interpretability of the physics-informed TFT model and its ability to assimilate multi-source information and parameters make it a strong candidate for regional- as well as continental-scale hydrologic simulations. It was noted that both the physics-informed TFT and DeepAR were more successful in learning the intermediate- and high-flow regimes than the low-flow regime.
    The advantage in high flows can be attributed to learning a more generalizable mapping between static and dynamic attributes and runoff parameters. Both TFT and DeepAR may have enabled the learning of some true processes that are missing from both conceptual and physics-based models, possibly related to deep soil water storage (the layer where soil water is not sensitive to daily evapotranspiration), saturated hydraulic conductivity, and vegetation dynamics.
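
    As an illustration of the uncertainty treatment described above, the following is a minimal sketch of a quantile (pinball) loss of the kind used as the TFT training objective for probabilistic streamflow prediction. The quantile levels, tensor shapes, and function name are illustrative assumptions, not the exact configuration used in the dissertation.

# Minimal sketch of a quantile (pinball) loss for probabilistic prediction.
# Quantile levels and tensor shapes are illustrative assumptions.
import torch

def quantile_loss(y_pred: torch.Tensor, y_true: torch.Tensor,
                  quantiles=(0.1, 0.5, 0.9)) -> torch.Tensor:
    """y_pred: (batch, time, n_quantiles); y_true: (batch, time)."""
    losses = []
    for i, q in enumerate(quantiles):
        errors = y_true - y_pred[..., i]
        # The pinball loss penalizes under- and over-prediction asymmetrically,
        # so each output channel learns the corresponding streamflow quantile.
        losses.append(torch.max(q * errors, (q - 1.0) * errors))
    return torch.mean(torch.stack(losses, dim=-1))

    Minimizing this loss over several quantile levels yields a predictive interval around the median streamflow estimate, which is one common way to express the simulation uncertainty the abstract refers to.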

    Reinforcement Learning Policy Gradient Methods for Reservoir Operation Management and Control

    Changes in demand, various hydrological inputs, and environmental stressors are among the issues that water managers and policymakers face on a regular basis. These concerns have sparked interest in applying different techniques to determine reservoir operation policy and improve reservoir release decisions. As the resolution of the analysis increases, it becomes more difficult to represent a real-world system effectively using traditional approaches for determining the best reservoir operation policy. One of the challenges is the "curse of dimensionality," which occurs when the discretization of the state and action spaces becomes finer or when more state or action variables are taken into account. Because of this curse, the number of state-action variables is limited, rendering Dynamic Programming (DP) and Stochastic Dynamic Programming (SDP) ineffective for complex reservoir optimization problems. Deep Reinforcement Learning (DRL) is an intelligent approach for overcoming the aforementioned curses in the stochastic optimization of reservoir system planning. This study examined several novel DRL continuous-action policy gradient methods (PGMs), including Deep Deterministic Policy Gradients (DDPG), Twin Delayed DDPG (TD3), and two versions of Soft Actor-Critic (SAC18 and SAC19), to identify an optimal reservoir operation policy for the Folsom Reservoir located in California, US. The Folsom Reservoir supplies agricultural and municipal water, hydropower, environmental flows, and flood protection to the City of Sacramento. We evaluated the DRL methods' release decisions with respect to these demands and compared the results to the standard operating policy (SOP) and base conditions using different performance criteria and sustainability indices. The TD3 and SAC methods showed promising performance in providing an optimal operation policy. Experiments on continuous action spaces for reservoir operation decisions demonstrated that the DRL techniques can efficiently learn strategic policies in spaces affected by the curses of dimensionality and modeling.
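
    The following is a minimal sketch of the kind of continuous-action environment a policy-gradient agent (DDPG, TD3, or SAC) could be trained against. The mass-balance step, synthetic inflow model, demand value, and reward weights are illustrative assumptions, not the Folsom Reservoir configuration used in the study.

# Minimal single-reservoir mass-balance environment with a continuous release
# action; all parameters and the reward shape are illustrative assumptions.
import numpy as np

class ReservoirEnv:
    def __init__(self, capacity=1000.0, demand=50.0, seed=0):
        self.capacity, self.demand = capacity, demand
        self.rng = np.random.default_rng(seed)
        self.storage = capacity / 2.0

    def reset(self):
        self.storage = self.capacity / 2.0
        return np.array([self.storage / self.capacity], dtype=np.float32)

    def step(self, action: float):
        # Continuous action in [0, 1]: fraction of current storage to release.
        frac = min(max(float(action), 0.0), 1.0)
        release = frac * self.storage
        inflow = self.rng.gamma(shape=2.0, scale=20.0)   # synthetic daily inflow
        new_storage = self.storage - release + inflow
        spill = max(new_storage - self.capacity, 0.0)    # flood spill above capacity
        self.storage = min(max(new_storage, 0.0), self.capacity)
        # Reward penalizes unmet demand and flood spill (illustrative weights).
        reward = -abs(release - self.demand) - 10.0 * spill
        obs = np.array([self.storage / self.capacity], dtype=np.float32)
        return obs, reward, False, {}

    In practice, the observation would also include hydrologic forcings and storage targets, and the reward would reflect the water-supply, hydropower, environmental-flow, and flood-protection objectives described above; because both state and action are continuous, this is the setting where actor-critic PGMs avoid the discretization that triggers the curse of dimensionality in DP and SDP.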

    Spatio-Temporal Assessment of Global Gridded Evapotranspiration Datasets across Iran

    Estimating evapotranspiration (ET), the main water output flux within basins, is an important step in assessing hydrological changes and water availability. However, direct measurements of ET are challenging, especially for large regions. Global products now provide gridded ET estimates at different temporal resolutions, each based on its own estimation method and data sources. This study investigates the differences between the ERA5, GLEAM, and GLDAS ET estimates at gridded points across Iran, and their accuracy compared with reference ET. The spatial and temporal discrepancies between the datasets are identified, as well as their co-variation with forcing variables. The reference ET values used to check the accuracy of the datasets were based on the water balance (ETwb) of Iran's main basins, and the co-variation of each product's estimation errors with the forcing drivers of ET was also examined. The results indicate that ETERA5 provides higher base average values and lower maximum annual average values than ETGLEAM. Temporal changes at the annual scale are similar for the GLEAM, ERA5, and GLDAS datasets, but differences are identified at seasonal and monthly time scales. Some discrepancies are also found in the spatial distribution of ET, but in general all datasets show similarities, e.g., for basins in humid regions. ETERA5 correlates more strongly with available energy than with available water, while ETGLEAM correlates more strongly with available water, and ETGLDAS does not correlate with either of these drivers. Based on the comparison of ETERA5 and ETGLEAM with ETwb, both have similar spatial error distributions, while ETGLDAS overestimates ET in northern basins and underestimates it in southern basins relative to ETERA5 and ETGLEAM. All three datasets provide better ET estimates (values closer to ETwb) in the hyper-arid and arid regions from central to eastern Iran than in the humid areas. Thus, the GLEAM, ERA5, and GLDAS datasets are more suitable for estimating ET for arid rather than humid basins in Iran.
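
    The water-balance reference ET mentioned above follows the standard closure ETwb = P - Q - dS per basin and period. The sketch below shows this calculation; the variable names, units (mm per year), and the source of the storage-change term are illustrative assumptions, not the exact procedure of the study.

# Minimal sketch of basin-scale water-balance reference ET: ETwb = P - Q - dS.
# Inputs are annual precipitation, runoff, and storage change in mm/yr.
import numpy as np

def water_balance_et(precip_mm: np.ndarray, runoff_mm: np.ndarray,
                     storage_change_mm: np.ndarray) -> np.ndarray:
    """Return basin-scale ETwb for each year (mm/yr)."""
    return precip_mm - runoff_mm - storage_change_mm

# Example: P = 400, Q = 120, dS = -10 gives ETwb = 400 - 120 - (-10) = 290 mm/yr.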