
    Applying a statewide geospatial leaching tool for assessing soil vulnerability ratings for agrochemicals across the contiguous United States

    A large-scale leaching assessment tool not only illustrates soil (or groundwater) vulnerability in unmonitored areas, but can also identify areas of potential concern for agrochemical contamination. This study describes how the statewide leaching tool for Hawaii, recently modified for use with pesticides and volatile organic compounds, can be extended to a national assessment of soil vulnerability ratings. For this study, the tool was updated by extending the soil and recharge maps to cover the lower 48 states of the United States (US). In addition, digital maps of annual pesticide use (at a national scale) as well as detailed soil properties and monthly recharge rates (at high spatial and temporal resolutions) were used to examine variations in the leaching loads of pesticides in the upper soil horizons. Results showed that the extended tool successfully delineated areas of high to low vulnerability to selected pesticides. The leaching potential was high for picloram, medium for simazine, and low to negligible for 2,4-D and glyphosate. The mass loadings of picloram moving below 0.5 m depth increased greatly in the northwestern and central US, where it is used extensively on agricultural crops. However, in addition to the amount of pesticide used, the annual leaching load of atrazine was also affected by other factors that determine intrinsic aquifer vulnerability, such as soil and recharge properties. The spatial and temporal resolution of the digital maps had a great effect on the estimated leaching potential of pesticides, requiring a trade-off between data availability and accuracy. Potential applications of this tool include rapid, large-scale vulnerability assessments for emerging contaminants that are hard to quantify directly through vadose zone models due to a lack of complete environmental data.
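    Vulnerability ratings in leaching tools of this kind are commonly based on attenuation-factor (AF) and retardation-factor (RF) indices. The sketch below is a minimal Python illustration with hypothetical soil and pesticide parameters (not values from the study), showing how contrasting Koc and half-life values separate a mobile pesticide such as picloram from a strongly sorbed one such as glyphosate.

```python
import math

def retardation_factor(bulk_density, f_oc, koc, theta_fc):
    """RF = 1 + (rho_b * f_oc * Koc) / theta_FC (dimensionless)."""
    return 1.0 + (bulk_density * f_oc * koc) / theta_fc

def attenuation_factor(depth_m, recharge_m_per_day, theta_fc, rf, half_life_days):
    """AF = exp(-0.693 * travel_time / t_half); 1 = no attenuation, ~0 = full attenuation."""
    travel_time = depth_m * rf * theta_fc / recharge_m_per_day  # days to reach depth_m
    return math.exp(-0.693 * travel_time / half_life_days)

# Illustrative (hypothetical) soil properties and recharge rate.
soil = dict(bulk_density=1.4, f_oc=0.01, theta_fc=0.3)  # g/cm3, -, cm3/cm3
recharge = 0.002                                         # m/day

# Approximate literature-style Koc (L/kg) and half-life (days); illustrative only.
pesticides = {
    "picloram":   dict(koc=16,     half_life=90),   # weakly sorbed, persistent -> leaches
    "glyphosate": dict(koc=24000,  half_life=47),   # strongly sorbed -> negligible leaching
}

for name, p in pesticides.items():
    rf = retardation_factor(soil["bulk_density"], soil["f_oc"], p["koc"], soil["theta_fc"])
    af = attenuation_factor(0.5, recharge, soil["theta_fc"], rf, p["half_life"])
    print(f"{name}: RF = {rf:.1f}, AF at 0.5 m = {af:.2e}")
```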

    Carbon dynamics and export from flooded wetlands: A modeling approach

    Described in this article are the development and validation of a process-based model for carbon cycling in flooded wetlands, called WetQual-C. The model considers various biogeochemical interactions affecting C cycling, greenhouse gas emissions, and organic carbon export and retention. WetQual-C couples carbon cycling with other interrelated geochemical cycles in wetlands (i.e., nitrogen and oxygen) and fully reflects the dynamics of the thin oxidized zone at the soil-water interface. Using data collected from a small wetland receiving runoff from an agricultural watershed on the eastern shore of Chesapeake Bay, we assessed model performance and carried out a thorough sensitivity and uncertainty analysis to evaluate the credibility of the model. Overall, the model performed well in capturing the fluctuations and dynamics of TOC export from the study wetland. Model results revealed that over a period of 2 years, the wetland removed or retained the equivalent of 47 ± 12% of its organic carbon (OC) intake, mostly via OC decomposition and DOC diffusion into the sediment. The study wetland appeared to be a carbon sink rather than a source and served its purpose as a relatively effective and low-cost means of improving water quality.
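    As an illustration of the kind of mass balance such a model solves, the sketch below integrates a deliberately simplified two-pool organic carbon budget (water column and sediment) with first-order decomposition, settling, and diffusive exchange. The rate constants and pool structure are hypothetical and this is not the actual WetQual-C formulation.

```python
from scipy.integrate import solve_ivp

# Hypothetical first-order rate constants (1/day) and loads; not WetQual-C's parameters.
k_dec  = 0.02   # decomposition of organic carbon (loss as CO2/CH4)
k_set  = 0.05   # settling of particulate OC from water column to sediment
k_diff = 0.01   # diffusive DOC exchange from water column to sediment pore water
q_in   = 1.0    # inflow OC load to the water column (g C / m2 / day)
q_out  = 0.8    # hydraulic export coefficient acting on water-column OC (1/day)

def carbon_balance(t, y):
    """y[0]: water-column OC (g C/m2), y[1]: sediment OC (g C/m2)."""
    oc_w, oc_s = y
    d_oc_w = q_in - (k_dec + k_set + k_diff + q_out) * oc_w
    d_oc_s = (k_set + k_diff) * oc_w - k_dec * oc_s
    return [d_oc_w, d_oc_s]

# Integrate over two years (730 days) from illustrative initial pools.
sol = solve_ivp(carbon_balance, t_span=(0, 730), y0=[5.0, 50.0])
oc_w_end, oc_s_end = sol.y[:, -1]
print(f"After 2 years: water-column OC = {oc_w_end:.1f}, sediment OC = {oc_s_end:.1f} g C/m2")
```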

    An Integrated Approach for Modeling Wetland Water Level: Application to a Headwater Wetland in Coastal Alabama, USA

    Headwater wetlands provide many benefits, such as water quality improvement, water storage, and habitat provision. These wetlands are characterized by water levels near the surface and respond rapidly to rainfall events. Driven by both groundwater and surface water inputs, water levels (WLs) can be above or below the ground at any given time, depending on the season and climatic conditions. Therefore, predicting WLs in headwater wetlands is a complex problem. In this study, a hybrid modeling approach was developed for improved WL predictions in wetlands by coupling a watershed model with artificial neural networks (ANNs). In this approach, baseflow and stormflow from the watershed draining to a wetland are first estimated using an uncalibrated Soil and Water Assessment Tool (SWAT) model. These estimates are then combined with meteorological variables and used as inputs to an ANN model for predicting daily WLs in the wetland. The hybrid model was used to successfully predict WLs in a headwater wetland in coastal Alabama, USA. The model was then used to predict WLs at the study wetland from 1951 to 2005 to explore possible teleconnections between the El Niño Southern Oscillation (ENSO) and WLs. Results show that both precipitation and the variation in WLs are partially affected by ENSO in the study area. A correlation analysis between seasonal precipitation and the Niño 3.4 index suggests that winters are wetter during El Niño in coastal Alabama. The analysis also revealed a significant negative correlation between WLs and the Niño 3.4 index during the El Niño phase in spring. The findings of this study and the developed methodology and tools are useful for predicting long-term WLs in wetlands and constructing more accurate restoration plans under a variable climate.
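    A minimal sketch of the hybrid structure is given below, assuming SWAT-simulated baseflow and stormflow have already been exported to a daily table; the file and column names are hypothetical, and scikit-learn's MLPRegressor stands in for the ANN used in the study.

```python
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Hypothetical daily table: SWAT-simulated flow components plus meteorological drivers.
df = pd.read_csv("wetland_daily_inputs.csv", parse_dates=["date"])
features = ["swat_baseflow", "swat_stormflow", "precip", "pet", "air_temp"]
target = "water_level"  # observed wetland water level (m relative to ground surface)

# Preserve chronological order when splitting daily data.
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df[target], test_size=0.3, shuffle=False)

ann = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=2000, random_state=0))
ann.fit(X_train, y_train)
print("Test R2:", r2_score(y_test, ann.predict(X_test)))

# Once trained, the same pipeline can be driven with 1951-2005 SWAT and meteorological
# reconstructions to hindcast water levels for the ENSO teleconnection analysis.
```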

    A Machine Learning Approach to Predict Watershed Health Indices for Sediments and Nutrients at Ungauged Basins

    Effective water quality management and reliable environmental modeling depend on the availability, size, and quality of water quality (WQ) data. Observed stream water quality data are usually sparse in both time and space. Reconstruction of water quality time series using surrogate variables such as streamflow has been used to evaluate risk metrics such as reliability, resilience, vulnerability, and watershed health (WH), but only at gauged locations. Estimating these indices for ungauged watersheds has not been attempted because of the high-dimensional nature of the potential predictor space. In this study, machine learning (ML) models, namely random forest regression, AdaBoost, gradient boosting machines, and Bayesian ridge regression (along with an ensemble model), were evaluated for predicting watershed health and other risk metrics at ungauged hydrologic unit code 10 (HUC-10) basins using watershed attributes, long-term climate data, soil data, land use and land cover data, fertilizer sales data, and geographic information as predictor variables. These ML models were tested over the Upper Mississippi River Basin, the Ohio River Basin, and the Maumee River Basin for water quality constituents such as suspended sediment concentration, nitrogen, and phosphorus. The random forest, AdaBoost, and gradient boosting regressors typically showed a coefficient of determination R² > 0.8 for suspended sediment concentration and nitrogen during the testing stage, while the ensemble model exhibited R² > 0.95. Watershed health values with respect to suspended sediments and nitrogen predicted by all ML models, including the ensemble model, were lower for areas with larger agricultural land use, moderate for areas with predominantly urban land use, and higher for forested areas; the trained ML models adequately predicted WH in ungauged basins. However, low WH values with respect to phosphorus were predicted at some basins in the Upper Mississippi River Basin with dominant forest land use. Results suggest that the proposed ML models provide robust estimates at ungauged locations when sufficient training data are available for a WQ constituent. ML models may be used as quick screening tools by decision makers and water quality monitoring agencies to identify critical source areas or hotspots with respect to different water quality constituents, even for ungauged watersheds.
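    The sketch below illustrates one way to assemble the four base learners and a simple averaging ensemble in scikit-learn, assuming a hypothetical HUC-10 predictor table; it is not the study's actual workflow or hyperparameters.

```python
import pandas as pd
from sklearn.ensemble import (RandomForestRegressor, AdaBoostRegressor,
                              GradientBoostingRegressor, VotingRegressor)
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Hypothetical HUC-10 table: watershed attributes, climate, soils, land use, fertilizer sales.
df = pd.read_csv("huc10_predictors.csv")
predictors = [c for c in df.columns if c not in ("huc10", "wh_sediment")]
X_train, X_test, y_train, y_test = train_test_split(
    df[predictors], df["wh_sediment"], test_size=0.25, random_state=42)

base_models = [
    ("rf",  RandomForestRegressor(n_estimators=500, random_state=42)),
    ("ada", AdaBoostRegressor(n_estimators=300, random_state=42)),
    ("gbm", GradientBoostingRegressor(n_estimators=500, random_state=42)),
    ("br",  BayesianRidge()),
]

# Score each base learner on the held-out basins.
for name, model in base_models:
    model.fit(X_train, y_train)
    print(f"{name}: R2 = {r2_score(y_test, model.predict(X_test)):.3f}")

# Simple averaging ensemble over the four base learners.
ensemble = VotingRegressor(estimators=base_models)
ensemble.fit(X_train, y_train)
print("ensemble: R2 =", round(r2_score(y_test, ensemble.predict(X_test)), 3))
```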

    Evaluation of hydrological models at gauged and ungauged basins using machine learning-based limits-of-acceptability and hydrological signatures

    Hydrological models are evaluated by comparison with observed hydrological quantities such as streamflow. A model evaluation procedure should account for the dominantly epistemic errors in hydrological data, such as model-input precipitation and streamflow, and avoid type-2 errors (rejecting a good model). This study uses quantile random forests (QRF) to develop limits of acceptability (LoA) over streamflows that account for uncertainties in precipitation and streamflow values. A significant advantage of this method is that it can be used to evaluate models even at ungauged basins. The method was used to evaluate a hydrological model, the Sacramento Soil Moisture Accounting (SAC-SMA) model, over the St. Joseph River Watershed (SJRW) for both gauged and hypothetical ungauged scenarios. QRF defined wide LoAs that classified a large number of models as behavioral, suggesting the need for additional measures to develop a more discriminating inference procedure. The paper discusses why the LoAs defined by QRF were wide, along with some ways to define more discriminating LoAs. To further constrain the model, five streamflow-based signatures (i.e., the autocorrelation function, Hurst exponent, baseflow index, flow duration curve, and long-term runoff coefficient) were used. The combination of LoAs over streamflow and streamflow-based signatures helped constrain the set of behavioral models in both the gauged and ungauged scenarios. Among the signatures used in this study, the Hurst exponent and baseflow index were the most useful. All one million models evaluated in this study were eventually rejected as unfit for purpose.
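    The sketch below approximates QRF-based limits of acceptability by taking quantiles of per-tree predictions from an ordinary random forest, a common simplification of a true quantile regression forest; the input table, column names, quantile levels, and coverage threshold are hypothetical, and the signature check shown is only the long-term runoff coefficient.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical daily table: climate/basin predictors and observed streamflow.
df = pd.read_csv("sjrw_daily.csv")
predictors = ["precip", "pet", "day_of_year", "drainage_area"]
X, y = df[predictors], df["observed_flow"]

# Approximate a quantile random forest: quantiles across per-tree predictions.
forest = RandomForestRegressor(n_estimators=500, min_samples_leaf=20, random_state=0)
forest.fit(X, y)
tree_preds = np.column_stack([t.predict(X.values) for t in forest.estimators_])
lower = np.percentile(tree_preds, 5, axis=1)    # lower limit of acceptability
upper = np.percentile(tree_preds, 95, axis=1)   # upper limit of acceptability

def is_behavioral(simulated_flow, coverage=0.9):
    """Accept a model run if its simulated flows fall inside the LoAs often enough."""
    inside = (simulated_flow >= lower) & (simulated_flow <= upper)
    return inside.mean() >= coverage

def runoff_coefficient(flow, precip):
    """Long-term runoff coefficient, one of the signatures used to further constrain runs."""
    return flow.sum() / precip.sum()
```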