
    Uncertainty in water transit time estimation with StorAge Selection functions and tracer data interpolation

    Transit time distributions (TTDs) of streamflow are useful descriptors for understanding flow and solute transport in catchments. Catchment-scale TTDs can be modeled using tracer data (e.g. oxygen isotopes, such as δ18O) in inflow and outflows by employing StorAge Selection (SAS) functions. However, tracer data are often sparse in space and time, so they need to be interpolated to increase their spatiotemporal resolution. Moreover, SAS functions can be parameterized with different forms, but there is no general agreement on which one should be used. Both of these aspects induce uncertainty in the simulated TTDs, and the individual uncertainty sources as well as their combined effect have not been fully investigated. This study provides a comprehensive analysis of the TTD uncertainty resulting from 12 model setups obtained by combining different interpolation schemes for δ18O in precipitation and distinct SAS functions. For each model setup, we found behavioral solutions with satisfactory model performance for in-stream δ18O (KGE > 0.55, where KGE refers to the Kling-Gupta efficiency). Differences in KGE values were statistically significant, thereby showing the relevance of the chosen setup for simulating TTDs. We found a large uncertainty in the simulated TTDs, represented by a large range of variability in the 95% confidence interval of the median transit time, varying at most between 259 and 1009 d across all tested setups. Uncertainty in TTDs was mainly associated with the temporal interpolation of δ18O in precipitation, the choice between time-variant and time-invariant SAS functions, flow conditions, and the use of non-spatially interpolated δ18O in precipitation. We discuss the implications of these results for the SAS framework, for uncertainty characterization in TTD-based models, and for the influence of this uncertainty on water quality and quantity studies.
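
    For readers unfamiliar with the performance criterion mentioned above, the sketch below is a minimal Python implementation of the standard Kling-Gupta efficiency formula; the function name is our own, and only the 0.55 threshold comes from the abstract.

```python
import numpy as np

def kling_gupta_efficiency(sim, obs):
    """Standard Kling-Gupta efficiency:
    KGE = 1 - sqrt((r - 1)^2 + (alpha - 1)^2 + (beta - 1)^2)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]   # linear correlation between simulation and observation
    alpha = sim.std() / obs.std()     # variability ratio
    beta = sim.mean() / obs.mean()    # bias ratio
    return 1.0 - np.sqrt((r - 1.0)**2 + (alpha - 1.0)**2 + (beta - 1.0)**2)

# A parameter set would be kept as "behavioral" if, for example,
# kling_gupta_efficiency(simulated_d18O, observed_d18O) > 0.55.
```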

    Upscaling nitrogen removal capacity from local hotspots to low stream orders’ drainage basins

    Denitrification is the main process removing nitrate in river drainage basins; it buffers inputs from agricultural land and limits aquatic ecosystem pollution. However, the identification of denitrification hotspots (for example, riparian zones), their role in a landscape context, and the evolution of their overall removal capacity at the drainage basin scale are still challenging. The main approaches used (that is, the mass balance method, denitrification proxies, and potential wetted areas) suffer from methodological drawbacks. We review these approaches and the key frameworks that have been proposed to date to formalize the understanding of the mechanisms driving denitrification: (i) diffusion versus advection pathways of nitrate transfer, (ii) the biogeochemical hotspot concept, and (iii) the Damköhler ratio. Based on these frameworks, we propose using high-resolution mapping of catchment topography and landscape pattern to identify potential denitrification sites, together with dynamic hydrologic modeling at a similar spatial scale (<10 km2). This would allow the quantification of cumulative denitrification activity at the small catchment scale, using spatially distributed Damköhler and Peclet numbers and biogeochemical proxies. Integration of existing frameworks with new tools and methods offers the potential for significant breakthroughs in the quantification and modeling of denitrification in small drainage basins. This can provide a basis for improved protection and restoration of surface water and groundwater quality.
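
    To make the last two frameworks concrete: the Damköhler number compares the time a solute spends along a flow path with the time its reaction needs, while the Peclet number compares advective and dispersive transfer. The sketch below uses the standard textbook definitions; the example values are invented for illustration and are not results from the review.

```python
def damkohler(residence_time_s, reaction_timescale_s):
    """Da = transport (residence) time / reaction time.
    Da >> 1: reaction runs to completion along the flow path (e.g. full denitrification);
    Da << 1: solutes pass through largely unreacted."""
    return residence_time_s / reaction_timescale_s

def peclet(velocity_m_s, length_m, dispersion_m2_s):
    """Pe = v * L / D: advection-dominated (Pe >> 1) versus
    diffusion/dispersion-dominated (Pe << 1) nitrate transfer."""
    return velocity_m_s * length_m / dispersion_m2_s

# Hypothetical riparian flow path: 10 m long, 1e-5 m/s pore velocity,
# 1e-6 m2/s dispersion, 5-day denitrification timescale.
tau = 10 / 1e-5                      # 1e6 s residence time (~11.6 days)
print(damkohler(tau, 5 * 86400))     # ~2.3 -> denitrification can approach completion
print(peclet(1e-5, 10, 1e-6))        # 100  -> advection-dominated transfer
```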

    Towards Application of StorAge Selection Functions in Large-Scale Catchments with Heterogeneous Travel Times and Subsurface Reactivity

    StorAge Selection (SAS) functions describe how a catchment selectively removes water and solutes of different ages via discharge, thus controlling transit time distributions (TTDs) and the solute composition of discharge. Previous studies have successfully applied SAS functions in a spatially lumped approach to capture catchment-scale transport phenomena of (non-)conservative solutes. The lumped approach assumes that water and solutes within a water parcel of a specific age are well mixed. While this assumption does not cause any changes in the age of water, the spatial heterogeneity of solute concentrations within this water parcel is lost. In addition, in large catchments, headwater and lowland sub-catchments can behave in different ways; e.g., their transit times (TTs) and reaction rates can differ in magnitude. This, in turn, might not be sufficiently represented in a lumped SAS approach. In this study, we applied the mHM-SAS model (Nguyen et al., 2020) with a semi-distributed implementation of SAS functions. The nested mesoscale Selke catchment (Germany), with heterogeneous land use management practices, TTs, and subsurface reactivity, was used as a case study. In addition to spatial variability, a functional relationship between the parameters of the SAS functions and storage dynamics was introduced to capture temporal dynamics of the selection preference for discharge. High-frequency in-stream nitrate data were used to validate the proposed approach. Results show that the proposed approach represents nitrate export well at both the sub-catchment and catchment levels. The model reveals that catchment nitrate export is controlled by (1) the headwater sub-catchment, with fast TTs and a high denitrification rate, and (2) the lowland sub-catchment, with longer TTs and a low denitrification rate. In general, the proposed approach serves as a promising tool for understanding the interplay of transport and reaction times between different sub-catchments, which controls nitrate export in a mesoscale heterogeneous catchment.
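
    As background for readers new to the formalism: a SAS function weights discharge over the catchment's age-ranked storage. One common parameterization in the SAS literature is the power-law form Ω(P_S) = P_S^k; the abstract does not state which form mHM-SAS uses here, so the Python discretization below is purely illustrative.

```python
import numpy as np

def powerlaw_sas_weights(age_ranked_storage, k):
    """Discretized power-law SAS function Omega(P_S) = P_S**k.
    P_S is the cumulative age-ranked storage normalized to [0, 1];
    k < 1 -> preference for young water, k = 1 -> uniform sampling
    (well-mixed), k > 1 -> preference for old water."""
    S = np.asarray(age_ranked_storage, float)        # storage per age bin, youngest first
    P_S = np.cumsum(S) / S.sum()                     # cumulative rank storage in [0, 1]
    Omega = P_S**k                                   # cumulative selection function
    return np.diff(np.concatenate(([0.0], Omega)))   # per-bin discharge weights (sum to 1)

# Example: 5 age bins of equal storage; k = 0.5 skews discharge toward young water.
w = powerlaw_sas_weights(np.full(5, 10.0), k=0.5)
# The TTD of discharge follows w; a stream concentration would be np.dot(w, c_bins)
# for per-bin concentrations c_bins.
```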

    Droughts can reduce the nitrogen retention capacity of catchments

    In 2018–2019, Central Europe experienced an unprecedented multi-year drought with severe impacts on society and ecosystems. In this study, we analyzed the impact of this drought on water quality by comparing long-term (1997–2017) nitrate export with 2018–2019 export in a heterogeneous mesoscale catchment. We combined data-driven analysis with process-based modeling to analyze nitrogen retention and the underlying mechanisms in the soils and during subsurface transport. We found a drought-induced shift in concentration-discharge relationships, reflecting exceptionally low riverine nitrate concentrations during dry periods and exceptionally high concentrations during subsequent wet periods. Nitrate loads were up to 70% higher compared to the long-term load-discharge relationship. Model simulations confirmed that this increase was driven by decreased denitrification and plant uptake and subsequent flushing of accumulated nitrogen during rewetting. Long transit times (>20 years) inhibited a fast response but may contribute to a long-term drought legacy. Overall, our study reveals that severe multi-year droughts, which are predicted to become more frequent across Europe, can reduce the nitrogen retention capacity of catchments, thereby intensifying nitrate pollution and threatening water quality.
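
    The shift in concentration-discharge (C-Q) behavior described above is commonly summarized by the exponent of a power-law fit C = aQ^b. The sketch below shows one conventional way to estimate that exponent; the interpretation thresholds in the comments are standard conventions, not results from this study.

```python
import numpy as np

def fit_cq_powerlaw(Q, C):
    """Fit C = a * Q**b in log-log space. The exponent b summarizes the
    concentration-discharge relationship: b > 0 suggests flushing/enrichment,
    b ~ 0 chemostatic behavior, b < 0 dilution."""
    logQ, logC = np.log(Q), np.log(C)
    b, log_a = np.polyfit(logQ, logC, 1)   # slope and intercept of the log-log regression
    return np.exp(log_a), b

# Comparing b fitted on pre-drought years (1997-2017) with b fitted on 2018-2019
# data would expose the kind of drought-induced C-Q shift reported above.
```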

    Advancing measurements and representations of subsurface heterogeneity and dynamic processes: towards 4D hydrogeology

    Essentially all hydrogeological processes are strongly influenced by subsurface spatial heterogeneity and the temporal variation of environmental conditions, hydraulic properties, and solute concentrations. This spatial and temporal variability generally leads to effective behaviors and emerging phenomena that cannot be predicted from conventional approaches based on homogeneous assumptions and models. However, it is not always clear when, why, how, and at what scale the 4D (3D + time) nature of the subsurface needs to be considered in hydrogeological monitoring, modeling, and applications. In this paper, we discuss the value of and potential for monitoring and characterizing spatial and temporal variability, including 4D imaging, in a series of hydrogeological processes: (1) groundwater fluxes, (2) solute transport and reaction, (3) vadose zone dynamics, and (4) surface–subsurface water interactions. We first identify the main challenges related to the coupling of spatial and temporal fluctuations for these processes. We then highlight recent innovations that have led to significant breakthroughs in the high-resolution space–time imaging, characterization, monitoring, and modeling of these spatial and temporal fluctuations. We finally propose a classification of processes and applications at different scales according to their need and potential for high-resolution space–time imaging. We thus advocate a more systematic characterization of the dynamic and 3D nature of the subsurface for a series of critical processes and emerging applications. This calls for the validation of 4D imaging techniques at highly instrumented observatories and for the harmonization of open databases to share hydrogeological data sets in their 4D components.

    Organizational Principles of Hyporheic Exchange Flow and Biogeochemical Cycling in River Networks Across Scales

    Hyporheic zones increase freshwater ecosystem resilience to hydrological extremes and global environmental change. However, current conceptualizations of hyporheic exchange, residence time distributions, and the associated biogeochemical cycling in streambed sediments do not always accurately explain the hydrological and biogeochemical complexity observed in streams and rivers. Specifically, existing conceptual models insufficiently represent the coupled transport and reactivity along groundwater and surface water flow paths, the role of autochthonous organic matter in streambed biogeochemical functioning, and the feedbacks between surface-subsurface ecological processes, both within and across spatial and temporal scales. While simplified approaches to these issues are justifiable and necessary for transferability, the exclusion of important hyporheic processes from our conceptualizations can lead to erroneous conclusions and inadequate understanding and management of interconnected surface water and groundwater environments. This is particularly true at the landscape scale, where the organizational principles of spatio-temporal dynamics of hyporheic exchange flow (HEF) and biogeochemical processes remain largely uncharacterized. This article seeks to identify the most important drivers and controls of HEF and biogeochemical cycling based on a comprehensive synthesis of findings from a wide range of river systems. We use these observations to test current paradigms and conceptual models, discussing the interactions of local-to-regional hydrological, geomorphological, and ecological controls of hyporheic zone functioning. This improved conceptualization of the landscape organizational principles of drivers of HEF and biogeochemical processes from reach to catchment scales will inform future river research directions and watershed management strategies.

    Progression of Late-Onset Stargardt Disease

    PURPOSE. Identification of sensitive biomarkers is essential to determine potential effects of emerging therapeutic trials for Stargardt disease. This study aimed to describe the natural history of late-onset Stargardt disease and to demonstrate the accuracy of retinal pigment epithelium (RPE) atrophy progression as an outcome measure. METHODS. We performed a retrospective cohort study collecting multicenter data from 47 patients (91 eyes) with late-onset Stargardt, defined by clinical phenotype, at least one ABCA4 mutation, and age at disease onset ≥ 45 years. We analyzed RPE atrophy progression on fundus autofluorescence and near-infrared reflectance imaging using semiautomated software and a linear mixed model. We performed sample size calculations to assess the power in a simulated 2-year interventional study and assessed visual endpoints using time-to-event analysis. RESULTS. Over time, progression of RPE atrophy was observed (mean: 0.22 mm/year, 95% confidence interval [CI]: 0.19-0.27). By including only patients with bilateral RPE atrophy in a future trial, 32 patients would be needed to reach a power of 83.9% (95% CI: 83.1-84.6), assuming a fixed therapeutic effect size of 30%. We found a median interval between disease onset and visual acuity decline to 20/32, 20/80, and 20/200 of 2.74 (95% CI: 0.54-4.41), 10.15 (95% CI: 6.13-11.38), and 11.38 (95% CI: 6.13-13.34) years, respectively. CONCLUSIONS. We show that RPE atrophy represents a robust biomarker to monitor disease progression in future therapeutic trials. In contrast, the course of visual acuity was highly variable.
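
    The power figure quoted above comes from the authors' mixed-model analysis. As a rough illustration of what such a sample size calculation involves, the sketch below runs a Monte-Carlo power estimate for a simplified two-arm comparison of progression rates. The between-patient SD is an assumed placeholder, not a value from the paper, so this sketch will not reproduce the 83.9% figure; repeated measurements in a mixed model yield more power than the single-endpoint t-test used here.

```python
import numpy as np
from scipy import stats

def simulated_power(n_per_arm, mean_rate=0.22, sd_rate=0.10,
                    effect=0.30, n_sims=5000, alpha=0.05, seed=0):
    """Monte-Carlo power for detecting a fixed relative slowing of RPE-atrophy
    progression (mm/year) with a two-sample t-test. sd_rate is an assumed
    between-patient SD, not a value reported in the paper."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(mean_rate, sd_rate, n_per_arm)
        treated = rng.normal(mean_rate * (1 - effect), sd_rate, n_per_arm)
        if stats.ttest_ind(control, treated).pvalue < alpha:
            hits += 1
    return hits / n_sims

print(simulated_power(16))  # 16 per arm = 32 patients total, as in the abstract
```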

    Cytokine Plasma Levels: Reliable Predictors for Radiation Pneumonitis?

    BACKGROUND: Radiotherapy (RT) is the primary treatment modality for inoperable, locally advanced non-small-cell lung cancer (NSCLC), but even with highly conformal treatment planning, radiation pneumonitis (RP) remains the most serious, dose-limiting complication. Previous clinical reports proposed that cytokine plasma levels measured during RT make it possible to estimate the individual risk of patients developing RP. The identification of such cytokine risk profiles would facilitate tailoring radiotherapy to maximize treatment efficacy and minimize radiation toxicity. However, cytokines are not only produced in normal lung tissue after irradiation but are also over-expressed in tumour cells of NSCLC specimens. This tumour-derived cytokine production may influence circulating plasma levels in NSCLC patients. The aim of the present study was to investigate the prognostic value of TNF-alpha, IL-1beta, IL-6 and TGF-beta1 plasma levels to predict radiation pneumonitis and to evaluate the impact of tumour-derived cytokine production on circulating plasma levels in patients irradiated for NSCLC. METHODOLOGY/PRINCIPAL FINDINGS: In 52 NSCLC patients (stage I-III), cytokine plasma levels were investigated by ELISA before and weekly during RT, during follow-up (1/3/6/9 months after RT), and at the onset of RP. Tumour biopsies were immunohistochemically stained for IL-6 and TGF-beta1, and immunoreactivity was quantified (grade 1-4). RP was evaluated according to the LENT-SOMA scale. Tumour response was assessed according to RECIST criteria by chest CT during follow-up. In our clinical study, 21 out of 52 patients developed RP (grade I/II/III/IV: 11/3/6/1 patients). Unexpectedly, cytokine plasma levels measured before and during RT did not correlate with RP incidence. In most patients, IL-6 and TGF-beta1 plasma levels were already elevated before RT and correlated significantly with the IL-6 and TGF-beta1 production in the corresponding tumour biopsies. Moreover, IL-6 and TGF-beta1 plasma levels measured during follow-up were significantly associated with the individual tumour responses of these patients. CONCLUSIONS/SIGNIFICANCE: The results of this study did not confirm that cytokine plasma levels, whether absolute or relative, can identify patients at risk for RP. In contrast, the clear correlations of IL-6 and TGF-beta1 plasma levels with the cytokine production in the corresponding tumour biopsies and with the individual tumour responses suggest that the tumour is the major source of circulating cytokines in patients receiving RT for advanced NSCLC.

    X-ray screening identifies active site and allosteric inhibitors of SARS-CoV-2 main protease

    The coronavirus disease (COVID-19) caused by SARS-CoV-2 is creating tremendous human suffering. To date, no effective drug is available to directly treat the disease. In a search for a drug against COVID-19, we have performed a high-throughput X-ray crystallographic screen of two repurposing drug libraries against the SARS-CoV-2 main protease (M^(pro)), which is essential for viral replication. In contrast to commonly applied X-ray fragment screening experiments with molecules of low complexity, our screen tested already approved drugs and drugs in clinical trials. From the three-dimensional protein structures, we identified 37 compounds that bind to M^(pro). In subsequent cell-based viral reduction assays, one peptidomimetic and six non-peptidic compounds showed antiviral activity at non-toxic concentrations. We identified two allosteric binding sites representing attractive targets for drug development against SARS-CoV-2.