
    Organism-sediment interactions govern post-hypoxia recovery of ecosystem functioning

    Hypoxia represents one of the major causes of biodiversity and ecosystem functioning loss in coastal waters. Since eutrophication-induced hypoxic events are becoming increasingly frequent and intense, understanding the response of ecosystems to hypoxia is of primary importance for predicting the stability of ecosystem functioning. Such ecological stability may greatly depend on the recovery patterns of communities and the return time of the system properties associated with these patterns. Here, we examined how the reassembly of a benthic community contributed to the recovery of ecosystem functioning following experimentally induced hypoxia in a tidal flat. We demonstrate that organism-sediment interactions that depend on organism size and relate to mobility traits and sediment reworking capacities are generally more important than recovering species richness in setting the return time of the measured sediment processes and properties. Specifically, increasing macrofauna bioturbation potential during community reassembly significantly contributed to the recovery of sediment processes and properties such as denitrification, bedload sediment transport, primary production and deep pore water ammonium concentration. This bioturbation potential was due to the replacement of the small-sized organisms that recolonised at early stages by large-sized bioturbating organisms, which had a disproportionately strong influence on the sediment. This study suggests that the complete recovery of organism-sediment interactions is a necessary condition for ecosystem functioning recovery, and that this process requires long periods after disturbance owing to the slow growth of juveniles into the adult stages involved in these interactions. Consequently, repeated episodes of disturbance at intervals shorter than the time needed for the system to fully recover organism-sediment interactions may greatly impair the resilience of ecosystem functioning.
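    A minimal sketch of the community bioturbation potential referred to above, assuming the widely used index of Solan et al. (2004), BPc = Σ √(Bi/Ai) · Ai · Mi · Ri; the two taxa and their trait scores are invented for illustration and are not the study's data:

```python
# Sketch of a community bioturbation potential (BPc) calculation, assuming the
# common index BPc = sum(sqrt(Bi/Ai) * Ai * Mi * Ri), where Bi is biomass,
# Ai abundance, Mi a mobility score and Ri a sediment-reworking score per taxon.
# All numbers below are invented for illustration.
import math

# taxon: (biomass g/m2, abundance ind/m2, mobility score 1-4, reworking score 1-5)
community = {
    "small early coloniser": (0.5, 400, 2, 2),
    "large bioturbator": (12.0, 30, 3, 4),
}

def bp_contribution(biomass, abundance, mobility, reworking):
    """Per-taxon contribution to the community bioturbation potential."""
    return math.sqrt(biomass / abundance) * abundance * mobility * reworking

for name, traits in community.items():
    print(f"{name}: {bp_contribution(*traits):.1f}")

bpc = sum(bp_contribution(*traits) for traits in community.values())
print(f"community BPc: {bpc:.1f}")
```

    Even with far lower abundance, the large-bodied taxon dominates BPc in this made-up example, which is the size effect the abstract describes.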

    Modeling denitrification in aquatic sediments

    Author Posting. © The Author(s), 2008. This is the author's version of the work. It is posted here by permission of Springer for personal use, not for redistribution. The definitive version was published in Biogeochemistry 93 (2009): 159-178, doi:10.1007/s10533-008-9270-z. Sediment denitrification is a major pathway of fixed nitrogen loss from aquatic systems. Due to technical difficulties in measuring this process and its spatial and temporal variability, estimates of local, regional and global denitrification have to rely on a combination of measurements and models. Here we review approaches to describing denitrification in aquatic sediments, ranging from mechanistic diagenetic models to empirical parameterizations of nitrogen fluxes across the sediment-water interface. We also present a compilation of denitrification measurements and ancillary data for different aquatic systems, ranging from freshwater to marine. Based on this data compilation we reevaluate published parameterizations of denitrification. We recommend that future models of denitrification use (1) a combination of mechanistic diagenetic models and measurements where bottom waters are temporally hypoxic or anoxic, and (2) the much simpler correlations between denitrification and sediment oxygen consumption for oxic bottom waters. For our data set, inclusion of bottom water oxygen and nitrate concentrations in a multivariate regression did not improve the statistical fit. Financial support for AEG to work on the manuscript came from NSF grant NSF-DEB-0423565. KF, DB and DDT acknowledge support from NOAA CHRP grant NA07NOS4780191.
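    For oxic bottom waters the recommended approach reduces to a simple correlation between denitrification and sediment oxygen consumption (SOC). A minimal sketch of such a fit, using invented flux values rather than the paper's data compilation or its published coefficients:

```python
# Sketch of the empirical approach for oxic bottom waters: ordinary least-squares
# regression of denitrification rate on sediment oxygen consumption (SOC).
# Flux values are invented for illustration only.
import numpy as np

soc = np.array([5.0, 10.0, 20.0, 35.0, 50.0])   # SOC, mmol O2 m-2 d-1
denit = np.array([0.6, 1.1, 2.3, 3.8, 5.6])     # denitrification, mmol N m-2 d-1

slope, intercept = np.polyfit(soc, denit, 1)    # denit ~ slope * soc + intercept
pred = slope * soc + intercept
r2 = 1 - np.sum((denit - pred) ** 2) / np.sum((denit - denit.mean()) ** 2)

print(f"slope = {slope:.3f} mmol N per mmol O2, intercept = {intercept:.3f}, R^2 = {r2:.3f}")
```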

    Benthic pH gradients across a range of shelf sea sediment types linked to sediment characteristics and seasonal variability

    This study used microelectrodes to record pH profiles in fresh shelf sea sediment cores collected across a range of different sediment types within the Celtic Sea. Spatial and temporal variability was captured during repeated measurements in 2014 and 2015. Concurrently recorded oxygen microelectrode profiles and other sedimentary parameters provide a detailed context for interpretation of the pH data. Clear differences in profiles were observed between sediment types, locations and seasons. Notably, very steep pH gradients exist within the surface sediments (10–20 mm), where decreases greater than 0.5 pH units were observed. Steep gradients were particularly apparent in fine cohesive sediments, less so in permeable sandier matrices. We hypothesise that the gradients are likely caused by aerobic organic matter respiration close to the sediment–water interface or oxidation of reduced species at the base of the oxic zone (NH4+, Mn2+, Fe2+, S−). Statistical analysis suggests the variability in the depth of the pH minima is controlled spatially by the oxygen penetration depth, and seasonally by the input and remineralisation of deposited organic phytodetritus. Below the pH minima the observed pH remained consistently low to maximum electrode penetration (ca. 60 mm), indicating an absence of sub-oxic processes generating H+ or balanced removal processes within this layer. Thus, a climatology of sediment surface porewater pH is provided against which to examine biogeochemical processes. This enhances our understanding of benthic pH processes, particularly in the context of human impacts, seabed integrity and future climate change, providing vital information for modelling the benthic response under future climate scenarios.
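    A minimal sketch of how the two quantities related in the statistical analysis, the depth of the pH minimum and the oxygen penetration depth, might be extracted from microelectrode profiles; the synthetic profiles and the 1 µmol/L oxygen threshold are assumptions for illustration, not the study's data or method:

```python
# Sketch: derive the pH-minimum depth and the oxygen penetration depth (OPD)
# from vertical microelectrode profiles. Profiles are synthetic and the
# 1 umol/L oxygen cut-off is an assumed detection threshold.
import numpy as np

depth_mm = np.arange(0.0, 60.0, 0.5)                 # electrode depth, mm
ph = 8.0 - 0.6 * (1.0 - np.exp(-depth_mm / 5.0))     # steep near-surface drop, then plateau
o2 = 250.0 * np.exp(-depth_mm / 2.0)                 # oxygen, umol/L

# shallowest depth at which the profile is within 0.01 pH units of its minimum
ph_min_depth = depth_mm[np.argmax(ph <= ph.min() + 0.01)]

# first depth at which oxygen falls below the assumed threshold
anoxic = np.where(o2 < 1.0)[0]
opd = depth_mm[anoxic[0]] if anoxic.size else float("nan")

print(f"pH minimum reached at ~{ph_min_depth:.1f} mm, oxygen penetration depth ~{opd:.1f} mm")
```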

    Global fire emissions buffered by the production of pyrogenic carbon

    Landscape fires burn 3–5 million km2 of the Earth’s surface annually. They emit 2.2 Pg of carbon per year to the atmosphere, but also convert a significant fraction of the burned vegetation biomass into pyrogenic carbon. Pyrogenic carbon can be stored in terrestrial and marine pools for centuries to millennia, and its production can therefore be considered a mechanism for long-term carbon sequestration. Pyrogenic carbon stocks and dynamics are not considered in global carbon cycle models, which leads to systematic errors in carbon accounting. Here we present a comprehensive dataset of pyrogenic carbon production factors from field and experimental fires and merge this with the Global Fire Emissions Database to quantify the global pyrogenic carbon production flux. We found that 256 (uncertainty range: 196–340) Tg of biomass carbon was converted annually into pyrogenic carbon between 1997 and 2016. Our central estimate equates to 12% of the annual carbon emitted globally by landscape fires, which indicates that their emissions are buffered by pyrogenic carbon production. We further estimate that cumulative pyrogenic carbon production has been 60 Pg since 1750, or 33–40% of the global biomass carbon lost through land use change in this period. Our results demonstrate that pyrogenic carbon production by landscape fires could be a significant, but overlooked, sink for atmospheric CO2.
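    The 12% buffering figure follows directly from the two central estimates quoted above; a quick arithmetic check:

```python
# Quick check of the buffering fraction using the abstract's central estimates.
fire_emissions_pg_c_per_yr = 2.2      # Pg C emitted to the atmosphere by landscape fires per year
pyc_production_tg_c_per_yr = 256.0    # Tg C converted to pyrogenic carbon per year (1997-2016)

fraction = (pyc_production_tg_c_per_yr / 1000.0) / fire_emissions_pg_c_per_yr  # Tg -> Pg
print(f"pyrogenic carbon production is {fraction:.0%} of annual fire carbon emissions")
# prints roughly 12%, matching the central estimate above
```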

    Perspectives and Integration in SOLAS Science

    Why a chapter on Perspectives and Integration in SOLAS Science in this book? SOLAS science by its nature deals with interactions that occur across a wide spectrum of time and space scales, involve gases and particles, span the ocean and the atmosphere, and cut across many disciplines, including chemistry, biology, optics, physics, mathematics, computing and socio-economics, and consequently with interactions between many different scientists and across scientific generations. This chapter provides a guide through the remarkable diversity of cross-cutting approaches and tools in the gigantic puzzle of the SOLAS realm. Here we overview the existing prime components of atmospheric and oceanic observing systems, with the acquisition of ocean–atmosphere observables either in situ or from satellites, the rich hierarchy of models to test our knowledge of Earth System functioning, and the tremendous efforts accomplished over the last decade within the COST Action 735 and SOLAS Integration project frameworks to understand, as best we can, the current physical and biogeochemical state of the atmosphere and ocean commons. A few SOLAS integrative studies illustrate the full meaning of interactions, paving the way for even tighter connections between thematic fields. Ultimately, SOLAS research will also develop with an enhanced consideration of societal demand while preserving fundamental research coherency. The exchange of energy, gases and particles across the air-sea interface is controlled by a variety of biological, chemical and physical processes that operate across broad spatial and temporal scales. These processes influence the composition, biogeochemical and chemical properties of both the oceanic and atmospheric boundary layers and ultimately shape the Earth system response to climate and environmental change, as detailed in the previous four chapters. In this cross-cutting chapter we present some of the SOLAS achievements over the last decade in terms of integration, upscaling observational information from process-oriented studies and expeditionary research with key tools such as remote sensing and modelling. Here we do not pretend to encompass the entire legacy of SOLAS efforts but rather offer a selective view of some of the major integrative SOLAS studies that combined available pieces of the immense jigsaw puzzle. These include, for instance, COST efforts to build up global climatologies of SOLAS-relevant parameters such as dimethyl sulphide, the interconnection between volcanic ash and the ecosystem response in the eastern subarctic North Pacific, the optimal strategy to derive basin-scale CO2 uptake with good precision, and a significant reduction of the uncertainties in sea-salt aerosol source functions. Predicting the future trajectory of Earth’s climate and habitability is the main task ahead. Some possible routes for the SOLAS scientific community to reach this overarching goal conclude the chapter.

    Personalization of medicine requires better observational evidence

    Rutger A Middelburg,¹,² M Sesmu Arbous,²,³ Judith G Middelburg,⁴ Johanna G van der Bom¹,²
    ¹Center for Clinical Transfusion Research, Sanquin Research, Leiden, the Netherlands; ²Department of Clinical Epidemiology, Leiden University Medical Center, Leiden, the Netherlands; ³Department of Intensive Care Medicine, Leiden University Medical Center, Leiden, the Netherlands; ⁴Department of Radiation Oncology, Erasmus University Medical Center, Rotterdam, the Netherlands
    Abstract: Evidence-based medicine has become associated with a preference for randomized trials. Randomization is a powerful tool against both known and unknown confounding. However, due to cost-induced constraints on size, randomized trials are seldom able to provide the subgroup analyses needed to gain much insight into effect modification. To apply results to an individual patient, effect modification needs to be considered. Results from randomized trials are therefore often difficult to apply in daily clinical practice. Confounding by indication, which randomization aims to prevent, is caused by more severely ill patients being more or less likely to be treated. Therefore, the prognostic indicators that physicians use to make treatment decisions become confounders. However, these same prognostic indicators are also effect modifiers. This is in fact exactly why they are relevant to decision-making. We use simple, fictive numerical examples to illustrate these concepts. We then argue that if we were to record all relevant variables, this would simultaneously solve the problem of confounding by indication and allow quantification of effect modification. It has previously been argued that it is practically more feasible to “simply” randomize treatment allocation than to adequately correct for confounding by indication. We argue that, in the current age of evidence-based medicine and highly regulated randomized trials, this balance has shifted. We therefore call for better observational clinical research. However, careless acceptance of results from poorly performed observational research can lead clinicians seriously astray. Therefore, a more interactive approach toward the medical literature might be needed, where more room is made for scientific discussion and interpretation of results, instead of one-way reporting.
    Keywords: treatment, personalized, effectiveness, effect modification, risk factors, confounding by indication
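    The "simple, fictive numerical examples" are not reproduced in this abstract; the sketch below is an invented illustration of the core point, namely that a prognostic indicator driving treatment allocation confounds a crude comparison while also modifying the treatment effect across severity strata:

```python
# Invented fictive example (not the authors' numbers): disease severity drives
# both treatment allocation and outcome risk, and also modifies the treatment
# effect. The crude comparison is confounded by indication; stratification
# recovers the severity-specific (effect-modified) risk ratios.

# stratum: (treated_events, treated_n, untreated_events, untreated_n)
strata = {
    "mild": (2, 100, 10, 400),     # few mild patients are treated
    "severe": (80, 400, 40, 100),  # most severe patients are treated
}

def risk(events, n):
    return events / n

# Crude (unstratified) comparison
t_events = sum(s[0] for s in strata.values())
t_n = sum(s[1] for s in strata.values())
u_events = sum(s[2] for s in strata.values())
u_n = sum(s[3] for s in strata.values())
print(f"crude risk ratio: {risk(t_events, t_n) / risk(u_events, u_n):.2f}")

# Stratified comparison: different risk ratios per stratum = effect modification
for name, (te, tn, ue, un) in strata.items():
    print(f"{name}: risk ratio {risk(te, tn) / risk(ue, un):.2f}")
```

    With these made-up counts the crude risk ratio suggests harm (about 1.6), while the stratified risk ratios show benefit of different size in mild and severe patients (0.8 and 0.5), i.e. effect modification.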

    Thrombocytopenia and bleeding in myelosuppressed transfusion-dependent patients: a simulation study exploring underlying mechanisms

    Rutger A Middelburg,¹,² Jean-Louis H Kerkhoffs,¹,³ Johanna G van der Bom¹,²
    ¹Center for Clinical Transfusion Research, Sanquin Research, Leiden, the Netherlands; ²Department of Clinical Epidemiology, Leiden University Medical Center, Leiden, the Netherlands; ³Department of Hematology, Hagaziekenhuis, Den Haag, the Netherlands
    Background: Hematology–oncology patients often become severely thrombocytopenic and receive prophylactic platelet transfusions when their platelet count drops below 10×10⁹ platelets/L. This so-called “platelet count trigger” of 10×10⁹ platelets/L is recommended because currently available evidence suggests this is the critical concentration at which bleeding risk starts to increase. Yet, exposure time and lag time may have biased the results of studies on the association between platelet counts and bleeding risks.
    Methods: We performed simulation studies to examine possible effects of exposure time and lag time on the findings of both randomized trials and observational data.
    Results: Exposure time and lag time reduced or even reversed the association between the risk of clinically relevant bleeding and platelet counts. The frequency of platelet count measurements influenced the observed bleeding risk at a given platelet count trigger. A transfusion trigger of 10×10⁹ platelets/L resulted in a severely distorted association, which closely resembled the association reported in the literature. At triggers of 0, 5, 10, and 20×10⁹ platelets/L the observed percentages of patients experiencing bleeding were 18, 19, 19, and 18%, respectively. A trigger of 30×10⁹ platelets/L showed an observed bleeding risk of 16%, and triggers of 40 and 50×10⁹ platelets/L both resulted in observed bleeding risks of 13%.
    Conclusion: The results of our simulation study show how minimal exposure times and lag times may have influenced the results of previous studies on platelet counts, transfusion strategies, and bleeding risk, and caution against the generally recommended universal trigger of 10×10⁹ platelets/L.
    Keywords: platelet transfusions, platelet counts, bleeding, simulation study, lag time, exposure time
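    The simulation model itself is not specified in the abstract; the sketch below is an invented, minimal illustration of the lag-time mechanism it describes, in which a bleed triggered at a low platelet count is only recorded on a later day, after a transfusion has already raised the count:

```python
# Invented minimal sketch (not the authors' model) of how a one-day lag between
# a low platelet count and the recording of a bleed can distort the apparent
# count-bleeding association under a transfusion-trigger policy.
import random

random.seed(1)

TRIGGER = 10.0     # transfuse when the morning count is below 10 x10^9/L
DAYS = 10
PATIENTS = 5000

def true_bleed_prob(count):
    # assumed "true" daily risk: rises steeply below 10 x10^9/L
    return 0.15 if count < 10.0 else 0.02

# observed bleeds and patient-days, bucketed by the count on the day the bleed is noticed
observed = {"<10": [0, 0], ">=10": [0, 0]}

for _ in range(PATIENTS):
    count = 30.0
    pending_bleed = False
    for _ in range(DAYS):
        bucket = "<10" if count < 10.0 else ">=10"
        observed[bucket][1] += 1
        if pending_bleed:                       # yesterday's bleed is recorded against today's count
            observed[bucket][0] += 1
        pending_bleed = random.random() < true_bleed_prob(count)
        if count < TRIGGER:                     # transfuse after the morning count
            count += 25.0
        count = max(count - random.uniform(5.0, 12.0), 0.0)  # overnight decline

for bucket, (bleeds, days) in observed.items():
    print(f"count {bucket}: observed bleed rate {bleeds / days:.3f} per patient-day")
```

    Under these assumed parameters, bleeds caused at counts below the trigger are recorded against higher post-transfusion counts, flattening or even reversing the apparent association.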