
    The impact of inter‐flood duration on non‐cohesive sediment bed stability

    Limited field and flume data suggest that both uniform and graded beds progressively stabilize when subjected to inter-flood flows, characterized by the absence of active bedload transport. Previous work has shown that the degree of bed stabilization scales with the duration of inter-flood flow; however, the sensitivity of this response to bed surface grain size distribution has not been explored. This article presents the first detailed comparison of the dependence of graded bed stability on inter-flood flow duration. Sixty discrete experiments, including repetitions, were undertaken using three grain size distributions of identical D50 (4.8 mm): near-uniform (σg = 1.13), unimodal (σg = 1.63) and bimodal (σg = 2.08). Each bed was conditioned for between 0 (benchmark) and 960 minutes by an antecedent shear stress below the entrainment threshold of the bed (τ*c50). The degree of bed stabilization was determined by measuring changes to critical entrainment thresholds and bedload flux characteristics. Results show that (i) increasing inter-flood duration from 0 to 960 minutes increases the average threshold shear stress of the D50 by up to 18%; (ii) bedload transport rates were reduced by up to 90% as inter-flood duration increased from 0 to 960 minutes; (iii) the rate of response of both critical shear stress and bedload transport rate to changes in inter-flood duration is non-linear and inversely proportional to antecedent duration; (iv) there is a grade-dependent response in critical shear stress, where the magnitude of response in uniform beds is up to twice that of the graded beds; and (v) there is a grade-dependent response in bedload transport rate, where the bimodal bed is the most responsive in terms of the magnitude of change. These advances underpin the development of more accurate predictions of both entrainment thresholds and bedload flux timing and magnitude, and have implications for the management of environmental flow design. © 2019 John Wiley & Sons, Ltd.
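    Finding (iii) above, that the rate of change in critical shear stress is inversely proportional to antecedent duration, implies an approximately logarithmic growth of the entrainment threshold with conditioning time. The sketch below illustrates only that implied functional form; the coefficients tau_c0, k and t0 are hypothetical and were chosen purely so that the 0-960 minute range reproduces an increase of roughly 18%, as in finding (i).

        import math

        def critical_shear_stress(t_min, tau_c0=1.0, k=0.026, t0=1.0):
            """Illustrative logarithmic stabilization curve.

            If d(tau_c)/dt is proportional to 1/t (finding iii), integrating gives
            tau_c(t) = tau_c0 + k * ln(t / t0) for t >= t0. All coefficient values
            here are hypothetical, not fitted to the experiments described above.
            """
            if t_min <= t0:
                return tau_c0
            return tau_c0 + k * math.log(t_min / t0)

        # Relative increase after 960 minutes of below-threshold conditioning flow
        increase = critical_shear_stress(960) / critical_shear_stress(0) - 1
        print(f"{increase:.0%}")  # ~18% with these hypothetical coefficients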

    Thermal energy transfer around buried pipe infrastructure

    Decarbonisation of heating is essential to meet national and international greenhouse gas emissions targets. This will require adoption of a range of solutions, including ground source heat pump and district heating technologies. A novel route to these solutions is the dual use of buried infrastructure for heat transfer and storage in addition to its primary function. Water supply and wastewater collection pipes may be well suited to thermal energy applications, as they are present in all urban areas in networks already in proximity to heat users. However, greater understanding of their potential interactions with surrounding heat sources and sinks is required before the energy potential of such buried pipe networks can be fully assessed. This paper presents an investigation into the thermal interactions associated with shallow, buried, water-filled pipes. Using the results of large-scale experiments and numerical simulation, it is shown that soil surface ambient conditions and adjacent pipes can both act as sources or sinks of heat. While conduction is the main mechanism of heat transfer in the soil directly surrounding any pipe, adjacent water-filled pipes may cause convection to become important locally. In the test case, the thermal sphere of influence of the water-filled pipe was also shown to be large, extending in excess of 4 m over a timescale of 4 months. Taken together, these points suggest that design and analysis approaches for using water supply and wastewater collection networks for heat exchange and storage require careful consideration of environmental interactions, of heat losses and gains to adjacent pipes or other infrastructure, and of in-ground conditions for a number of pipe diameters around any buried pipe.
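    As a rough plausibility check on the reported thermal sphere of influence of over 4 m in 4 months, the sketch below estimates a conductive penetration depth from an assumed soil thermal diffusivity. Both the diffusivity value and the 2*sqrt(alpha*t) length scale are illustrative assumptions, not parameters taken from the study.

        import math

        # Order-of-magnitude check on the ~4 m thermal sphere of influence.
        # alpha is an assumed typical thermal diffusivity for moist soil,
        # not a measured property from the experiments described above.
        alpha = 5e-7             # m^2/s (assumption)
        t = 4 * 30 * 24 * 3600   # roughly 4 months, in seconds

        # Characteristic conductive penetration depth, L ~ 2 * sqrt(alpha * t)
        L = 2 * math.sqrt(alpha * t)
        print(f"penetration depth ~ {L:.1f} m")  # ~4.6 m, the same order as the reported 4 m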

    Influence of short- and long-term processes on SAR11 communities in open ocean and coastal systems

    SAR11 bacteria dominate the surface ocean and are major players in converting fixed carbon back to atmospheric carbon dioxide. The SAR11 clade comprises niche-specialized ecotypes that display distinctive spatiotemporal transitions. We analyzed SAR11 ecotype seasonality in two long-term 16S rRNA amplicon time series representing different North Atlantic regimes: the Sargasso Sea (subtropical ocean gyre; BATS) and the temperate coastal Western English Channel (WEC). Using phylogenetically resolved amplicon sequence variants (ASVs), we evaluated seasonal environmental constraints on SAR11 ecotype periodicity. Despite large differences in temperature and nutrient availability between the two sites, SAR11 succession at both was defined by summer and winter clusters of ASVs. The summer cluster was dominated by ecotype Ia.3 at both sites. Winter clusters were dominated by ecotypes Ib and IIa.A at BATS and by Ia.1 and IIa.B at WEC. A 2-year weekly analysis within the WEC time series showed that the response of SAR11 communities to short-term environmental fluctuations was variable. In 2016, community shifts were abrupt and synchronized with environmental shifts. However, in 2015, changes were gradual and decoupled from environmental fluctuations, likely due to increased mixing from strong winds. We demonstrate that interannual weather variability disturbs the pace of SAR11 seasonal progression.
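    One common way to quantify the kind of seasonal periodicity described above is to regress an ASV's relative abundance on annual harmonics. The sketch below is a generic illustration of that approach using synthetic data; it is not the analysis pipeline used in the study.

        import numpy as np

        # Fit a single annual harmonic (sin/cos) to a synthetic weekly ASV series
        # with ordinary least squares; the amplitude measures seasonal strength.
        rng = np.random.default_rng(0)
        t_days = np.arange(0, 2 * 365, 7)  # two years of weekly sampling
        abundance = (0.3 + 0.2 * np.sin(2 * np.pi * t_days / 365)
                     + rng.normal(0, 0.05, t_days.size))

        X = np.column_stack([
            np.ones(t_days.size),
            np.sin(2 * np.pi * t_days / 365),
            np.cos(2 * np.pi * t_days / 365),
        ])
        coef, *_ = np.linalg.lstsq(X, abundance, rcond=None)
        amplitude = np.hypot(coef[1], coef[2])  # amplitude of the fitted annual cycle
        print(f"seasonal amplitude ~ {amplitude:.2f}")  # ~0.2, the value used to generate the data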

    The contribution of diet and genotype to iron status in women: a classical twin study

    This is the first published report examining the combined effect of diet and genotype on body iron content using a classical twin study design. The aim of this study was to determine the relative contribution of genetic and environmental factors in determining iron status. The population comprised 200 BMI- and age-matched pairs of healthy MZ and DZ twins, characterised for habitual diet and 15 iron-related candidate genetic markers. Variance components analysis demonstrated that the heritability of serum ferritin (SF) and soluble transferrin receptor was 44% and 54%, respectively. Measured single nucleotide polymorphisms explained 5%, and selected dietary factors 6%, of the variance in iron status; there was a negative association between calcium intake and both body iron (p = 0.02) and SF (p = 0.04).
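    The variance-components logic behind a heritability estimate such as the 44% reported for serum ferritin can be illustrated with Falconer's classical formulas, which compare MZ and DZ twin-pair correlations. The correlations below are hypothetical, chosen only so that the arithmetic yields a heritability of the same order as reported; the study itself used full variance components modelling.

        def falconer_ace(r_mz, r_dz):
            """Classical twin-study decomposition (Falconer's formulas).

            h2: additive genetic variance (heritability)
            c2: shared environmental variance
            e2: unique (non-shared) environmental variance
            """
            h2 = 2 * (r_mz - r_dz)
            c2 = 2 * r_dz - r_mz
            e2 = 1 - r_mz
            return h2, c2, e2

        # Hypothetical MZ/DZ correlations; the study's actual correlations are not
        # given in the abstract.
        h2, c2, e2 = falconer_ace(r_mz=0.62, r_dz=0.40)
        print(f"h2={h2:.2f}, c2={c2:.2f}, e2={e2:.2f}")  # h2=0.44, c2=0.18, e2=0.38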

    Dietary iron bioavailability: a simple model that can be used to derive country-specific dietary reference values for adult men and women

    Background: Reference intakes for iron are derived from physiological requirements, with an assumed value for dietary iron absorption. A new approach that estimates iron bioavailability from iron intake, status, and requirements was used to set European dietary reference values, but the values obtained cannot be used for low- and middle-income countries, where diets are very different. Objective: We aimed to test the feasibility of using the model developed from United Kingdom and Irish data to derive a value for dietary iron bioavailability in an African country, using data collected from women of child-bearing age in Benin. We also compared the effect of using estimates of iron losses made in the 1960s with more recent data for whole-body iron losses. Methods: Dietary iron intake and serum ferritin (SF), together with physiological requirements of iron, were entered into the predictive model to estimate percentage iron absorption from the diet at different levels of iron status. Results: The results obtained from the 2 different methods for calculating physiological iron requirements were similar, except at low SF concentrations. At an SF value of 30 Όg/L, predicted iron absorption from the African maize-based diet was 6%, compared with 18% from a Western diet, and it remained low until SF fell below 25 Όg/L. Conclusions: We used the model to estimate percentage dietary iron absorption in 30 Beninese women. The predicted values agreed with results from earlier single-meal isotope studies; therefore, we conclude that the model has potential for estimating dietary iron bioavailability in men and nonpregnant women consuming different diets in other countries.
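    The practical consequence of a 6% versus 18% bioavailability estimate is how it scales the dietary reference value, since reference intakes are derived by dividing the physiological (absorbed-iron) requirement by the assumed absorption. The sketch below shows that arithmetic; the absorbed-iron requirement used is an illustrative assumption, not a value from the paper.

        def dietary_reference_value(absorbed_requirement_mg, bioavailability):
            """Dietary iron needed for absorbed iron to meet the physiological requirement."""
            return absorbed_requirement_mg / bioavailability

        # Assumed illustrative absorbed-iron requirement for a woman of child-bearing
        # age; the paper derives its own requirements, which are not reproduced here.
        requirement = 1.4  # mg/day absorbed iron (assumption)

        for label, bioavailability in [("maize-based diet, SF = 30 ug/L", 0.06),
                                       ("Western diet, SF = 30 ug/L", 0.18)]:
            drv = dietary_reference_value(requirement, bioavailability)
            print(f"{label}: ~{drv:.0f} mg/day dietary iron")
        # ~23 mg/day at 6% bioavailability versus ~8 mg/day at 18%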

    A major genetic locus in Trypanosoma brucei is a determinant of host pathology

    The progression and variation of pathology during infections can be due to components of the host, of the pathogen, or of the interaction between them. The influence of host genetic variation on disease pathology during infections with trypanosomes has been well studied in recent years, but the role of parasite genetic variation has not been extensively studied. We have shown that there is parasite strain-specific variation in the level of splenomegaly and hepatomegaly in infected mice and have used a forward genetic approach to identify the parasite loci that determine this variation. This approach allowed us to dissect and identify the parasite loci that determine the complex phenotypes induced by infection. Using the available trypanosome genetic map, a major quantitative trait locus (QTL), named TbOrg1, was identified on T. brucei chromosome 3 (LOD = 7.2) that accounted for approximately two-thirds of the variance observed in each of two correlated phenotypes, splenomegaly and hepatomegaly, in the infected mice. In addition, a second locus, TbOrg2, was identified that contributed to splenomegaly, hepatomegaly and reticulocytosis. This is the first use of quantitative trait locus mapping in a diploid protozoan and shows that there are trypanosome genes that directly contribute to the progression of pathology during infections and, therefore, that parasite genetic variation can be a critical factor in disease outcome. The identification of these parasite loci is a first step towards identifying the genes responsible for these important traits and shows the power of genetic analysis as a tool for dissecting complex quantitative phenotypic traits.
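    For readers unfamiliar with LOD scores, the standard single-QTL relationship between a LOD score and the fraction of phenotypic variance explained is sketched below. The number of progeny in the cross is not reported in the abstract, so the counts used are purely illustrative.

        def variance_explained(lod, n):
            """Fraction of phenotypic variance explained by a single QTL.

            Uses the standard normal-model relationship R^2 = 1 - 10**(-2 * LOD / n),
            where n is the number of individuals in the mapping population.
            """
            return 1 - 10 ** (-2 * lod / n)

        # Illustrative only: the reported LOD of 7.2 with hypothetical progeny counts.
        for n in (30, 40, 50):
            print(f"n={n}: R^2 ~ {variance_explained(7.2, n):.2f}")
        # n=30 gives ~0.67, the same order as the two-thirds quoted above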

    Downregulation of Mcl-1 has anti-inflammatory pro-resolution effects and enhances bacterial clearance from the lung

    Phagocytes not only coordinate acute inflammation and host defense at mucosal sites, but also contribute to tissue damage. Respiratory infection causes a globally significant disease burden and frequently progresses to acute respiratory distress syndrome, a devastating inflammatory condition characterized by neutrophil recruitment and accumulation of protein-rich edema fluid that impairs lung function. We hypothesized that targeting the intracellular protein myeloid cell leukemia 1 (Mcl-1) with a cyclin-dependent kinase inhibitor (AT7519) or a flavone (wogonin) would accelerate neutrophil apoptosis and resolution of established inflammation, without detriment to bacterial clearance. Mcl-1 loss induced human neutrophil apoptosis but did not induce macrophage apoptosis or impair phagocytosis of apoptotic neutrophils. Neutrophil-dominant inflammation was modelled in mice with either endotoxin or bacteria (Escherichia coli). Downregulating inflammatory cell Mcl-1 had anti-inflammatory, pro-resolution effects, shortening the resolution interval (Ri) from 19 to 7 h and reducing organ dysfunction, with enhanced alveolar–capillary barrier integrity. Conversely, attenuating drug-induced Mcl-1 downregulation inhibited neutrophil apoptosis and delayed resolution of endotoxin-mediated lung inflammation. Importantly, manipulating lung inflammatory cell Mcl-1 also accelerated resolution of bacterial infection (Ri from 50 to 16 h), concurrent with enhanced bacterial clearance. Therefore, manipulating inflammatory cell Mcl-1 accelerates inflammation resolution without detriment to host defense against bacteria, and represents a target for treating infection-associated inflammation.
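    The resolution interval (Ri) quoted above is conventionally the time from peak inflammatory cell numbers to the point at which they fall to half of that peak. The sketch below computes Ri from a hypothetical neutrophil-count time course, not from the study's data.

        import numpy as np

        def resolution_interval(times_h, counts):
            """Resolution interval Ri = T50 - Tmax: time from the peak cell count to
            the first post-peak time at which counts reach 50% of the peak
            (linearly interpolated). The inputs used here are hypothetical."""
            times_h = np.asarray(times_h, dtype=float)
            counts = np.asarray(counts, dtype=float)
            i_max = int(np.argmax(counts))
            t_max, half = times_h[i_max], counts[i_max] / 2
            post_t, post_c = times_h[i_max:], counts[i_max:]
            if not np.any(post_c <= half):
                raise ValueError("counts never fall to half of the peak in the window")
            j = int(np.argmax(post_c <= half))
            t50 = np.interp(half, [post_c[j], post_c[j - 1]], [post_t[j], post_t[j - 1]])
            return t50 - t_max

        # Hypothetical airspace neutrophil counts (arbitrary units) over 48 h
        times = [0, 6, 12, 24, 36, 48]
        counts = [0.2, 3.0, 5.0, 4.0, 2.0, 1.0]
        print(f"Ri ~ {resolution_interval(times, counts):.0f} h")  # ~21 h for this series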

    Scientific opinion on health benefits of seafood (fish and shellfish) consumption in relation to health risks associated with exposure to methylmercury

    Following a request from the European Commission to address the risks and benefits of fish/seafood consumption with regard to relevant beneficial substances (e.g. nutrients such as n-3 long-chain polyunsaturated fatty acids) and the contaminant methylmercury, the Panel on Dietetic Products, Nutrition and Allergies (NDA) was asked to deliver a Scientific Opinion on the health benefits of seafood consumption in relation to the health risks associated with exposure to methylmercury. In the present Opinion, the NDA Panel has reviewed the role of seafood in European diets and evaluated the beneficial effects of seafood consumption in relation to the health outcomes and population subgroups identified as relevant for the assessment by the FAO/WHO Joint Expert Consultation on the Risks and Benefits of Fish Consumption and/or the EFSA Panel on Contaminants in the context of a risk assessment related to the presence of mercury and methylmercury in food. These included the effects of seafood consumption during pregnancy on functional outcomes of children's neurodevelopment and the effects of seafood consumption on cardiovascular disease risk in adults. The Panel concluded that consumption of about 1-2 servings of seafood per week, and up to 3-4 servings per week during pregnancy, has been associated with better functional outcomes of neurodevelopment in children compared with no consumption of seafood. Such amounts have also been associated with a lower risk of coronary heart disease mortality in adults and are compatible with current intakes and recommendations in most of the European countries considered. These associations refer to seafood per se and include the beneficial and adverse effects of nutrients and non-nutrients (including contaminants such as methylmercury) contained in seafood. No additional benefits on neurodevelopmental outcomes and no benefit on coronary heart disease mortality risk might be expected at higher intakes.

    Essential versus accessory aspects of cell death: recommendations of the NCCD 2015

    Cells exposed to extreme physicochemical or mechanical stimuli die in an uncontrollable manner, as a result of their immediate structural breakdown. Such an unavoidable variant of cellular demise is generally referred to as ‘accidental cell death’ (ACD). In most settings, however, cell death is initiated by a genetically encoded apparatus, correlating with the fact that its course can be altered by pharmacologic or genetic interventions. ‘Regulated cell death’ (RCD) can occur as part of physiologic programs or can be activated once adaptive responses to perturbations of the extracellular or intracellular microenvironment fail. The biochemical phenomena that accompany RCD may be harnessed to classify it into a few subtypes, which often (but not always) exhibit stereotyped morphologic features. Nonetheless, efficiently inhibiting the processes that are commonly thought to cause RCD, such as the activation of executioner caspases in the course of apoptosis, does not exert true cytoprotective effects in the mammalian system, but simply alters the kinetics of cellular demise while shifting its morphologic and biochemical correlates. Conversely, bona fide cytoprotection can be achieved by inhibiting the transduction of lethal signals in the early phases of the process, when adaptive responses are still operational. Thus, the mechanisms that truly execute RCD may be less understood, less inhibitable and perhaps more homogeneous than previously thought. Here, the Nomenclature Committee on Cell Death formulates a set of recommendations to help scientists and researchers discriminate between essential and accessory aspects of cell death.
