
    Metal release from contaminated estuarine sediment under pH changes in the marine environment

    Contaminant release from estuarine sediment due to pH changes was investigated using a modified CEN/TS 14429 pH-dependence leaching test. The test was performed over the pH range 0-14 using deionised water and seawater as leaching solutions. The experimental conditions mimic different circumstances in the marine environment arising from global acidification, carbon dioxide (CO2) leakage from carbon capture and sequestration technologies, and accidental chemical spills in seawater. Leaching tests using seawater as the leaching solution show a better neutralisation capacity, giving slightly lower metal leaching concentrations than those using deionised water. The contaminated sediment shows a low base-neutralisation capacity (BNC(pH 12) = -0.44 eq/kg for deionised water and -1.38 eq/kg for seawater) but a high acid-neutralisation capacity for both deionised water (ANC(pH 4) = 3.58 eq/kg) and seawater (ANC(pH 4) = 3.97 eq/kg). Experimental results are modelled with the Visual MINTEQ geochemical software to predict metal release from the sediment in both leaching liquids. Surface adsorption to iron and aluminium (hydr)oxides was included for all studied elements. Accounting for metal binding to organic matter, through the NICA-Donnan model for lead and the Stockholm Humic Model for copper, further improves the prediction of metal release. The modelled curves match the experimental values and can therefore be useful for environmental impact assessment of seawater acidification. This work was supported by the Spanish Ministry of Economy and Competitiveness, Project No. CTM 2011-28437-C02-01, ERDF included. M. C. Martín-Torre was funded by the Spanish Ministry of Economy and Competitiveness through FPI Fellowship No. BES-2012-053816.
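
    To make the neutralisation-capacity figures above concrete, the sketch below shows how ANC and BNC values can be read off a pH-dependence leaching curve by interpolation. This is a minimal illustration with invented data and an assumed sign convention (acid additions positive, base additions negative, both in eq/kg dry sediment), not the procedure or code used in the study.

        import numpy as np

        # Invented pH-dependence leaching data: acid/base added vs. final pH.
        acid_added = np.array([-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0, 4.0])   # eq/kg dry sediment
        final_pH   = np.array([12.8, 11.9, 10.5, 8.1, 7.2, 6.3, 5.1, 3.8])

        def neutralisation_capacity(target_pH):
            """Interpolate the acid (or base) addition needed to reach target_pH."""
            order = np.argsort(final_pH)                 # np.interp needs ascending x values
            return float(np.interp(target_pH, final_pH[order], acid_added[order]))

        anc_pH4  = neutralisation_capacity(4.0)    # acid-neutralisation capacity at pH 4
        bnc_pH12 = neutralisation_capacity(12.0)   # base-neutralisation capacity at pH 12 (negative)
        print(f"ANC(pH 4)  ~ {anc_pH4:.2f} eq/kg")
        print(f"BNC(pH 12) ~ {bnc_pH12:.2f} eq/kg")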

    Nitrogen and Carbon Isotopic Dynamics of Subarctic Soils and Plants in Southern Yukon Territory and its Implications for Paleoecological and Paleodietary Studies

    We examine here the carbon and nitrogen isotopic compositions of bulk soils (8 topsoils and 7 subsoils, including two soil profiles) and five different plant parts of 79 C3 plants from two main functional groups (herbs and shrubs/subshrubs) collected at 18 locations in grasslands of southern Yukon Territory, Canada (eastern shoreline of Kluane Lake and the Whitehorse area). The Kluane Lake region in particular has previously been identified as an analogue for Late Pleistocene eastern Beringia. All topsoils have higher average total nitrogen δ15N and organic carbon δ13C than plants from the same sites, with a positive shift occurring with depth in the two soil profiles analyzed. The plants analyzed have an average whole-plant δ13C of −27.5 ± 1.2 ‰ and foliar δ13C of −28.0 ± 1.3 ‰, and an average whole-plant δ15N of −0.3 ± 2.2 ‰ and foliar δ15N of −0.6 ± 2.7 ‰. The plants showed less variability in δ13C than in δ15N. Their average δ13C, after correction for the Suess effect, should be suitable as a baseline for interpreting the diets of Late Pleistocene herbivores that lived in eastern Beringia. Water availability, nitrogen availability, spatial differences and intra-plant variability are important controls on δ15N of herbaceous plants in the study area. However, the wider range of δ15N, the larger number of factors that affect nitrogen isotopic composition, and the likelihood that these factors differed in the past limit the use of the modern N isotopic baseline for vegetation in paleodietary models for such ecosystems. That said, the positive correlation between foliar δ15N and N content shown for the modern plants could support the use of plant δ15N as an index of plant N content and therefore forage quality. The modern N isotopic baseline cannot be applied directly to the past, but it is a prerequisite for future efforts to detect shifts in N cycling and forage quality since the Late Pleistocene through comparison with fossil plants from the same region.
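
    For reference, the values above use standard per mil delta notation (δ13C relative to VPDB, δ15N relative to atmospheric N2), and the Suess-effect correction mentioned for δ13C amounts to subtracting the industrial-era shift in atmospheric CO2 δ13C. A minimal statement in LaTeX, with the correction term left symbolic because its size is not given in the abstract:

        \delta^{13}\mathrm{C} = \left( \frac{R_{\mathrm{sample}}}{R_{\mathrm{standard}}} - 1 \right) \times 1000, \qquad R = {}^{13}\mathrm{C}/{}^{12}\mathrm{C}

        \delta^{13}\mathrm{C}_{\mathrm{corrected}} = \delta^{13}\mathrm{C}_{\mathrm{measured}} - \Delta\delta^{13}\mathrm{C}_{\mathrm{atm}}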

    Prognostic model to predict postoperative acute kidney injury in patients undergoing major gastrointestinal surgery based on a national prospective observational cohort study.

    Background: Acute illness, existing co-morbidities and the surgical stress response can all contribute to postoperative acute kidney injury (AKI) in patients undergoing major gastrointestinal surgery. The aim of this study was to develop, prospectively, a pragmatic prognostic model to stratify patients according to their risk of developing AKI after major gastrointestinal surgery. Methods: This prospective multicentre cohort study included consecutive adults undergoing elective or emergency gastrointestinal resection, liver resection or stoma reversal in 2-week blocks over a continuous 3-month period. The primary outcome was the rate of AKI within 7 days of surgery. Bootstrap stability was used to select clinically plausible risk factors for inclusion in the model, and internal validation was carried out by bootstrapping. Results: A total of 4544 patients were included across 173 centres in the UK and Ireland. The overall rate of AKI was 14·2 per cent (646 of 4544) and the 30-day mortality rate was 1·8 per cent (84 of 4544). Stage 1 AKI was significantly associated with 30-day mortality (unadjusted odds ratio 7·61, 95 per cent c.i. 4·49 to 12·90; P < 0·001), with increasing odds of death at each higher AKI stage. Six variables were selected for inclusion in the prognostic model: age, sex, ASA grade, preoperative estimated glomerular filtration rate, planned open surgery, and preoperative use of either an angiotensin-converting enzyme inhibitor or an angiotensin receptor blocker. Internal validation demonstrated good model discrimination (c-statistic 0·65). Discussion: Following major gastrointestinal surgery, AKI occurred in one in seven patients. This preoperative prognostic model identified patients at high risk of postoperative AKI. Validation in an independent data set is required to ensure generalizability.
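
    To illustrate the general shape of the modelling described above, the sketch below fits a six-predictor logistic model and estimates an optimism-corrected c-statistic by bootstrapping. The data frame and column names are hypothetical, categorical predictors are assumed to be numerically encoded already, and this is not the authors' code or model specification.

        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        # Hypothetical column names; the six predictors mirror those listed above.
        PREDICTORS = ["age", "sex", "asa_grade", "preop_egfr", "open_surgery", "acei_or_arb"]
        OUTCOME = "aki_within_7d"

        def c_statistic(train, test):
            """Fit on `train`, return the c-statistic (ROC AUC) on `test`."""
            model = LogisticRegression(max_iter=1000)
            model.fit(train[PREDICTORS], train[OUTCOME])
            return roc_auc_score(test[OUTCOME], model.predict_proba(test[PREDICTORS])[:, 1])

        def optimism_corrected_c(df, n_boot=200, seed=0):
            """Apparent c-statistic minus the mean bootstrap optimism."""
            rng = np.random.default_rng(seed)
            apparent = c_statistic(df, df)
            optimism = [c_statistic(boot, boot) - c_statistic(boot, df)
                        for boot in (df.sample(frac=1.0, replace=True, random_state=rng)
                                     for _ in range(n_boot))]
            return apparent - float(np.mean(optimism))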

    TRY plant trait database - enhanced coverage and open access

    Plant traits—the morphological, anatomical, physiological, biochemical and phenological characteristics of plants—determine how plants respond to environmental factors, affect other trophic levels, and influence ecosystem properties and their benefits and detriments to people. Plant trait data thus represent the basis for a vast area of research spanning from evolutionary biology, community and functional ecology, to biodiversity conservation, ecosystem and landscape management, restoration, biogeography and earth system modelling. Since its foundation in 2007, the TRY database of plant traits has grown continuously. It now provides unprecedented data coverage under an open access data policy and is the main plant trait database used by the research community worldwide. Increasingly, the TRY database also supports new frontiers of trait‐based plant research, including the identification of data gaps and the subsequent mobilization or measurement of new data. To support this development, in this article we evaluate the extent of the trait data compiled in TRY and analyse emerging patterns of data coverage and representativeness. Best species coverage is achieved for categorical traits—almost complete coverage for ‘plant growth form’. However, most traits relevant for ecology and vegetation modelling are characterized by continuous intraspecific variation and trait–environmental relationships. These traits have to be measured on individual plants in their respective environment. Despite unprecedented data coverage, we observe a humbling lack of completeness and representativeness of these continuous traits in many aspects. We, therefore, conclude that reducing data gaps and biases in the TRY database remains a key challenge and requires a coordinated approach to data mobilization and trait measurements. This can only be achieved in collaboration with other initiatives
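
    As a small illustration of the kind of coverage analysis described above, the sketch below counts, for each trait, the share of species with at least one record in a long-format trait table. The file name and column names follow the general layout of TRY text exports but should be treated as assumptions here; this is not code from the TRY project itself.

        import pandas as pd

        # Hypothetical long-format export: one row per trait record.
        records = pd.read_csv("try_export.txt", sep="\t",
                              usecols=["AccSpeciesName", "TraitName"])
        records = records.dropna(subset=["TraitName"])       # keep trait records only

        n_species = records["AccSpeciesName"].nunique()
        coverage = (records.groupby("TraitName")["AccSpeciesName"]
                    .nunique()                                # species with >= 1 record per trait
                    .div(n_species)                           # fraction of all species
                    .sort_values(ascending=False))
        print(coverage.head(10))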

    Iron Behaving Badly: Inappropriate Iron Chelation as a Major Contributor to the Aetiology of Vascular and Other Progressive Inflammatory and Degenerative Diseases

    The production of peroxide and superoxide is an inevitable consequence of aerobic metabolism, and while these particular "reactive oxygen species" (ROSs) can exhibit a number of biological effects, they are not of themselves excessively reactive and thus are not especially damaging at physiological concentrations. However, their reactions with poorly liganded iron species can lead to the catalytic production of the very reactive and dangerous hydroxyl radical, which is exceptionally damaging and a major cause of chronic inflammation. We review the considerable and wide-ranging evidence for the involvement of this combination of (su)peroxide and poorly liganded iron in a large number of physiological and indeed pathological processes and inflammatory disorders, especially those involving the progressive degradation of cellular and organismal performance. These diseases share a great many similarities and thus might be considered to have a common cause (i.e. iron-catalysed free radical and especially hydroxyl radical generation). The studies reviewed include those focused on a series of cardiovascular, metabolic and neurological diseases, where iron can be found at the sites of plaques and lesions, as well as studies showing the significance of iron to aging and longevity. The effective chelation of iron by natural or synthetic ligands is thus of major physiological (and potentially therapeutic) importance. We also need to recognise that physiological observables are systems properties with multiple molecular causes, and studying them in isolation leads to inconsistent patterns of apparent causality when it is the simultaneous combination of multiple factors that is responsible. This explains, for instance, the decidedly mixed effects of antioxidants that have been observed, among other findings. Comment: 159 pages, including 9 figures and 2184 references.
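
    For context, the iron-dependent route from these comparatively mild oxygen species to the highly reactive hydroxyl radical discussed above is the classical Fenton / iron-catalysed Haber-Weiss chemistry (standard textbook reactions, not results specific to this review):

        Fe(II)  + H2O2    ->  Fe(III) + OH(-) + •OH      (Fenton reaction)
        Fe(III) + O2•(-)  ->  Fe(II)  + O2               (iron re-reduced by superoxide)
        net:  O2•(-) + H2O2  ->  O2 + OH(-) + •OH        (iron-catalysed Haber-Weiss reaction)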