77 research outputs found

    A global perspective on the trophic geography of sharks

    Sharks are a diverse group of mobile predators that forage across varied spatial scales and have the potential to influence food web dynamics. The ecological consequences of recent declines in shark biomass may extend across broader geographic ranges if shark taxa display common behavioural traits. By tracking the original site of photosynthetic fixation of carbon atoms that were ultimately assimilated into muscle tissues of 5,394 sharks from 114 species, we identify globally consistent biogeographic traits in trophic interactions between sharks found in different habitats. We show that populations of shelf-dwelling sharks derive a substantial proportion of their carbon from regional pelagic sources, but contain individuals that forage within additional isotopically diverse local food webs, such as those supported by terrestrial plant sources, benthic production and macrophytes. In contrast, oceanic sharks seem to use carbon derived from between 30° and 50° of latitude. Global-scale compilations of stable isotope data combined with biogeochemical modelling generate hypotheses regarding animal behaviours that can be tested with other methodological approaches. This research was conducted as part of C.S.B.’s Ph.D. dissertation, which was funded by the University of Southampton and NERC (NE/L50161X/1), and through a NERC Grant-in-Kind from the Life Sciences Mass Spectrometry Facility (LSMSF; EK267-03/16). We thank A. Bates, D. Sims, F. Neat, R. McGill and J. Newton for their analytical contributions and comments on the manuscripts. Peer reviewed

    Erratum to: 36th International Symposium on Intensive Care and Emergency Medicine

    [This corrects the article DOI: 10.1186/s13054-016-1208-6.]

    Effect of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker initiation on organ support-free days in patients hospitalized with COVID-19

    IMPORTANCE Overactivation of the renin-angiotensin system (RAS) may contribute to poor clinical outcomes in patients with COVID-19. OBJECTIVE To determine whether angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) initiation improves outcomes in patients hospitalized for COVID-19. DESIGN, SETTING, AND PARTICIPANTS In an ongoing, adaptive platform randomized clinical trial, 721 critically ill and 58 non–critically ill hospitalized adults were randomized to receive an RAS inhibitor or control between March 16, 2021, and February 25, 2022, at 69 sites in 7 countries (final follow-up on June 1, 2022). INTERVENTIONS Patients were randomized to receive open-label initiation of an ACE inhibitor (n = 257), ARB (n = 248), ARB in combination with DMX-200 (a chemokine receptor-2 inhibitor; n = 10), or no RAS inhibitor (control; n = 264) for up to 10 days. MAIN OUTCOMES AND MEASURES The primary outcome was organ support–free days, a composite of hospital survival and days alive without cardiovascular or respiratory organ support through 21 days. The primary analysis used a Bayesian cumulative logistic model. Odds ratios (ORs) greater than 1 represent improved outcomes. RESULTS On February 25, 2022, enrollment was discontinued due to safety concerns. Among 679 critically ill patients with available primary outcome data, the median age was 56 years and 239 participants (35.2%) were women. Median (IQR) organ support–free days among critically ill patients was 10 (–1 to 16) in the ACE inhibitor group (n = 231), 8 (–1 to 17) in the ARB group (n = 217), and 12 (0 to 17) in the control group (n = 231) (median adjusted odds ratios of 0.77 [95% Bayesian credible interval, 0.58-1.06] for improvement for ACE inhibitor and 0.76 [95% credible interval, 0.56-1.05] for ARB compared with control). The posterior probabilities that ACE inhibitors and ARBs worsened organ support–free days compared with control were 94.9% and 95.4%, respectively. 
Hospital survival occurred in 166 of 231 critically ill participants (71.9%) in the ACE inhibitor group, 152 of 217 (70.0%) in the ARB group, and 182 of 231 (78.8%) in the control group (posterior probabilities that ACE inhibitor and ARB worsened hospital survival compared with control were 95.3% and 98.1%, respectively). CONCLUSIONS AND RELEVANCE In this trial, among critically ill adults with COVID-19, initiation of an ACE inhibitor or ARB did not improve, and likely worsened, clinical outcomes. TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT0273570
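The reported posterior probabilities can be roughly sanity-checked from the published credible intervals. A minimal sketch, assuming (purely as an illustration, not the trial's actual Bayesian model) a lognormal posterior for the odds ratio whose median and 95% credible interval match the reported figures:

```python
from math import erf, log, sqrt

def prob_or_below_1(median_or, ci_low, ci_high):
    """Approximate P(OR < 1), i.e. probability of harm, from a lognormal
    posterior whose 2.5th/97.5th percentiles match the reported 95% CrI."""
    mu = log(median_or)                      # centre on the reported median
    sigma = (log(ci_high) - log(ci_low)) / (2 * 1.96)
    z = (0 - mu) / sigma                     # log(1) = 0 is the null value
    return 0.5 * (1 + erf(z / sqrt(2)))      # standard normal CDF

# ACE inhibitor arm: median OR 0.77, 95% CrI 0.58-1.06 (OR > 1 = benefit)
p_harm = prob_or_below_1(0.77, 0.58, 1.06)
print(round(p_harm, 3))  # ≈ 0.955, close to the reported 94.9%
```

The small discrepancy versus the published 94.9% reflects that the actual posterior came from a covariate-adjusted cumulative logistic model, not a symmetric lognormal approximation.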

    Development and Validation of a Risk Score for Chronic Kidney Disease in HIV Infection Using Prospective Cohort Data from the D:A:D Study

    Ristola M. is a member of the D:A:D Study Group, Royal Free Hospital Clinic Cohort, INSIGHT Study Group, SMART Study Group, and ESPRIT Study Group. Background Chronic kidney disease (CKD) is a major health issue for HIV-positive individuals, associated with increased morbidity and mortality. Development and implementation of a risk score model for CKD would allow comparison of the risks and benefits of adding potentially nephrotoxic antiretrovirals to a treatment regimen and would identify those at greatest risk of CKD. The aims of this study were to develop a simple, externally validated, and widely applicable long-term risk score model for CKD in HIV-positive individuals that can guide decision making in clinical practice. Methods and Findings A total of 17,954 HIV-positive individuals from the Data Collection on Adverse Events of Anti-HIV Drugs (D:A:D) study with >= 3 estimated glomerular filtration rate (eGFR) values after 1 January 2004 were included. Baseline was defined as the first eGFR > 60 ml/min/1.73 m2 after 1 January 2004; individuals with exposure to tenofovir, atazanavir, atazanavir/ritonavir, lopinavir/ritonavir, or other boosted protease inhibitors before baseline were excluded. CKD was defined as confirmed (>3 mo apart) eGFR <= 60 ml/min/1.73 m2. In the D:A:D study, 641 individuals developed CKD during 103,185 person-years of follow-up (PYFU; incidence 6.2/1,000 PYFU, 95% CI 5.7-6.7; median follow-up 6.1 y, range 0.3-9.1 y). Older age, intravenous drug use, hepatitis C coinfection, lower baseline eGFR, female gender, lower CD4 count nadir, hypertension, diabetes, and cardiovascular disease (CVD) predicted CKD. The adjusted incidence rate ratios of these nine categorical variables were scaled and summed to create the risk score. The median risk score at baseline was -2 (interquartile range -4 to 2). There was a 1:393 chance of developing CKD in the next 5 y in the low risk group, with progressively higher risks in the medium and high risk groups (high risk: risk score >= 5, 505 events). 
Number needed to harm (NNTH) at 5 y when starting unboosted atazanavir or lopinavir/ritonavir among those with a low risk score was 1,702 (95% CI 1,166-3,367); NNTH was 202 (95% CI 159-278) and 21 (95% CI 19-23), respectively, for those with a medium and high risk score. NNTH was 739 (95% CI 506-1,462), 88 (95% CI 69-121), and 9 (95% CI 8-10) for those with a low, medium, and high risk score, respectively, starting tenofovir, atazanavir/ritonavir, or another boosted protease inhibitor. The Royal Free Hospital Clinic Cohort included 2,548 individuals, of whom 94 developed CKD (3.7%) during 18,376 PYFU (median follow-up 7.4 y, range 0.3-12.7 y). Of 2,013 individuals included from the SMART/ESPRIT control arms, 32 developed CKD (1.6%) during 8,452 PYFU (median follow-up 4.1 y, range 0.6-8.1 y). External validation showed that the risk score predicted well in these cohorts. Limitations of this study included limited data on race and no information on proteinuria. Conclusions Both traditional and HIV-related risk factors were predictive of CKD. These factors were used to develop an externally validated risk score for CKD in HIV infection that has direct clinical relevance, allowing patients and clinicians to weigh the benefits of certain antiretrovirals against the risk of CKD and to identify those at greatest risk of CKD. Peer reviewed
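The NNTH figures above follow the usual arithmetic: the absolute risk increase is the baseline 5-y risk multiplied by the excess relative risk of the drug, and NNTH is its reciprocal. A hedged sketch; the rate ratio of 1.25 and the medium-group baseline risk below are illustrative assumptions, not figures from the study:

```python
def nnth(baseline_risk, rate_ratio):
    """Number needed to harm = 1 / absolute risk increase, where the
    increase is baseline_risk * (rate_ratio - 1)."""
    return 1 / (baseline_risk * (rate_ratio - 1))

# Baseline 5-y CKD risks: low ~1:393 (from the abstract), high ~1:6
# (implied by the abstract's NNTH figures); medium 1:47 is hypothetical.
# The drug rate ratio of 1.25 is likewise a hypothetical placeholder.
for label, risk in [("low", 1/393), ("medium", 1/47), ("high", 1/6)]:
    print(label, round(nnth(risk, 1.25)))  # low 1572, medium 188, high 24
```

With these assumed inputs the sketch lands in the same ballpark as the published NNTHs (1,702 / 202 / 21 for unboosted atazanavir or lopinavir/ritonavir), which is the expected behaviour: NNTH shrinks rapidly as baseline risk rises.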

    Iron deficiency and risk factors for lower iron stores in 6-24-month-old New Zealanders.

    OBJECTIVES: To determine the prevalence of biochemical iron deficiency and identify factors associated with ferritin levels among 6-24-month-old urban South Island New Zealand children. DESIGN: Cross-sectional survey conducted from May 1998 to March 1999. SETTING: The cities of Christchurch, Dunedin and Invercargill. SUBJECTS: A total of 323 randomly selected 6-24-month-old children participated (response rate 61%), of whom 263 provided a blood sample. METHODS: A complete blood cell count, zinc protoporphyrin, serum ferritin and C-reactive protein were measured on nonfasting venipuncture blood samples; 3-day weighed food records and general questionnaire data were also collected. RESULTS: Among children without elevated C-reactive protein, sex (girls > boys), ethnicity (Caucasian > non-Caucasian), weight-for-age percentiles (negative) and birth weight (positive) were associated with ferritin after adjusting for infection and socioeconomic status. When current consumption of iron fortified formula and >500 ml of cows' milk per day were included, these were associated with a 22% increase and 25% decrease in ferritin, respectively (R2=0.28). CONCLUSIONS: The presence of suboptimal iron status (29%) among young New Zealand children is cause for concern, even though severe iron deficiency is rare, because children with marginal iron status are at risk of developing severe iron deficiency if exposed to a physiological challenge.
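The "22% increase" and "25% decrease" phrasing is how coefficients from a regression on log-transformed ferritin are typically reported (that the analysis was on the log scale is an assumption here, stated for illustration): a coefficient b corresponds to a 100*(exp(b)-1)% change in geometric-mean ferritin.

```python
from math import exp, log

def pct_change(b):
    """Percent change in a log-transformed outcome (e.g. ferritin)
    per unit of the predictor, given regression coefficient b."""
    return 100 * (exp(b) - 1)

# Hypothetical coefficients that would reproduce the reported effects:
print(round(pct_change(log(1.22))))   # 22  -> ~22% higher ferritin
print(round(pct_change(log(0.75))))   # -25 -> ~25% lower ferritin
```

Note the asymmetry of the log scale: a +22% and a -25% effect have different absolute coefficients (ln 1.22 ≈ 0.199 vs ln 0.75 ≈ -0.288).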