Natural and experimental evolution of sexual conflict within Caenorhabditis nematodes
BACKGROUND: Although males and females need one another to reproduce, they often have different reproductive interests, which can lead to conflict between the sexes. The intensity and frequency of male-male competition for fertilization opportunities is thought to be an important contributor to this conflict. The nematode genus Caenorhabditis provides an opportunity to test this hypothesis because the frequency of males varies widely among species with different mating systems. RESULTS: We find evidence of strong inter- and intra-sexual conflict within C. remanei, a dioecious species composed of equal frequencies of males and females. In particular, some C. remanei males greatly reduce female lifespan following mating, and their sperm have a strong competitive advantage over the sperm of other males. In contrast, our results suggest that both types of conflict have been greatly reduced within C. elegans, an androdioecious species composed of self-fertilizing hermaphrodites and rare males. Using experimental evolution in mutant C. elegans populations in which sperm production is blocked in hermaphrodites (effectively converting them to females), we find that the consequences of sexual conflict observed within C. remanei evolve rapidly within C. elegans populations experiencing high levels of male-male competition. CONCLUSIONS: Together, these complementary data sets support the hypothesis that the intensity of intersexual conflict varies with the intensity of competition among males, and that male-induced collateral damage to mates can evolve very rapidly within populations.
Effect of Hydrocortisone on Mortality and Organ Support in Patients With Severe COVID-19: The REMAP-CAP COVID-19 Corticosteroid Domain Randomized Clinical Trial.
Importance: Evidence regarding corticosteroid use for severe coronavirus disease 2019 (COVID-19) is limited. Objective: To determine whether hydrocortisone improves outcome for patients with severe COVID-19. Design, Setting, and Participants: An ongoing adaptive platform trial testing multiple interventions within multiple therapeutic domains, for example, antiviral agents, corticosteroids, or immunoglobulin. Between March 9 and June 17, 2020, 614 adult patients with suspected or confirmed COVID-19 were enrolled and randomized within at least 1 domain following admission to an intensive care unit (ICU) for respiratory or cardiovascular organ support at 121 sites in 8 countries. Of these, 403 were randomized to open-label interventions within the corticosteroid domain. The domain was halted after results from another trial were released. Follow-up ended August 12, 2020. Interventions: The corticosteroid domain randomized participants to a fixed 7-day course of intravenous hydrocortisone (50 mg or 100 mg every 6 hours) (n = 143), a shock-dependent course (50 mg every 6 hours when shock was clinically evident) (n = 152), or no hydrocortisone (n = 108). Main Outcomes and Measures: The primary end point was organ support-free days (days alive and free of ICU-based respiratory or cardiovascular support) within 21 days, where patients who died were assigned -1 day. The primary analysis was a bayesian cumulative logistic model that included all patients enrolled with severe COVID-19, adjusting for age, sex, site, region, time, assignment to interventions within other domains, and domain and intervention eligibility. Superiority was defined as the posterior probability of an odds ratio greater than 1 (threshold for trial conclusion of superiority >99%). Results: After excluding 19 participants who withdrew consent, there were 384 patients (mean age, 60 years; 29% female) randomized to the fixed-dose (n = 137), shock-dependent (n = 146), and no (n = 101) hydrocortisone groups; 379 (99%) completed the study and were included in the analysis. The mean age for the 3 groups ranged between 59.5 and 60.4 years; most patients were male (range, 70.6%-71.5%); mean body mass index ranged between 29.7 and 30.9; and the proportion of patients receiving mechanical ventilation ranged between 50.0% and 63.5%. For the fixed-dose, shock-dependent, and no hydrocortisone groups, respectively, the median organ support-free days were 0 (IQR, -1 to 15), 0 (IQR, -1 to 13), and 0 (IQR, -1 to 11) days (reflecting mortality rates of 30%, 26%, and 33%, and median organ support-free days among survivors of 11.5, 9.5, and 6). The median adjusted odds ratio and bayesian probability of superiority were 1.43 (95% credible interval, 0.91-2.27) and 93% for fixed-dose hydrocortisone, respectively, and were 1.22 (95% credible interval, 0.76-1.94) and 80% for shock-dependent hydrocortisone compared with no hydrocortisone. Serious adverse events were reported in 4 (3%), 5 (3%), and 1 (1%) patients in the fixed-dose, shock-dependent, and no hydrocortisone groups, respectively. Conclusions and Relevance: Among patients with severe COVID-19, treatment with a 7-day fixed-dose course of hydrocortisone or shock-dependent dosing of hydrocortisone, compared with no hydrocortisone, resulted in 93% and 80% probabilities of superiority with regard to the odds of improvement in organ support-free days within 21 days. However, the trial was stopped early and no treatment strategy met prespecified criteria for statistical superiority, precluding definitive conclusions.
Trial Registration: ClinicalTrials.gov Identifier: NCT02735707
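The decision rule described above reduces to a single posterior quantity: the probability that the adjusted odds ratio exceeds 1, with superiority declared only above 99%. Below is a minimal sketch of that rule, assuming the posterior for the log odds ratio can be summarised by normal draws; the normal approximation, the seed, and the draw count are illustrative assumptions, not the trial's actual cumulative logistic model.

```python
# Sketch of the Bayesian superiority rule: the posterior probability
# that the odds ratio exceeds 1, estimated as the fraction of posterior
# draws with log(OR) > 0. The normal posterior below is a hypothetical
# stand-in calibrated to the reported fixed-dose result (median OR 1.43,
# 95% credible interval about 0.91-2.27); it is not trial data.
import numpy as np

rng = np.random.default_rng(0)

def prob_superiority(log_or_draws):
    """Fraction of posterior draws in which the odds ratio exceeds 1."""
    return float(np.mean(log_or_draws > 0))

# Hypothetical posterior draws for the log odds ratio.
log_or = rng.normal(loc=np.log(1.43), scale=0.23, size=10_000)

print(f"Posterior probability OR > 1: {prob_superiority(log_or):.0%}")
# Prints roughly 93%, matching the reported probability of superiority,
# which falls short of the >99% threshold prespecified by the trial.
```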
Effect of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker initiation on organ support-free days in patients hospitalized with COVID-19
IMPORTANCE Overactivation of the renin-angiotensin system (RAS) may contribute to poor clinical outcomes in patients with COVID-19.
OBJECTIVE To determine whether angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) initiation improves outcomes in patients hospitalized for COVID-19.
DESIGN, SETTING, AND PARTICIPANTS In an ongoing, adaptive platform randomized clinical trial, 721 critically ill and 58 non–critically ill hospitalized adults were randomized to receive an RAS inhibitor or control between March 16, 2021, and February 25, 2022, at 69 sites in 7 countries (final follow-up on June 1, 2022).
INTERVENTIONS Patients were randomized to receive open-label initiation of an ACE inhibitor (n = 257), ARB (n = 248), ARB in combination with DMX-200 (a chemokine receptor-2 inhibitor; n = 10), or no RAS inhibitor (control; n = 264) for up to 10 days.
MAIN OUTCOMES AND MEASURES The primary outcome was organ support–free days, a composite of hospital survival and days alive without cardiovascular or respiratory organ support through 21 days. The primary analysis was a bayesian cumulative logistic model. Odds ratios (ORs) greater than 1 represent improved outcomes.
RESULTS On February 25, 2022, enrollment was discontinued due to safety concerns. Among 679 critically ill patients with available primary outcome data, the median age was 56 years and 239 participants (35.2%) were women. Median (IQR) organ support–free days among critically ill patients was 10 (–1 to 16) in the ACE inhibitor group (n = 231), 8 (–1 to 17) in the ARB group (n = 217), and 12 (0 to 17) in the control group (n = 231) (median adjusted odds ratios of 0.77 [95% bayesian credible interval, 0.58-1.06] for improvement for ACE inhibitor and 0.76 [95% credible interval, 0.56-1.05] for ARB compared with control). The posterior probabilities that ACE inhibitors and ARBs worsened organ support–free days compared with control were 94.9% and 95.4%, respectively. Hospital survival occurred in 166 of 231 critically ill participants (71.9%) in the ACE inhibitor group, 152 of 217 (70.0%) in the ARB group, and 182 of 231 (78.8%) in the control group (posterior probabilities that ACE inhibitor and ARB worsened hospital survival compared with control were 95.3% and 98.1%, respectively).
CONCLUSIONS AND RELEVANCE In this trial, among critically ill adults with COVID-19, initiation of an ACE inhibitor or ARB did not improve, and likely worsened, clinical outcomes.
TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT02735707
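Both REMAP-CAP reports above score patients on the same composite endpoint: organ support-free days through day 21, with death assigned -1 so that it ranks below any survivor's score. The sketch below shows one way such a score could be constructed from patient-level records; the Patient type, its fields, and all values are invented for illustration and are not trial data or trial code.

```python
# Illustrative construction of the organ support-free days (OSFD) score:
# -1 for death, otherwise days within the 21-day window spent alive and
# free of cardiovascular or respiratory organ support.
from dataclasses import dataclass

@dataclass
class Patient:
    died: bool          # died within the follow-up period
    support_days: int   # days on organ support within the window

def organ_support_free_days(p: Patient, window: int = 21) -> int:
    """Score a patient: death ranks worse than any number of support days."""
    if p.died:
        return -1
    return window - min(p.support_days, window)

patients = [
    Patient(died=True, support_days=9),    # scores -1
    Patient(died=False, support_days=21),  # survived, supported throughout: 0
    Patient(died=False, support_days=5),   # scores 16
]
print([organ_support_free_days(p) for p in patients])  # [-1, 0, 16]
# The trials then model these ordered scores with a Bayesian cumulative
# logistic regression, in which an odds ratio above 1 indicates a shift
# toward more organ support-free days.
```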
Global burden of 288 causes of death and life expectancy decomposition in 204 countries and territories and 811 subnational locations, 1990–2021: a systematic analysis for the Global Burden of Disease Study 2021
BACKGROUND Regular, detailed reporting on population health by underlying cause of death is fundamental for public health decision making. Cause-specific estimates of mortality and the subsequent effects on life expectancy worldwide are valuable metrics to gauge progress in reducing mortality rates. These estimates are particularly important following large-scale mortality spikes, such as the COVID-19 pandemic. When systematically analysed, mortality rates and life expectancy allow comparisons of the consequences of causes of death globally and over time, providing a nuanced understanding of the effect of these causes on global populations. METHODS The Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2021 cause-of-death analysis estimated mortality and years of life lost (YLLs) from 288 causes of death by age-sex-location-year in 204 countries and territories and 811 subnational locations for each year from 1990 until 2021. The analysis used 56 604 data sources, including data from vital registration and verbal autopsy as well as surveys, censuses, surveillance systems, and cancer registries, among others. As with previous GBD rounds, cause-specific death rates for most causes were estimated using the Cause of Death Ensemble model (a modelling tool developed for GBD to assess the out-of-sample predictive validity of different statistical models and covariate permutations and to combine those results into cause-specific mortality estimates), with alternative strategies adapted to model causes with insufficient data, substantial changes in reporting over the study period, or unusual epidemiology. YLLs were computed as the product of the number of deaths for each cause-age-sex-location-year and the standard life expectancy at each age. As part of the modelling process, uncertainty intervals (UIs) were generated using the 2·5th and 97·5th percentiles from a 1000-draw distribution for each metric. We decomposed life expectancy by cause of death, location, and year to show cause-specific effects on life expectancy from 1990 to 2021. We also used the coefficient of variation and the fraction of population affected by 90% of deaths to highlight concentrations of mortality. Findings are reported in counts and age-standardised rates. Methodological improvements for cause-of-death estimates in GBD 2021 include the expansion of the under-5-years age group to include four new age groups, enhanced methods to account for stochastic variation in sparse data, and the inclusion of COVID-19 and other pandemic-related mortality (which comprises excess mortality associated with the pandemic, excluding COVID-19, lower respiratory infections, measles, malaria, and pertussis). For this analysis, 199 new country-years of vital registration cause-of-death data, 5 country-years of surveillance data, 21 country-years of verbal autopsy data, and 94 country-years of other data types were added to those used in previous GBD rounds. FINDINGS The leading causes of age-standardised deaths globally were the same in 2019 as they were in 1990; in descending order, these were ischaemic heart disease, stroke, chronic obstructive pulmonary disease, and lower respiratory infections. In 2021, however, COVID-19 replaced stroke as the second-leading age-standardised cause of death, with 94·0 deaths (95% UI 89·2-100·0) per 100 000 population. The COVID-19 pandemic shifted the rankings of the leading five causes, lowering stroke to the third-leading and chronic obstructive pulmonary disease to the fourth-leading position.
In 2021, the highest age-standardised death rates from COVID-19 occurred in sub-Saharan Africa (271·0 deaths [250·1-290·7] per 100 000 population) and Latin America and the Caribbean (195·4 deaths [182·1-211·4] per 100 000 population). The lowest age-standardised death rates from COVID-19 were in the high-income super-region (48·1 deaths [47·4-48·8] per 100 000 population) and southeast Asia, east Asia, and Oceania (23·2 deaths [16·3-37·2] per 100 000 population). Globally, life expectancy steadily improved between 1990 and 2019 for 18 of the 22 investigated causes. Decomposition of global and regional life expectancy showed the positive effect that reductions in deaths from enteric infections, lower respiratory infections, stroke, and neonatal disorders, among others, have had on survival over the study period. However, a net reduction of 1·6 years occurred in global life expectancy between 2019 and 2021, primarily due to increased death rates from COVID-19 and other pandemic-related mortality. Life expectancy was highly variable between super-regions over the study period, with southeast Asia, east Asia, and Oceania gaining 8·3 years (6·7-9·9) overall while having the smallest reduction in life expectancy due to COVID-19 (0·4 years). The largest reduction in life expectancy due to COVID-19 occurred in Latin America and the Caribbean (3·6 years). Additionally, 53 of the 288 causes of death were highly concentrated in locations with less than 50% of the global population as of 2021, and these causes of death have become progressively more concentrated since 1990, when only 44 causes showed this pattern. The concentration phenomenon is discussed heuristically with respect to enteric and lower respiratory infections, malaria, HIV/AIDS, neonatal disorders, tuberculosis, and measles. INTERPRETATION Long-standing gains in life expectancy and reductions in many of the leading causes of death have been disrupted by the COVID-19 pandemic, the adverse effects of which were spread unevenly among populations. Despite the pandemic, there has been continued progress in combating several notable causes of death, leading to improved global life expectancy over the study period. Each of the seven GBD super-regions showed an overall improvement from 1990 to 2021, obscuring the setbacks of the pandemic years. Additionally, our findings regarding regional variation in the causes of death driving increases in life expectancy hold clear policy utility. Analyses of shifting mortality trends reveal that several causes, once widespread globally, are now increasingly concentrated geographically. These changes in mortality concentration, alongside further investigation of changing risks, interventions, and relevant policy, present an important opportunity to deepen our understanding of mortality-reduction strategies. Examining patterns in mortality concentration might reveal areas where successful public health interventions have been implemented. Translating these successes to locations where certain causes of death remain entrenched can inform policies that work to improve life expectancy for people everywhere. FUNDING Bill & Melinda Gates Foundation
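Two of the quantities defined in the GBD methods above are simple enough to state in code: YLLs, computed as deaths multiplied by the standard life expectancy at the age of death, and 95% uncertainty intervals, taken as the 2·5th and 97·5th percentiles of a 1000-draw distribution. The sketch below illustrates both; the standard life table and the Poisson-simulated death draws are invented placeholders, not GBD inputs.

```python
# Minimal sketch of the YLL and uncertainty-interval calculations
# described in the Methods. All numbers are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical standard life expectancy (years remaining) by age group.
standard_life_expectancy = {"0-4": 85.0, "5-49": 55.0, "50-69": 25.0, "70+": 10.0}

# Hypothetical 1000 draws of death counts per age group for one
# cause-sex-location-year, standing in for posterior draws.
mean_deaths = {"0-4": 120, "5-49": 300, "50-69": 900, "70+": 2500}
death_draws = {age: rng.poisson(lam=m, size=1000) for age, m in mean_deaths.items()}

# YLL draws: sum over age groups of deaths x standard life expectancy.
yll_draws = sum(death_draws[age] * ex for age, ex in standard_life_expectancy.items())

# Point estimate with a 95% UI from the 2.5th and 97.5th percentiles.
lo, hi = np.percentile(yll_draws, [2.5, 97.5])
print(f"YLLs: {yll_draws.mean():,.0f} (95% UI {lo:,.0f}-{hi:,.0f})")
```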
Extensive variation, but not local adaptation in an Australian alpine daisy.
Alpine plants often occupy diverse habitats within a similar elevation range, but most research on local adaptation in these plants has focused on elevation gradients. In testing for habitat-related local adaptation, local effects on seed quality and initial plant growth should be considered in designs that encompass multiple populations and habitats. We tested for local adaptation across alpine habitats in a morphologically variable daisy species, Brachyscome decipiens, in the Bogong High Plains in Victoria, Australia. We collected seed from different habitats, controlled for maternal effects through initial seed size estimates, and characterized seedling survival and growth in a field transplant experiment. We found little evidence for local adaptation for survival or plant size, based on three adaptation measures: Home versus Away, Local versus Foreign, and Sympatric versus Allopatric (SA). The SA measure controlled for planting site and population (site-of-origin) effects. There were significant differences due to site-of-origin and planting site effects. An important confounding factor was the size of plants directly after transplantation of seedlings, which had a large impact on subsequent seedling survival and growth. Initial differences in plant width and height influenced subsequent survival across the growing season but in opposing directions: wide plants had higher survival, but tall plants had lower survival. In an additional controlled garden experiment at Cranbourne Royal Botanic Gardens, site-of-origin effects detected in the field experiments disappeared under more benign, homogeneous conditions. Although B. decipiens from different source areas varied significantly when grown across a range of alpine habitats, these differences did not translate into a local or habitat-related fitness advantage. This lack of local advantage may signal weak past selection and/or weak adaptive transgenerational (plasticity) effects.
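The three adaptation measures named in this abstract (Home versus Away, Local versus Foreign, and Sympatric versus Allopatric) are contrasts over a populations-by-sites fitness matrix. The sketch below computes raw versions of all three from an invented matrix; note that the study's SA measure additionally controls for planting site and site-of-origin effects in a statistical model, which this simple raw-mean version omits.

```python
# Raw versions of three local-adaptation contrasts, computed from a
# hypothetical fitness matrix in which population i's home site is site i
# (so the diagonal holds the sympatric combinations).
import numpy as np

fitness = np.array([  # rows: populations; columns: planting sites
    [0.80, 0.55, 0.60],
    [0.50, 0.75, 0.65],
    [0.45, 0.50, 0.70],
])
n = fitness.shape[0]
off_diag = ~np.eye(n, dtype=bool)  # allopatric combinations

# Home vs Away: each population at its home site minus its mean fitness
# at away sites (positive values suggest local adaptation).
home_vs_away = np.diag(fitness) - fitness.mean(axis=1, where=off_diag)

# Local vs Foreign: the local population at each site minus the mean of
# foreign populations transplanted to that same site.
local_vs_foreign = np.diag(fitness) - fitness.mean(axis=0, where=off_diag)

# Sympatric vs Allopatric: mean of sympatric combinations minus mean of
# allopatric combinations, pooled over the whole design.
sa = np.diag(fitness).mean() - fitness[off_diag].mean()

print("Home vs Away:        ", home_vs_away)
print("Local vs Foreign:    ", local_vs_foreign)
print("Sympatric-Allopatric:", round(float(sa), 3))
```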
Data from: Testing the niche breadth-range size hypothesis: habitat specialization versus performance in Australian alpine daisies
Relatively common species within a clade are expected to perform well across a wider range of conditions than their rarer relatives, yet experimental tests of this "niche breadth-range size" hypothesis remain surprisingly scarce. Rarity may arise due to trade-offs between specialization and performance across a wide range of environments. Here we use common garden and reciprocal transplant experiments to test the niche breadth-range size hypothesis, focusing on four common and three rare endemic alpine daisies (Brachyscome spp.) from the Australian Alps. We used three experimental contexts: (1) alpine reciprocal seedling experiment, a test of seedling survival and growth in three alpine habitat types differing in environmental quality and species diversity; (2) warm environment common garden, a test of whether common daisy species have higher growth rates and phenotypic plasticity, assessed in a common garden in a warmer climate and run simultaneously with experiment 1; and (3) alpine reciprocal seed experiment, a test of seed germination capacity and viability in the same three alpine habitat types as in experiment 1. In the alpine reciprocal seedling experiment, survival of all species was highest in the open heathland habitat where overall plant diversity is high, suggesting a general, positive response to a relatively productive, low-stress environment. We found only partial support for higher survival of rare species in their habitats of origin. In the warm environment common garden, three common daisies exhibited greater growth and biomass than two rare species, but the other rare species performed as well as the common species. In the alpine reciprocal seed experiment, common daisies exhibited higher germination across most habitats, but rare species maintained a higher proportion of viable seed in all conditions, suggesting different life history strategies. These results indicate that some but not all rare, alpine endemics exhibit stress tolerance at the cost of reduced growth rates in low-stress environments compared to common species. Finally, these findings suggest the seed stage is important in the persistence of rare species, and they provide only weak support at the seedling stage for the niche breadth-range size hypothesis.
Testing the niche-breadth-range-size hypothesis: habitat specialization vs. performance in Australian alpine daisies.
Relatively common species within a clade are expected to perform well across a wider range of conditions than their rarer relatives, yet experimental tests of this "niche-breadth-range-size" hypothesis remain surprisingly scarce. Rarity may arise due to trade-offs between specialization and performance across a wide range of environments. Here we use common garden and reciprocal transplant experiments to test the niche-breadth-range-size hypothesis, focusing on four common and three rare endemic alpine daisies (Brachyscome spp.) from the Australian Alps. We used three experimental contexts: (1) alpine reciprocal seedling experiment, a test of seedling survival and growth in three alpine habitat types differing in environmental quality and species diversity; (2) warm environment common garden, a test of whether common daisy species have higher growth rates and phenotypic plasticity, assessed in a common garden in a warmer climate and run simultaneously with experiment 1; and (3) alpine reciprocal seed experiment, a test of seed germination capacity and viability in the same three alpine habitat types as in experiment 1. In the alpine reciprocal seedling experiment, survival of all species was highest in the open heathland habitat where overall plant diversity is high, suggesting a general, positive response to a relatively productive, low-stress environment. We found only partial support for higher survival of rare species in their habitats of origin. In the warm environment common garden, three common daisies exhibited greater growth and biomass than two rare species, but the other rare species performed as well as the common species. In the alpine reciprocal seed experiment, common daisies exhibited higher germination across most habitats, but rare species maintained a higher proportion of viable seed in all conditions, suggesting different life history strategies. These results indicate that some but not all rare, alpine endemics exhibit stress tolerance at the cost of reduced growth rates in low-stress environments compared to common species. Finally, these findings suggest the seed stage is important in the persistence of rare species, and they provide only weak support at the seedling stage for the niche-breadth-range-size hypothesis.
ECOEVOL_DATA_HIRST
survey data, and experimental data