
    Resource quality determines the evolution of resistance and its genetic basis

    This is the final version, available on open access from Wiley via the DOI in this record.
    Data Availability: All the experimental data supporting the findings of this study, including all virus assay and development data, are available at Dryad: https://doi.org/10.5061/dryad.k98sf7m4g. The complete sequencing data in CRAM format are available from the European Bioinformatics Institute (EBI) under accession number PRJEB27964.
    Parasites impose strong selection on their hosts, but the level of any evolved resistance may be constrained by the availability of resources. However, studies identifying the genomic basis of such resource‐mediated selection are rare, particularly in non‐model organisms. Here, we investigated the role of nutrition in the evolution of resistance to a DNA virus (PiGV), and any associated trade‐offs, in a lepidopteran pest species (Plodia interpunctella). Through selection experiments and whole genome re‐sequencing, we identify genetic markers of resistance that vary between the nutritional environments during selection. We do not find consistent evolution of resistance in the presence of virus but rather see substantial variation among replicate populations. Resistance in a low nutrition environment is negatively correlated with growth rate, consistent with an established trade‐off between immunity and development, but this relationship is highly context dependent. Whole genome resequencing of the host shows that resistance mechanisms are likely to be highly polygenic, and although the underlying genetic architecture may differ between high and low nutrition environments, similar mechanisms are commonly used. As a whole, our results emphasise the importance of the resource environment in influencing the evolution of resistance.
    Funding: Natural Environment Research Council (NERC); National Institutes of Health (NIH).

    A Multisite Preregistered Paradigmatic Test of the Ego-Depletion Effect

    We conducted a preregistered multilaboratory project (k = 36; N = 3,531) to assess the size and robustness of ego-depletion effects using a novel replication method, termed the paradigmatic replication approach. Each laboratory implemented one of two procedures intended to manipulate self-control and tested performance on a subsequent measure of self-control. Confirmatory tests found a nonsignificant result (d = 0.06). Confirmatory Bayesian meta-analyses using an informed-prior hypothesis (ÎŽ = 0.30, SD = 0.15) found that the data were 4 times more likely under the null than the alternative hypothesis. Hence, preregistered analyses did not find evidence for a depletion effect. Exploratory analyses on the full sample (i.e., ignoring exclusion criteria) found a statistically significant effect (d = 0.08); Bayesian analyses showed that the data were about equally likely under the null and informed-prior hypotheses. Exploratory moderator tests suggested that the depletion effect was larger for participants who reported more fatigue but was not moderated by trait self-control, willpower beliefs, or action orientation.
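    The Bayes-factor logic here has a simple structure: the likelihood of the observed effect under a point null is compared with its marginal likelihood under the informed prior ÎŽ ~ N(0.30, 0.15). The Python sketch below is illustrative only, not the project's analysis code; the pooled two-group standard error and a two-sided normal prior are simplifying assumptions, and under them the ratio lands near the reported factor of roughly 4 in favor of the null.

        # Illustrative only: approximate Bayes factor (point null vs informed prior)
        # using a normal approximation for the observed standardized effect.
        from scipy.stats import norm

        d_obs = 0.06               # observed effect size (from the abstract)
        n_total = 3531             # total N (from the abstract)
        se = (4 / n_total) ** 0.5  # approx. SE of d for two equal groups (assumption)

        # Likelihood of the data under H0: delta = 0
        like_h0 = norm.pdf(d_obs, loc=0.0, scale=se)

        # Marginal likelihood under H1: delta ~ N(0.30, 0.15); a normal prior
        # integrates analytically with a normal likelihood.
        prior_mean, prior_sd = 0.30, 0.15
        like_h1 = norm.pdf(d_obs, loc=prior_mean, scale=(se**2 + prior_sd**2) ** 0.5)

        print(f"BF01 = {like_h0 / like_h1:.1f}")  # ~3-4x in favor of the null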

    A multi-country test of brief reappraisal interventions on emotions during the COVID-19 pandemic.

    The COVID-19 pandemic has increased negative emotions and decreased positive emotions globally. Left unchecked, these emotional changes might have a wide array of adverse impacts. To reduce negative emotions and increase positive emotions, we tested the effectiveness of reappraisal, an emotion-regulation strategy that modifies how one thinks about a situation. Participants from 87 countries and regions (n = 21,644) were randomly assigned to one of two brief reappraisal interventions (reconstrual or repurposing) or one of two control conditions (active or passive). Results revealed that both reappraisal interventions (versus both control conditions) consistently reduced negative emotions and increased positive emotions across different measures. Reconstrual and repurposing interventions had similar effects. Importantly, planned exploratory analyses indicated that reappraisal interventions did not reduce intentions to practice preventive health behaviours. The findings demonstrate the viability of creating scalable, low-cost interventions for use around the world.

    Effect of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker initiation on organ support-free days in patients hospitalized with COVID-19

    IMPORTANCE Overactivation of the renin-angiotensin system (RAS) may contribute to poor clinical outcomes in patients with COVID-19. OBJECTIVE To determine whether angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) initiation improves outcomes in patients hospitalized for COVID-19. DESIGN, SETTING, AND PARTICIPANTS In an ongoing, adaptive platform randomized clinical trial, 721 critically ill and 58 non–critically ill hospitalized adults were randomized to receive an RAS inhibitor or control between March 16, 2021, and February 25, 2022, at 69 sites in 7 countries (final follow-up on June 1, 2022). INTERVENTIONS Patients were randomized to receive open-label initiation of an ACE inhibitor (n = 257), ARB (n = 248), ARB in combination with DMX-200 (a chemokine receptor-2 inhibitor; n = 10), or no RAS inhibitor (control; n = 264) for up to 10 days. MAIN OUTCOMES AND MEASURES The primary outcome was organ support–free days, a composite of hospital survival and days alive without cardiovascular or respiratory organ support through 21 days. The primary analysis was a bayesian cumulative logistic model. Odds ratios (ORs) greater than 1 represent improved outcomes. RESULTS On February 25, 2022, enrollment was discontinued due to safety concerns. Among 679 critically ill patients with available primary outcome data, the median age was 56 years and 239 participants (35.2%) were women. Median (IQR) organ support–free days among critically ill patients was 10 (–1 to 16) in the ACE inhibitor group (n = 231), 8 (–1 to 17) in the ARB group (n = 217), and 12 (0 to 17) in the control group (n = 231) (median adjusted odds ratios of 0.77 [95% bayesian credible interval, 0.58-1.06] for improvement for ACE inhibitor and 0.76 [95% credible interval, 0.56-1.05] for ARB compared with control). The posterior probabilities that ACE inhibitors and ARBs worsened organ support–free days compared with control were 94.9% and 95.4%, respectively. Hospital survival occurred in 166 of 231 critically ill participants (71.9%) in the ACE inhibitor group, 152 of 217 (70.0%) in the ARB group, and 182 of 231 (78.8%) in the control group (posterior probabilities that ACE inhibitor and ARB worsened hospital survival compared with control were 95.3% and 98.1%, respectively). CONCLUSIONS AND RELEVANCE In this trial, among critically ill adults with COVID-19, initiation of an ACE inhibitor or ARB did not improve, and likely worsened, clinical outcomes. TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT0273570
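    The reported posterior probabilities of harm follow directly from the posterior on the odds ratio. As a rough illustration (not the trial's bayesian cumulative logistic model), a normal approximation on the log odds ratio can be pinned down from the reported median and 95% credible interval, after which the probability that the OR falls below 1 is a single CDF evaluation; this sketch reproduces the reported probabilities to within about a percentage point.

        # Illustrative only: P(OR < 1) from a normal approximation on log(OR),
        # with mean and SD recovered from the reported median and 95% CrI.
        import math
        from scipy.stats import norm

        def prob_harm(or_median, ci_low, ci_high):
            mu = math.log(or_median)
            sd = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
            return norm.cdf(0.0, loc=mu, scale=sd)  # P(log OR < 0)

        print(f"ACE inhibitor: {prob_harm(0.77, 0.58, 1.06):.1%}")  # abstract reports 94.9%
        print(f"ARB:           {prob_harm(0.76, 0.56, 1.05):.1%}")  # abstract reports 95.4%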

    Incidence and predictors of complications and mortality in cerebrovascular surgery: National trends from 2007 to 2012

    BACKGROUND: Cerebrovascular surgery offers potentially lifesaving treatments for intracranial vascular pathology yet bears substantial risks in the form of perioperative complications and mortality. OBJECTIVE: To better characterize the risks associated with cerebrovascular surgery by broadly investigating the incidence of complications, patient-level predictors of complications, and mortality using the National Surgical Quality Improvement Program database, a prospective, audited, national data set. METHODS: All cerebrovascular cases were extracted from the National Surgical Quality Improvement Program with the use of Current Procedural Terminology codes. Complication and mortality rates were analyzed with univariate and multivariate statistical analyses. RESULTS: A total of 1141 cases were analyzed. The rate of complications was nearly twice that of previous estimates: Almost one-third of patients (30.9%) experienced at least 1 complication, which was significantly associated with 30-day mortality (odds ratio, 7.76; 95% confidence interval, 4.27-14.10; P < .001). Emergency surgery was associated with higher mortality rates (15.1%) than nonemergency procedures (2.3%). Significant predictors of complications included preoperative ventilator dependence, emergency surgery, bleeding disorders, diabetes mellitus, and alcohol abuse. Significant predictors of mortality included postoperative coma <24 hours, preoperative or postoperative ventilator dependence, black or Asian race, and stroke. The most common complications were ventilator dependence (64.5% in patients ventilated preoperatively, 8.4% in patients not ventilated preoperatively), bleeding requiring transfusion (10.2%), reoperation within 30 days (9.6%), pneumonia (7.3%), and stroke (7.3%). CONCLUSION: Cerebrovascular surgery is associated with significant risks of morbidity and mortality. Mitigation of these risks requires broader, patient-centered understanding of risk factors and complications specific to cerebrovascular surgery, as presented in this article. These findings pave the way for improving patient safety and outcomes in cerebrovascular surgery.
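    As a quick illustration of the univariate association reported above (complications vs 30-day mortality), an odds ratio and its Wald 95% confidence interval come from a 2x2 table. The cell counts below are hypothetical placeholders, chosen only to be roughly consistent with the reported cohort size and complication rate, since the abstract does not give the raw counts; only the Wald construction itself is standard.

        # Illustrative only: odds ratio with a Wald 95% CI from a 2x2 table.
        import math

        # rows: complication yes/no; columns: died/survived (hypothetical counts)
        a, b = 43, 310   # complication:     died, survived
        c, d = 14, 774   # no complication:  died, survived

        or_point = (a * d) / (b * c)
        se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
        lo = math.exp(math.log(or_point) - 1.96 * se_log)
        hi = math.exp(math.log(or_point) + 1.96 * se_log)
        print(f"OR = {or_point:.2f}, 95% CI {lo:.2f}-{hi:.2f}")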

    Prospective, multidisciplinary recording of perioperative errors in cerebrovascular surgery: Is error in the eye of the beholder?

    OBJECTIVE: Surgery requires careful coordination of multiple team members, each playing a vital role in mitigating errors. Previous studies have focused on eliciting errors from only the attending surgeon, likely missing events observed by other team members. METHODS: Surveys were administered to the attending surgeon, resident surgeon, anesthesiologist, and nursing staff immediately following each of 31 cerebrovascular surgeries; participants were instructed to record any deviation from optimal course (DOC). DOCs were categorized and sorted by reporter and perioperative timing, then correlated with delays and outcome measures. RESULTS: Errors were recorded in 93.5% of the 31 cases surveyed. The number of errors recorded per case ranged from 0 to 8, with an average of 3.1 ± 2.1 errors (± SD). Overall, technical errors were most common (24.5%), followed by communication (22.4%), management/judgment (16.0%), and equipment (11.7%) errors. The resident surgeon reported the most errors (52.1%), followed by the circulating nurse (31.9%), the attending surgeon (26.6%), and the anesthesiologist (14.9%). The attending and resident surgeons were most likely to report technical errors (52% and 30.6%, respectively), while anesthesiologists and circulating nurses mostly reported anesthesia errors (36%) and communication errors (50%), respectively. The overlap in reported errors was 20.3%. If this study had used only the surveys completed by the attending surgeon, as in prior studies, 72% of equipment errors, 90% of anesthesia and communication errors, and 100% of nursing errors would have been missed. In addition, it would have been concluded that errors occurred in only 45.2% of cases (rather than 93.5%) and that errors resulting in a delay occurred in 3.2% of cases instead of the 74.2% calculated using data from 4 team members. Compiled results from all team members yielded significant correlations between technical DOCs and prolonged hospital stays, and between reported and actual delays (p = 0.001 and p = 0.028, respectively). CONCLUSIONS: This study is the only one of its kind to elicit error reporting from multiple members of the operating team, and it demonstrates that error is truly in the eye of the beholder: the types and timing of perioperative errors vary based on whom you ask. The authors estimate that previous studies surveying only the attending physician missed up to 75% of perioperative errors. By finding significant correlations between technical DOCs and prolonged hospital stays, and between reported and actual delays, this study shows that these surveys provide relevant and useful information for improving clinical practice. Overall, the results of this study emphasize that research on medical error must include input from all members of the operating team; it is only by understanding every perspective that surgical staff can begin to efficiently prevent errors, improve patient care and safety, and decrease delays.
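    The attending-only undercount described above is a union-versus-single-reporter calculation: a case counts as having an error if any team member reported one. The toy records in the sketch below are hypothetical (the study's raw survey data are not given in the abstract); only the aggregation logic is illustrated.

        # Illustrative only: per-case error sets aggregated across reporters,
        # compared with what the attending surgeon alone would have captured.
        cases = {
            1: {"attending": {"technical"}, "resident": {"technical", "equipment"},
                "anesthesiologist": set(), "nurse": {"communication"}},
            2: {"attending": set(), "resident": {"communication"},
                "anesthesiologist": {"anesthesia"}, "nurse": set()},
            3: {"attending": set(), "resident": set(),
                "anesthesiologist": set(), "nurse": set()},
        }

        union_reports = {k: set().union(*v.values()) for k, v in cases.items()}
        n_all = sum(bool(errs) for errs in union_reports.values())
        n_attending = sum(bool(v["attending"]) for v in cases.values())
        print(f"error in {n_all}/{len(cases)} cases (all reporters), "
              f"{n_attending}/{len(cases)} (attending only)")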

    Maximal killing of lymphoma cells by DNA damage–inducing therapy requires not only the p53 targets Puma and Noxa, but also Bim

    DNA-damaging chemotherapy is the backbone of cancer treatment, although it is not clear how such treatments kill tumor cells. In nontransformed lymphoid cells, the combined loss of 2 proapoptotic p53 target genes, Puma and Noxa, induces as much resistance to DNA damage as loss of p53 itself. In EÎŒ-Myc lymphomas, however, lack of both Puma and Noxa resulted in no greater drug resistance than lack of Puma alone. A third Bcl-2 homology 3 (BH3)-only gene, Bim, although not a direct p53 target, was up-regulated in EÎŒ-Myc lymphomas incurring DNA damage, and knockdown of Bim levels markedly increased the drug resistance of EÎŒ-Myc/Puma−/−Noxa−/− lymphomas both in vitro and in vivo. Remarkably, c-MYC–driven lymphoma cell lines from Noxa−/−Puma−/−Bim−/− mice were as resistant as those lacking p53. Thus, the combinatorial action of Puma, Noxa, and Bim is critical for optimal apoptotic responses of lymphoma cells to 2 commonly used DNA-damaging chemotherapeutic agents, identifying Bim as an additional biomarker for treatment outcome in the clinic.

    Telemedicine retinopathy of prematurity severity score (TeleROP-SS) versus modified activity score (mROP-ActS) retrospective comparison in SUNDROP cohort

    Identifying and planning treatment for retinopathy of prematurity (ROP) using telemedicine is becoming increasingly ubiquitous, necessitating a grading system to help caretakers of at-risk infants gauge disease severity. The modified ROP Activity Scale (mROP-ActS) factors zone, stage, and plus disease into its scoring system, addressing the need to assess ROP’s totality of binocular burden via indirect ophthalmoscopy. However, there is an unmet need for an alternative score that could facilitate ROP identification and gauge disease improvement or deterioration specifically on photographic telemedicine exams. Here, we propose such a system, the Telemedicine ROP Severity Score (TeleROP-SS), and compare it against the mROP-ActS. In our statistical analysis of 1568 exams, TeleROP-SS returned a score in all instances based on the gradings available from the retrospective SUNDROP cohort, whereas mROP-ActS returned a score in 80.8% of right eyes and 81.1% of left eyes. For treatment-warranted ROP (TW-ROP), TeleROP-SS returned a score in 100% of right eyes and 95% of left eyes, while mROP-ActS did so in 70% and 63%, respectively. The TeleROP-SS score can identify disease improvement or deterioration on telemedicine exams, distinguish timepoints at which treatment can be given, and can be adapted as needed.