    Understanding Tradeoffs Between Food and Predation Risks in a Specialist Mammalian Herbivore

    Understanding habitat use by animals requires understanding the simultaneous tradeoffs between food and predation risk within a landscape. Quantifying the synergy between patches that provide quality food and those that are safe from predators, at a scale relevant to a foraging animal, could better reveal the parameters that influence habitat selection. To understand more thoroughly how animals select habitat components, we investigated tradeoffs between diet quality and predation risk in a species endemic to sagebrush (Artemisia spp.) communities in North America, the pygmy rabbit (Brachylagus idahoensis). This species is a rare example of a specialist herbivore that relies almost entirely on sagebrush for food and cover. We hypothesized that pygmy rabbits would forage in areas with low food risk (free of plant secondary metabolites, PSMs) and low predation risk (high concealment). However, because pygmy rabbits are relatively tolerant of the PSMs in sagebrush, we hypothesized that they would accept the risk of PSM-containing food in exchange for lower predation risk when the two risks co-occurred. We compared food intake of pygmy rabbits during three double-choice trials designed to examine these tradeoffs by offering animals two levels of food risk (with or without 1,8-cineole, a PSM) and two levels of predation risk (high or low concealment cover). Rabbits ate more food at feeding stations with PSM-free food and high concealment cover. However, interactions between PSMs and cover suggested that the value of PSM-free food could be reduced if concealment is low, and that the value of high concealment could decrease if food contains PSMs. Furthermore, foraging decisions by individual rabbits suggested variation in tolerance of food or predation risks.

    αβ T cell receptors as predictors of health and disease

    The diversity of antigen receptors and the specificity it underlies are the hallmarks of the cellular arm of the adaptive immune system. T and B lymphocytes are indeed truly unique in their ability to generate receptors capable of recognizing virtually any pathogen. It has been known for several decades that T lymphocytes recognize short peptides derived from degraded proteins presented by major histocompatibility complex (MHC) molecules at the cell surface. Interaction between peptide-MHC (pMHC) and the T cell receptor (TCR) is central to both thymic selection and peripheral antigen recognition. It is widely assumed that TCR diversity is required, or at least highly desirable, to provide sufficient immune coverage. However, a number of immune responses are associated with the selection of predictable, narrow, or skewed repertoires and public TCR chains. Here, we summarize the current knowledge on the formation of the TCR repertoire and its maintenance in health and disease. We also outline the various molecular mechanisms that govern the composition of the pre-selection, naive and antigen-specific TCR repertoires. Finally, we suggest that with the development of high-throughput sequencing, common TCR 'signatures' raised against specific antigens could provide important diagnostic biomarkers and surrogate predictors of disease onset, progression and outcome.

    Non-Standard Errors

    In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher-rated research. Adding peer-review stages reduces NSEs. We further find that this type of uncertainty is underestimated by participants.

    Risk of COVID-19 after natural infection or vaccination

    Background: While vaccines have established utility against COVID-19, phase 3 efficacy studies have generally not comprehensively evaluated the protection provided by previous infection or hybrid immunity (previous infection plus vaccination). Individual patient data from US government-supported harmonized vaccine trials provide an unprecedented sample population to address this issue. We characterized the protective efficacy of previous SARS-CoV-2 infection and hybrid immunity against COVID-19 early in the pandemic over three- to six-month follow-up and compared it with vaccine-associated protection. Methods: In this post-hoc cross-protocol analysis of the Moderna, AstraZeneca, Janssen, and Novavax COVID-19 vaccine clinical trials, we allocated participants into four groups based on previous-infection status at enrolment and treatment: no previous infection/placebo; previous infection/placebo; no previous infection/vaccine; and previous infection/vaccine. The main outcome was RT-PCR-confirmed COVID-19 >7–15 days (per original protocols) after the final study injection. We calculated crude and adjusted efficacy measures. Findings: Previous infection/placebo participants had a 92% decreased risk of future COVID-19 compared to no previous infection/placebo participants (overall hazard ratio [HR]: 0.08; 95% CI: 0.05–0.13). Among single-dose Janssen participants, hybrid immunity conferred greater protection than vaccine alone (HR: 0.03; 95% CI: 0.01–0.10). Too few infections were observed to draw statistical inferences comparing hybrid immunity to vaccine alone for the other trials. Vaccination, previous infection, and hybrid immunity all provided near-complete protection against severe disease. Interpretation: Previous infection, any hybrid immunity, and two-dose vaccination all provided substantial protection against symptomatic and severe COVID-19 through the early Delta period. Thus, as a surrogate for natural infection, vaccination remains the safest approach to protection. Funding: National Institutes of Health.
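
    As a rough illustration of the arithmetic behind these figures, the short Python sketch below shows how a crude hazard ratio is formed from incidence rates and how an HR of 0.08 maps onto the reported 92% reduction in risk. The event counts are hypothetical placeholders chosen only for the example, not data from the trials, and the paper's adjusted efficacy analyses are more involved than this.

    # Minimal sketch relating a crude hazard ratio to a risk reduction.
    # The counts below are hypothetical placeholders, not trial data.

    def crude_hazard_ratio(events_exposed, time_exposed, events_ref, time_ref):
        """Crude HR approximated as a ratio of incidence rates
        (events per unit of person-time) in two groups."""
        rate_exposed = events_exposed / time_exposed
        rate_ref = events_ref / time_ref
        return rate_exposed / rate_ref

    # With these made-up counts, HR = 0.08, i.e. a 1 - 0.08 = 92% lower risk.
    hr = crude_hazard_ratio(events_exposed=8, time_exposed=1000.0,
                            events_ref=100, time_ref=1000.0)
    protection = 1.0 - hr
    print(f"crude HR = {hr:.2f}, estimated protection = {protection:.0%}")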

    Non-Standard Errors

    In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in sample estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors. To study them, we let 164 teams test six hypotheses on the same sample. We find that non-standard errors are sizeable, on par with standard errors. Their size (i) co-varies only weakly with team merits, reproducibility, or peer rating, (ii) declines significantly after peer feedback, and (iii) is underestimated by participants.
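
    To make the DGP/EGP distinction concrete, the Python sketch below simulates it under assumptions of our own choosing (one shared sample and 164 "teams" that differ only in two simple analytic choices); it illustrates the concept rather than reproducing the study's design. The standard error captures sampling noise in the shared data, while the non-standard error is the spread of point estimates across teams analyzing that same data.

    # Toy simulation contrasting standard and non-standard errors.
    # The setup is an illustrative assumption, not the study's actual design.
    import random
    import statistics

    random.seed(1)
    TRUE_EFFECT = 0.5
    N_OBS = 200      # size of the one shared sample
    N_TEAMS = 164    # number of research teams, as in the study

    # One sample drawn from the data-generating process (DGP).
    sample = [TRUE_EFFECT + random.gauss(0.0, 1.0) for _ in range(N_OBS)]

    # Standard error: sampling uncertainty of the mean within a single analysis.
    standard_error = statistics.stdev(sample) / len(sample) ** 0.5

    # Evidence-generating process (EGP): each team analyses the same sample but
    # makes its own analytic choices, here tail-trimming and choice of estimator.
    estimates = []
    for _ in range(N_TEAMS):
        data = sorted(sample)
        if random.random() < 0.5:          # choice 1: trim the extreme 5% tails?
            k = int(0.05 * len(data))
            data = data[k:len(data) - k]
        if random.random() < 0.5:          # choice 2: mean or median?
            estimates.append(statistics.mean(data))
        else:
            estimates.append(statistics.median(data))

    # Non-standard error: dispersion of point estimates across teams.
    non_standard_error = statistics.stdev(estimates)

    print(f"standard error     = {standard_error:.3f}")
    print(f"non-standard error = {non_standard_error:.3f}")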
