6 research outputs found

    Risk of COVID-19 after natural infection or vaccination

    Summary: Background: While vaccines have established utility against COVID-19, phase 3 efficacy studies have generally not comprehensively evaluated protection provided by previous infection or hybrid immunity (previous infection plus vaccination). Individual patient data from US government-supported harmonized vaccine trials provide an unprecedented sample population to address this issue. We characterized the protective efficacy of previous SARS-CoV-2 infection and hybrid immunity against COVID-19 early in the pandemic over three- to six-month follow-up and compared it with vaccine-associated protection. Methods: In this post-hoc cross-protocol analysis of the Moderna, AstraZeneca, Janssen, and Novavax COVID-19 vaccine clinical trials, we allocated participants into four groups based on previous-infection status at enrolment and treatment: no previous infection/placebo; previous infection/placebo; no previous infection/vaccine; and previous infection/vaccine. The main outcome was RT-PCR-confirmed COVID-19 >7–15 days (per original protocols) after the final study injection. We calculated crude and adjusted efficacy measures. Findings: Previous infection/placebo participants had a 92% decreased risk of future COVID-19 compared with no previous infection/placebo participants (overall hazard ratio [HR]: 0.08; 95% CI: 0.05–0.13). Among single-dose Janssen participants, hybrid immunity conferred greater protection than vaccine alone (HR: 0.03; 95% CI: 0.01–0.10). Too few infections were observed to draw statistical inferences comparing hybrid immunity with vaccine alone for the other trials. Vaccination, previous infection, and hybrid immunity all provided near-complete protection against severe disease. Interpretation: Previous infection, any hybrid immunity, and two-dose vaccination all provided substantial protection against symptomatic and severe COVID-19 through the early Delta period. Thus, vaccination, rather than natural infection, remains the safest approach to protection. Funding: National Institutes of Health.
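    A brief reading note (not part of the original abstract): the headline 92% risk reduction is consistent with the standard convention of reporting protection as one minus the hazard ratio; assuming that convention was used, the arithmetic is:

    \[
    \text{risk reduction} = 1 - \mathrm{HR} = 1 - 0.08 = 0.92 \approx 92\%,
    \qquad 1 - 0.13 = 0.87 \quad\text{and}\quad 1 - 0.05 = 0.95 \ \text{for the 95\% CI bounds.}
    \]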

    Guidelines for the use and interpretation of assays for monitoring autophagy (4th edition)

    In 2008, we published the first set of guidelines for standardizing research in autophagy. Since then, this topic has received increasing attention, and many scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Thus, it is important to update the guidelines for monitoring autophagy in different organisms on a regular basis. Despite numerous reviews, there continues to be confusion regarding acceptable methods to evaluate autophagy, especially in multicellular eukaryotes. Here, we present a set of guidelines for investigators to select and interpret methods to examine autophagy and related processes, and for reviewers to provide realistic and reasonable critiques of reports that are focused on these processes. These guidelines are not meant to be a dogmatic set of rules, because the appropriateness of any assay largely depends on the question being asked and the system being used. Moreover, no individual assay is perfect for every situation, calling for the use of multiple techniques to properly monitor autophagy in each experimental setting. Finally, several core components of the autophagy machinery have been implicated in distinct autophagic processes (canonical and noncanonical autophagy), implying that genetic approaches to block autophagy should rely on targeting two or more autophagy-related genes that ideally participate in distinct steps of the pathway. Along similar lines, because multiple proteins involved in autophagy also regulate other cellular pathways, including apoptosis, not all of them can be used as specific markers for bona fide autophagic responses. Here, we critically discuss current methods of assessing autophagy and the information they can, or cannot, provide. Our ultimate goal is to encourage intellectual and technical innovation in the field.

    Diminishing benefits of urban living for children and adolescents’ growth and development

    Optimal growth and development in childhood and adolescence is crucial for lifelong health and well-being [1–6]. Here we used data from 2,325 population-based studies, with measurements of height and weight from 71 million participants, to report the height and body-mass index (BMI) of children and adolescents aged 5–19 years on the basis of rural and urban place of residence in 200 countries and territories from 1990 to 2020. In 1990, children and adolescents residing in cities were taller than their rural counterparts in all but a few high-income countries. By 2020, the urban height advantage had become smaller in most countries, and in many high-income western countries it had reversed into a small urban-based disadvantage. The exception was for boys in most countries in sub-Saharan Africa and in some countries in Oceania, south Asia and the region of central Asia, Middle East and north Africa. In these countries, successive cohorts of boys from rural places either did not gain height or possibly became shorter, and hence fell further behind their urban peers. The difference between the age-standardized mean BMI of children in urban and rural areas was <1.1 kg m⁻² in the vast majority of countries. Within this small range, BMI increased slightly more in cities than in rural areas, except in south Asia, sub-Saharan Africa and some countries in central and eastern Europe. Our results show that in much of the world, the growth and developmental advantages of living in cities have diminished in the twenty-first century, whereas in much of sub-Saharan Africa they have amplified.
