8 research outputs found

    A Systematic Molecular Pathology Study of a Laboratory Confirmed H5N1 Human Case

    Get PDF
    Autopsy studies have shown that highly pathogenic avian influenza virus (H5N1) can infect multiple human organs beyond the lungs, and that organ damage may be caused by viral replication, by dysregulation of cytokines and chemokines, or by both. Uncertainty remains, partly because of the limited number of cases analysed. In this study, a full autopsy covering five organ systems was conducted on a laboratory-confirmed fatal human H5N1 case (male, 42 years old) within 18 hours of death. In addition to the respiratory system (lungs, bronchus and trachea), virus was isolated from the cerebral cortex, cerebral medullary substance, cerebellum, brain stem, hippocampus, ileum, colon, rectum, ureter, aortopulmonary vessel and lymph node. Real-time RT-PCR showed that the matrix and hemagglutinin genes were positive in the liver and spleen, in addition to the tissues positive by virus isolation. Immunohistochemistry and in-situ hybridization staining gave evidence of viral infection concordant with real-time RT-PCR in all tissues except the bronchus. Quantitative RT-PCR suggested that high viral load was associated with increased host responses, though viral load differed significantly across organs. Cells of the immune system could also be a target for virus infection. Overall, the pathogenesis of HPAI H5N1 virus was associated both with virus replication and with immunopathologic lesions. In addition, immune cells cannot be excluded from playing a role in dissemination of the virus in vivo.
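    Quantitative real-time RT-PCR of the kind described above commonly reports relative levels via the 2^-ΔΔCt method; a minimal illustrative sketch with hypothetical Ct values (not the study's measurements):

```python
# Toy sketch of relative quantification by the 2^-(delta-delta-Ct) method,
# one common way real-time RT-PCR signals are compared between tissues.
# All Ct values below are hypothetical, not data from the study.
def fold_change(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change of a target gene relative to a calibrator sample,
    each normalized to a host reference gene."""
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2 ** (-ddct)

# Example: viral matrix-gene signal in one tissue vs a calibrator tissue.
print(fold_change(ct_target=22.0, ct_ref=18.0,
                  ct_target_cal=28.0, ct_ref_cal=18.0))  # 64-fold higher
```

    Lower Ct means more template, so a target that amplifies six cycles earlier than the calibrator (after reference-gene normalization) corresponds to a 2^6 = 64-fold difference.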

    Machine Learning Reveals the Critical Interactions for SARS-CoV-2 Spike Protein Binding to ACE2

    No full text
    SARS-CoV and SARS-CoV-2 bind to the human ACE2 receptor in practically identical conformations, although several residues of the receptor-binding domain (RBD) differ between them. Herein, we have used molecular dynamics (MD) simulations, machine learning (ML), and free-energy perturbation (FEP) calculations to elucidate the differences in binding by the two viruses. Although only subtle differences were observed from the initial MD simulations of the two RBD–ACE2 complexes, ML identified the individual residues with the most distinctive ACE2 interactions, many of which have been highlighted in previous experimental studies. FEP calculations quantified the corresponding differences in binding free energies to ACE2, and examination of MD trajectories provided structural explanations for these differences. Lastly, the energetics of emerging SARS-CoV-2 mutations were studied, showing that the affinity of the RBD for ACE2 is increased by N501Y and E484K mutations but is slightly decreased by K417N.
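    The ML step described above ranks interaction features by how well they distinguish the two complexes; a toy sketch of that idea, using a standardized mean difference over MD-frame contact distances as a simple stand-in for model-derived feature importance (synthetic data and example residue labels, not the study's pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame minimum distances (angstroms) between three RBD
# residues and ACE2, sampled from MD trajectories of the two complexes.
# The third residue is planted with a genuinely different interaction.
residues = ["L455", "F486", "Q493"]  # illustrative residue labels
cov1 = rng.normal(loc=[4.0, 3.5, 5.0], scale=0.3, size=(200, 3))  # SARS-CoV
cov2 = rng.normal(loc=[4.0, 3.5, 3.6], scale=0.3, size=(200, 3))  # SARS-CoV-2

# Rank residues by the standardized difference in mean distance between the
# two ensembles -- a crude proxy for ML feature importance on contact features.
pooled_sd = np.sqrt((cov1.var(axis=0) + cov2.var(axis=0)) / 2)
effect = np.abs(cov1.mean(axis=0) - cov2.mean(axis=0)) / pooled_sd
ranking = [residues[i] for i in np.argsort(effect)[::-1]]
print(ranking[0])  # the planted distinctive residue ranks first
```

    A real analysis would train a classifier on many interaction features per frame and read importances off the fitted model, but the ranking principle is the same.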

    The enhanced x-ray timing and polarimetry mission – eXTP: an update on its scientific cases, mission profile and development status

    No full text
    The enhanced x-ray timing and polarimetry mission (eXTP) is a flagship observatory for x-ray timing, spectroscopy and polarimetry developed by an international consortium. Thanks to its very large collecting area, good spectral resolution and unprecedented polarimetry capabilities, eXTP will explore the properties of matter and the propagation of light in the most extreme conditions found in the universe. eXTP will, in addition, be a powerful x-ray observatory. The mission will continuously monitor the x-ray sky and will enable multi-wavelength and multi-messenger studies. The mission is currently in phase B, due to be completed in mid-2022.

    Comparison of diagnoses of early-onset sepsis associated with use of Sepsis Risk Calculator versus NICE CG149: a prospective, population-wide cohort study in London, UK, 2020–2021

    No full text
    Objective: We sought to compare the incidence of early-onset sepsis (EOS) in infants ≥34 weeks’ gestation identified >24 hours after birth in hospitals using the Kaiser Permanente Sepsis Risk Calculator (SRC) with hospitals using National Institute for Health and Care Excellence (NICE) guidance.
    Design and setting: Prospective observational population-wide cohort study involving all 26 hospitals with neonatal units colocated with maternity services across London (10 using SRC, 16 using NICE).
    Participants: All live births ≥34 weeks’ gestation between September 2020 and August 2021.
    Outcome measures: EOS was defined as isolation of a bacterial pathogen from blood or cerebrospinal fluid (CSF) culture from birth to 7 days of age. We evaluated the incidence of EOS identified by culture obtained >24 hours to 7 days after birth. We also evaluated the rate at which empiric antibiotics were commenced >24 hours to 7 days after birth, for a duration of ≥5 days, with negative blood or CSF cultures.
    Results: Of 99 683 live births, 42 952 (43%) were born in SRC hospitals and 56 731 (57%) in NICE hospitals. The overall incidence of EOS (<72 hours) was 0.64/1000 live births. The incidence of EOS identified >24 hours was 2.3/100 000 (n=1) for SRC vs 7.1/100 000 (n=4) for NICE (OR 0.5, 95% CI 0.1 to 2.7), corresponding to 5% (1/20) of EOS cases for SRC vs 8.9% (4/45) for NICE (χ²=0.3, p=0.59). Empiric antibiotics were commenced >24 hours to 7 days after birth in 4.4/1000 (n=187) for SRC vs 2.9/1000 (n=158) for NICE (OR 1.5, 95% CI 1.2 to 1.9). 3111 (7%) infants received antibiotics in the first 24 hours in SRC hospitals vs 8428 (15%) in NICE hospitals.
    Conclusion: There was no significant difference in the incidence of EOS identified >24 hours after birth between SRC and NICE hospitals. SRC use was associated with 50% fewer infants receiving antibiotics in the first 24 hours of life.
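    The antibiotic-use comparison can be sanity-checked from the counts reported above; a sketch of an unadjusted odds ratio with a Wald 95% CI (the published OR of 1.5 (1.2 to 1.9) may differ slightly from this raw calculation, depending on the authors' exact method):

```python
import math

# Counts from the abstract: infants commenced on empiric antibiotics
# >24 h to 7 days after birth, among all live births in each hospital group.
src_events, src_total = 187, 42952
nice_events, nice_total = 158, 56731

def odds_ratio_ci(a, n1, b, n2, z=1.96):
    """Unadjusted odds ratio with a Wald 95% CI computed on the log scale."""
    or_ = (a / (n1 - a)) / (b / (n2 - b))
    se = math.sqrt(1 / a + 1 / (n1 - a) + 1 / b + 1 / (n2 - b))
    return (or_,
            math.exp(math.log(or_) - z * se),
            math.exp(math.log(or_) + z * se))

or_, lo, hi = odds_ratio_ci(src_events, src_total, nice_events, nice_total)
print(f"OR {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")  # ~1.57 (1.27 to 1.94)
```

    The confidence interval excludes 1, matching the abstract's finding that late empiric antibiotic starts were significantly more common in SRC hospitals.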

    Beyond the imitation game: Quantifying and extrapolating the capabilities of language models

    No full text
    Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting

    Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

    Get PDF
    Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
    Comment: 27 pages, 17 figures + references and appendices; repo: https://github.com/google/BIG-bench
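    The calibration reported alongside accuracy above is commonly summarized by expected calibration error (ECE); a minimal illustrative sketch (BIG-bench's own metrics may differ):

```python
import numpy as np

def ece(confidences, correct, n_bins=10):
    """Expected calibration error: bin predictions by confidence, then take
    the bin-size-weighted average gap between accuracy and mean confidence."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.minimum((confidences * n_bins).astype(int), n_bins - 1)
    total = len(confidences)
    err = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            err += (mask.sum() / total) * abs(
                correct[mask].mean() - confidences[mask].mean())
    return err

# An overconfident model: 95% confidence but only 50% accuracy.
print(ece([0.95, 0.95, 0.95, 0.95], [1, 1, 0, 0]))  # ~0.45
```

    A perfectly calibrated model scores 0; "calibration improves with scale" then means this gap shrinks as models grow.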