8 research outputs found

    Relationship between molecular pathogen detection and clinical disease in febrile children across Europe: a multicentre, prospective observational study

    Background: The PERFORM study aimed to understand the causes of febrile childhood illness by comparing molecular pathogen detection with current clinical practice. Methods: Febrile children and controls were recruited on presentation to hospital in nine European countries between 2016 and 2020. Each child was assigned a standardized diagnostic category based on retrospective review of local clinical and microbiological data. Subsequently, centralised molecular tests (CMTs) for 19 respiratory and 27 blood pathogens were performed. Findings: Of 4611 febrile children, 643 (14%) were classified as definite bacterial infection (DB), 491 (11%) as definite viral infection (DV), and 3477 (75%) had uncertain aetiology. 1061 controls without infection were recruited. CMTs detected blood bacteria more frequently in DB than DV cases for N. meningitidis (OR 3.37, 95% CI 1.92-5.99), S. pneumoniae (OR 3.89, 95% CI 2.07-7.59), Group A streptococcus (OR 2.73, 95% CI 1.13-6.09), and E. coli (OR 2.70, 95% CI 1.02-6.71). Respiratory viruses were more common in febrile children than controls, but only influenza A (OR 0.24, 95% CI 0.11-0.46), influenza B (OR 0.12, 95% CI 0.02-0.37), and RSV (OR 0.16, 95% CI 0.06-0.36) were less common in DB than DV cases. Of 16 blood viruses, enterovirus (OR 0.43, 95% CI 0.23-0.72) and EBV (OR 0.71, 95% CI 0.56-0.90) were detected less often in DB than DV cases. Combined local diagnostics and CMTs detected blood viruses in 360 (56%) and respiratory viruses in 161 (25%) of DB cases, and virus detection ruled out bacterial infection poorly, with predictive values of 0.64 and 0.68, respectively. Interpretation: Most febrile children cannot be conclusively defined as having bacterial or viral infection when molecular tests supplement conventional approaches. Viruses are detected in most patients with bacterial infections, and the clinical value of individual pathogen detection in determining treatment is low. New approaches are needed to help determine which febrile children require antibiotics. Funding: EU Horizon 2020 grant 668303.
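
The odds ratios quoted above compare pathogen detection frequencies between DB and DV cases. As a minimal illustrative sketch (the counts and the Wald-approximation confidence interval below are assumptions for illustration, not figures or methods taken from the study), an OR and its 95% CI can be derived from a 2x2 table:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald-approximation 95% CI from a 2x2 table:
    a = pathogen detected in group 1, b = not detected in group 1,
    c = pathogen detected in group 2, d = not detected in group 2."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) under the Wald approximation
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical counts: pathogen detected in 30 of 643 DB cases
# and 7 of 491 DV cases (illustrative numbers only)
or_, lower, upper = odds_ratio_ci(30, 613, 7, 484)
print(f"OR {or_:.2f}, 95% CI {lower:.2f}-{upper:.2f}")
```

A CI whose lower bound exceeds 1 (as here) indicates the pathogen is detected significantly more often in the first group, which is how the DB-vs-DV comparisons above should be read.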

    La despensa nacional: Quinoa and the Spatial Contradictions of Peru’s Gastronomic Revolution

    This article explores the spatial politics of Peru’s gastronomic revolution and corresponding efforts to territorialize Peruvian agricultural products by tracing the spatial dynamics of quinoa’s trajectory from highland dietary staple to coveted national food. Efforts to codify Peru’s national cuisine have involved mapping ingredients and dishes onto specific regions while dramatically reshaping agricultural production geographies and culinary topographies. Because of quinoa’s success as a high-value export crop, the Peruvian altiplano is no longer perceived as a landscape useful exclusively for livestock pasture and mining. Instead, it is imagined as agriculturally productive: the country’s quinoa heartland. At the same time, quinoa’s trajectory illuminates spatial contradictions between the gastronomic boom’s purported objectives and its tangible effects. The revalorization of quinoa led to a geographical expansion of its production outside the high Andes, undermining the spatially bound concepts of authenticity promoted by gastronomic leaders in Peru. Broadly, efforts to commercialize marginalized food products and their corresponding regions can at once reconfigure territorial discourses in important ways, reinforce long-standing geographical inequalities, and generate contestations of the geographic imaginaries of food and nation.

    Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

    Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting. Comment: 27 pages, 17 figures + references and appendices, repo: https://github.com/google/BIG-benc