Evaluating statistical language models as pragmatic reasoners
The relationship between communicated language and intended meaning is often
probabilistic and sensitive to context. Numerous strategies attempt to estimate
such a mapping, often leveraging recursive Bayesian models of communication. In
parallel, large language models (LLMs) have been increasingly applied to
semantic parsing applications, tasked with inferring logical representations
from natural language. While existing LLM explorations have been largely
restricted to literal language use, in this work, we evaluate the capacity of
LLMs to infer the meanings of pragmatic utterances. Specifically, we explore
the case of threshold estimation on the gradable adjective "strong",
contextually conditioned on a strength prior, then extended to composition with
qualification, negation, polarity inversion, and class comparison. We find that
LLMs can derive context-grounded, human-like distributions over the
interpretations of several complex pragmatic utterances, yet struggle to compose
with negation. These results shed light on the inferential capacity of statistical
language models and inform their use in pragmatic and semantic parsing applications.
All corresponding code is made publicly available
(https://github.com/benlipkin/probsem/tree/CogSci2023).
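The threshold-estimation setup described above can be sketched concretely. The following is a minimal illustration in the spirit of degree-semantics models of gradable adjectives (e.g., Lassiter and Goodman, 2013), not the paper's code, which lives at the linked repository; the degree grid, the Gaussian strength prior, and the uniform threshold prior are all illustrative assumptions.

```python
import numpy as np

# Grid of candidate strength degrees and an assumed Gaussian strength prior.
degrees = np.linspace(0.0, 1.0, 101)
prior = np.exp(-0.5 * ((degrees - 0.5) / 0.15) ** 2)
prior /= prior.sum()

def literal_listener(theta):
    """P(degree | "strong", threshold theta): the prior truncated to degrees >= theta."""
    p = prior * (degrees >= theta)
    return p / p.sum()

# Marginalize over a uniform threshold prior (a simplification of the
# speaker-informed threshold inference used in full pragmatic models).
posterior = np.zeros_like(degrees)
for theta in degrees[:-1]:
    posterior += literal_listener(theta)
posterior /= posterior.sum()

print("Expected strength given 'strong':", (degrees * posterior).sum())
```

Composition with negation ("not strong") would flip the truncation to degrees below the threshold, which is exactly the step the abstract reports LLMs struggling with.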
Sequential Monte Carlo Steering of Large Language Models using Probabilistic Programs
Even after fine-tuning and reinforcement learning, large language models
(LLMs) can be difficult, if not impossible, to control reliably with prompts
alone. We propose a new inference-time approach to enforcing syntactic and
semantic constraints on the outputs of LLMs, called sequential Monte Carlo
(SMC) steering. The key idea is to specify language generation tasks as
posterior inference problems in a class of discrete probabilistic sequence
models, and replace standard decoding with sequential Monte Carlo inference.
For a computational cost similar to that of beam search, SMC can steer LLMs to
solve diverse tasks, including infilling, generation under syntactic
constraints, and prompt intersection. To facilitate experimentation with SMC
steering, we present a probabilistic programming library, LLaMPPL
(https://github.com/probcomp/hfppl), for concisely specifying new generation
tasks as language model probabilistic programs, and automating steering of
LLaMA-family Transformers.
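Schematically, SMC steering maintains a population of partial sequences, extends each from a proposal distribution, reweights by the constraint, and resamples. The toy below sketches only that loop; it is not the LLaMPPL API. The uniform proposal stands in for an LLM's next-token distribution, and the no-'c' constraint is a deliberately trivial placeholder.

```python
import math
import random

VOCAB = list("ab c")  # toy vocabulary standing in for an LLM's token set

def constraint_logweight(prefix):
    """Hard constraint (illustrative): sequences containing 'c' get weight zero."""
    return float("-inf") if "c" in prefix else 0.0

def smc_steer(n_particles=50, steps=8):
    particles = [""] * n_particles
    for _ in range(steps):
        # Extend each particle from the proposal (uniform here; in practice,
        # the LLM's next-token distribution).
        extended = [p + random.choice(VOCAB) for p in particles]
        # Incremental weight = target / proposal; with target = LM x constraint
        # and proposal = LM, only the constraint factor remains.
        weights = [math.exp(constraint_logweight(p)) for p in extended]
        if sum(weights) == 0:
            raise RuntimeError("all particles violated the constraint")
        # Multinomial resampling in proportion to the weights.
        particles = random.choices(extended, weights=weights, k=n_particles)
    return particles

print(smc_steer()[:5])
```

Resampling is what distinguishes this from beam search: low-weight continuations are pruned probabilistically rather than by a top-k cutoff, which preserves a posterior interpretation of the surviving particles.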
LILO: Learning Interpretable Libraries by Compressing and Documenting Code
While large language models (LLMs) now excel at code generation, a key aspect
of software development is the art of refactoring: consolidating code into
libraries of reusable and readable programs. In this paper, we introduce LILO,
a neurosymbolic framework that iteratively synthesizes, compresses, and
documents code to build libraries tailored to particular problem domains. LILO
combines LLM-guided program synthesis with recent algorithmic advances in
automated refactoring from Stitch: a symbolic compression system that
efficiently identifies optimal lambda abstractions across large code corpora.
To make these abstractions interpretable, we introduce an auto-documentation
(AutoDoc) procedure that infers natural language names and docstrings based on
contextual examples of usage. In addition to improving human readability, we
find that AutoDoc boosts performance by helping LILO's synthesizer to interpret
and deploy learned abstractions. We evaluate LILO on three inductive program
synthesis benchmarks for string editing, scene reasoning, and graphics
composition. Compared to existing neural and symbolic methods, including the
state-of-the-art library learning algorithm DreamCoder, LILO solves more
complex tasks and learns richer libraries that are grounded in linguistic
knowledge.
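The synthesize-compress-document loop named in the abstract can be outlined as below. Every helper is a hypothetical stand-in labeled after the phase it represents; none of this reflects the real LILO, Stitch, or AutoDoc implementations.

```python
def synthesize(tasks, library):
    """Stand-in for LLM-guided synthesis: propose one program per task."""
    return {t: f"(solve {t} with {len(library)} abstractions)" for t in tasks}

def compress(programs):
    """Stand-in for Stitch-style compression: extract shared abstractions."""
    return [f"abstraction_{i}" for i, _ in enumerate(programs)]

def autodoc(library):
    """Stand-in for AutoDoc: infer a name and docstring from usage examples."""
    return {a: f"{a}: docstring inferred from contextual usage" for a in library}

tasks = ["edit-string", "reason-about-scene", "compose-graphic"]
library, docs = [], {}
for _ in range(3):  # synthesize -> compress -> document, then repeat
    programs = synthesize(tasks, library)
    library = compress(programs)
    docs = autodoc(library)
print(docs)
```

The point of the loop structure is that documentation feeds back into synthesis: named, documented abstractions are easier for the LLM synthesizer to deploy on the next round, which is the AutoDoc performance effect the abstract reports.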
From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought
How does language inform our downstream thinking? In particular, how do
humans make meaning from language, and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational
framework for language-informed thinking that combines neural models of
language with probabilistic models for rational inference. We frame linguistic
meaning as a context-sensitive mapping from natural language into a
probabilistic language of thought (PLoT), a general-purpose symbolic
substrate for probabilistic, generative world modeling. Our architecture
integrates two powerful computational tools that have not previously come
together: we model thinking with probabilistic programs, an expressive
representation for flexible commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support
broad-coverage translation from natural language utterances to code expressions
in a probabilistic programming language. We illustrate our framework in action
through examples covering four core domains from cognitive science:
probabilistic reasoning, logical and relational reasoning, visual and physical
reasoning, and social reasoning about agents and their plans. In each, we show
that LLMs can generate context-sensitive translations that capture
pragmatically appropriate linguistic meanings, while Bayesian inference with
the generated programs supports coherent and robust commonsense reasoning. We
extend our framework to integrate cognitively motivated symbolic modules to
provide a unified commonsense thinking interface from language. Finally, we
explore how language can drive the construction of world models themselves.
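As a concrete illustration of the proposed pipeline: an LLM translates an utterance into a condition on a generative world model, and Bayesian inference then answers queries under that condition. The sketch below hard-codes a hypothetical translation and uses plain-Python rejection sampling; the paper targets a probabilistic programming language, so the world model, the utterance, and the condition here are all illustrative assumptions.

```python
import random

def world_model():
    """Generative world model: a coin of unknown weight, flipped ten times."""
    weight = random.random()
    flips = [random.random() < weight for _ in range(10)]
    return weight, flips

# Hypothetical LLM translation of the utterance "the coin almost always
# lands heads" into a condition on the world model's execution trace.
condition = lambda weight, flips: sum(flips) >= 9

# Bayesian inference by rejection sampling under the translated condition.
samples = []
while len(samples) < 1000:
    weight, flips = world_model()
    if condition(weight, flips):
        samples.append(weight)

print("Posterior mean coin weight:", round(sum(samples) / len(samples), 3))
```

Context-sensitivity enters through the translation step: the same generative model supports many utterances, each compiled to a different condition, while the inference machinery stays fixed.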
Chikungunya Disease: Infection-Associated Markers from the Acute to the Chronic Phase of Arbovirus-Induced Arthralgia
At the end of 2005, an outbreak of fever associated with joint pain occurred in La Réunion. The causal agent, chikungunya virus (CHIKV), has been known for 50 years and could thus be readily identified. This arbovirus is present worldwide, particularly in India, but also in Europe, with new variants returning to Africa. In humans, it causes a disease characterized by a typical acute infection, sometimes followed by persistent arthralgia and myalgia lasting months or years. Investigations in the La Réunion cohort and studies in a macaque model of chikungunya implicated monocytes-macrophages in viral persistence. In this Review, we consider the relationship between CHIKV and the immune response and discuss predictive factors for chronic arthralgia and myalgia by providing an overview of current knowledge on chikungunya pathogenesis. Comparisons of data from animal models of the acute and chronic phases of infection, and data from clinical series, provide information about the mechanisms of CHIKV infection–associated inflammation, viral persistence in monocytes-macrophages, and their link to chronic signs.
More rapid blood interferon α2 decline in fatal versus surviving COVID-19 patients
Background: The clinical outcome of COVID-19 pneumonia is highly variable, and few biological predictive factors have been identified. Genetic and immunological studies suggest that type 1 interferons (IFN) are essential to control SARS-CoV-2 infection.
Objective: To study the link between the change in blood IFN-α2 level and plasma SARS-CoV-2 viral load over time and subsequent death in patients with severe and critical COVID-19.
Methods: One hundred and forty patients from the CORIMUNO-19 cohort hospitalized with severe or critical COVID-19 pneumonia, all requiring oxygen or ventilation, were prospectively studied. Blood IFN-α2 was evaluated using Single Molecule Array technology. Anti-IFN-α2 auto-antibodies were determined with a reporter luciferase assay. Plasma SARS-CoV-2 viral load was measured using droplet digital PCR targeting the nucleocapsid gene of the SARS-CoV-2 positive-strand RNA genome.
Results: Although the percentage of plasmacytoid dendritic cells was low, the blood IFN-α2 level was higher in patients than in healthy controls and was correlated with SARS-CoV-2 plasma viral load at entry. Neutralizing anti-IFN-α2 auto-antibodies were detected in 5% of patients and were associated with a lower baseline level of blood IFN-α2. A longitudinal analysis found that blood IFN-α2 declined more rapidly in fatal than in surviving patients: mortality HR = 3.15 (95% CI 1.14–8.66) for rapid versus slow decliners. Likewise, a high level of plasma SARS-CoV-2 RNA was associated with death risk in patients with severe COVID-19.
Conclusion: These findings suggest that type 1 IFN treatment, possibly combined with anti-inflammatory drugs, may be worth evaluating in patients with severe COVID-19 and declining type 1 IFN.
Clinical trial registration: https://clinicaltrials.gov, identifiers NCT04324073, NCT04331808, NCT04341584.
Discussing environmental education in everyday school life: developing school projects and initial and continuing teacher education
This study examined how Environmental Education (EE) is being addressed in elementary education at a state school in the municipality of Tangará da Serra/MT, Brazil, and how the school's teachers understand and incorporate EE into everyday school life. To this end, interviews were conducted with the teachers taking part in an interdisciplinary EE project at the school. The school's project was found to be falling short of its stated objectives because teachers were unaware of it, teacher training was deficient, EE was not understood as a teaching-learning process, teaching resources were lacking, and activities were inadequately planned. Building on these findings, we discuss why the topic cannot be addressed outside of interdisciplinary work and, above all, the importance of deeper study of EE, linking theory and practice, both in teacher education and in school projects, so as to move beyond the traditional "EE means ecology, waste, and vegetable gardens" framing.