842 research outputs found

    From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought

    How does language inform our downstream thinking? In particular, how do humans make meaning from language -- and how can we leverage a theory of linguistic meaning to build machines that think in more human-like ways? In this paper, we propose rational meaning construction, a computational framework for language-informed thinking that combines neural models of language with probabilistic models for rational inference. We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought (PLoT) -- a general-purpose symbolic substrate for probabilistic, generative world modeling. Our architecture integrates two powerful computational tools that have not previously come together: we model thinking with probabilistic programs, an expressive representation for flexible commonsense reasoning; and we model meaning construction with large language models (LLMs), which support broad-coverage translation from natural language utterances to code expressions in a probabilistic programming language. We illustrate our framework in action through examples covering four core domains from cognitive science: probabilistic reasoning, logical and relational reasoning, visual and physical reasoning, and social reasoning about agents and their plans. In each, we show that LLMs can generate context-sensitive translations that capture pragmatically appropriate linguistic meanings, while Bayesian inference with the generated programs supports coherent and robust commonsense reasoning. We extend our framework to integrate cognitively-motivated symbolic modules to provide a unified commonsense thinking interface from language. Finally, we explore how language can drive the construction of world models themselves.
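The core pipeline this abstract describes -- map an utterance to a condition on a generative world model, then answer questions by Bayesian inference over the conditioned model -- can be sketched minimally. This is an illustrative toy, not the paper's actual system: the utterance, its translation, and the rejection-sampling inference are all assumptions for the example.

```python
import random

# Sketch: the utterance "the coin came up heads at least 8 of 10 times"
# is (hypothetically) translated into a condition on a generative model.

def world_model(rng):
    weight = rng.random()                           # prior over coin weight
    heads = sum(rng.random() < weight for _ in range(10))
    return weight, heads

def condition(heads):
    return heads >= 8                               # meaning of the utterance

def posterior_mean_weight(n_samples=50000, seed=0):
    # Rejection sampling: keep only worlds consistent with the utterance.
    rng = random.Random(seed)
    accepted = [w for w, h in (world_model(rng) for _ in range(n_samples))
                if condition(h)]
    return sum(accepted) / len(accepted)

print(round(posterior_mean_weight(), 2))            # about 0.83
```

The point of the framework is that the translation step (utterance to `condition`) is done by an LLM, while the inference step is handled by a probabilistic programming runtime rather than by the LLM itself.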

    Interpretation of Natural-language Robot Instructions: Probabilistic Knowledge Representation, Learning, and Reasoning

    A robot that can simply be told in natural language what to do -- this has been one of the ultimate long-standing goals in both Artificial Intelligence and Robotics research. In near-future applications, robotic assistants and companions will have to understand and perform commands such as “set the table for dinner”, “make pancakes for breakfast”, or “cut the pizza into 8 pieces”. Although such instructions are only vaguely formulated, complex sequences of sophisticated and accurate manipulation activities need to be carried out in order to accomplish the respective tasks. The acquisition of knowledge about how to perform these activities from huge collections of natural-language instructions from the Internet has garnered a lot of attention within the last decade. However, natural language is typically massively unspecific, incomplete, ambiguous and vague, and thus requires powerful means for interpretation. This work presents PRAC -- Probabilistic Action Cores -- an interpreter for natural-language instructions which is able to resolve vagueness and ambiguity in natural language and infer missing information pieces that are required to render an instruction executable by a robot. To this end, PRAC formulates the problem of instruction interpretation as a reasoning problem in first-order probabilistic knowledge bases. In particular, the system uses Markov logic networks as a carrier formalism for encoding uncertain knowledge. A novel framework for reasoning about unmodeled symbolic concepts is introduced, which incorporates ontological knowledge from taxonomies and exploits semantically similar relational structures in a domain of discourse. The resulting reasoning framework thus enables more compact representations of knowledge and exhibits strong generalization performance when learnt from very sparse data.
Furthermore, a novel approach for completing directives is presented, which applies semantic analogical reasoning to transfer knowledge collected from thousands of natural-language instruction sheets to new situations. In addition, a cohesive processing pipeline is described that transforms vague and incomplete task formulations into sequences of formally specified robot plans. The system is connected to a plan executive that is able to execute the computed plans in a simulator. Experiments conducted in a publicly accessible, browser-based web interface showcase that PRAC is capable of closing the loop from natural-language instructions to their execution by a robot.
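The Markov-logic mechanism PRAC builds on can be illustrated in miniature: worlds assign truth values to ground atoms, and a world's probability is proportional to the exponentiated sum of the weights of the weighted formulas it satisfies. The atoms, formulas, and weights below are invented for the example ("which instrument does 'flip the pancake' imply?"), not taken from PRAC's knowledge base.

```python
import itertools
import math

# Toy Markov-logic-style inference: P(world) ∝ exp(Σ weights of satisfied formulas).
atoms = ["uses_spatula", "uses_knife"]

# Hypothetical weighted formulas for the instruction "flip the pancake".
formulas = [
    (2.0, lambda w: w["uses_spatula"]),                             # spatulas flip pancakes
    (0.5, lambda w: w["uses_knife"]),                               # knives occasionally do
    (1.5, lambda w: not (w["uses_spatula"] and w["uses_knife"])),   # rarely both at once
]

def score(world):
    return math.exp(sum(wt for wt, f in formulas if f(world)))

worlds = [dict(zip(atoms, vals))
          for vals in itertools.product([False, True], repeat=len(atoms))]
z = sum(score(w) for w in worlds)                   # partition function

def marginal(atom):
    return sum(score(w) for w in worlds if w[atom]) / z

print(round(marginal("uses_spatula"), 2))           # 0.79
```

Real MLN engines avoid this brute-force world enumeration, but the probability model is the same: the missing instrument is inferred as the one with the highest marginal under the weighted knowledge base.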

    Semantics of fuzzy quantifiers

    The aim of this thesis is to discuss the semantics of FQs (fuzzy quantifiers), formal semantics in particular. The approach used is fuzzy semantics based on fuzzy set theory (Zadeh 1965, 1975), i.e. we explore primarily the denotational meaning of FQs represented by membership functions. Some empirical data from both Chinese and English is used for illustration. A distinguishing characteristic of the semantics of FQs like about 200 students and many students, as opposed to other sorts of quantifiers like every student and no students, is that they have fuzzy meaning boundaries. There is considerable evidence to suggest that the doctrine that a proposition is either true or false has limited application in natural languages, which raises a serious question for any linguistic theory based on a binary assumption. In other words, the number of elements in a domain that must satisfy a predicate is not precisely given by an FQ, and so a proposition containing one may be more or less true depending on how closely the number of elements approximates a given norm. The most significant conclusion drawn here is that FQs are compositional in that FQs of the same type function in the same way to generate a constant semantic pattern. It is argued that although basic membership functions are subject to modification depending on context, they vary only within certain limits (i.e. FQs are motivated: neither completely predictable nor completely arbitrary), which does not deny compositionality in any way. A distinctive combination of compositionality and motivation of FQs makes my formal semantic framework of FQs unique in that although some specific values, such as a norm, have to be determined pragmatically, semantic and inferential patterns are systematic and predictable. A number of interdisciplinary implications, semantic, general-linguistic, logical and psychological, are discussed.
The study addresses a somewhat troublesome but potentially important area for developing theories (and machines) capable of dealing with, and accounting for, natural languages.
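The idea of representing an FQ's denotation as a membership function can be made concrete with two toy functions. The shapes and parameters below (a Gaussian for "about 200", a logistic S-curve for "many") are illustrative assumptions, not the thesis's actual functions -- in the thesis's terms, the norm and limits would be fixed pragmatically per context.

```python
import math

# Hypothetical membership function for "about 200": peaks at the norm 200
# and decays symmetrically; the width 30 is an assumed contextual parameter.
def about_200(n):
    return math.exp(-((n - 200) / 30.0) ** 2)

# Hypothetical membership function for "many": rises monotonically in the
# proportion of the domain satisfying the predicate.
def many(n, domain_size):
    p = n / domain_size
    return 1.0 / (1.0 + math.exp(-20 * (p - 0.5)))

# "About 200 students attended" is truer of 195 attendees than of 150:
print(about_200(195) > about_200(150))   # True
# With 500 students overall, 400 counts as "many" more than 150 does:
print(many(400, 500) > many(150, 500))   # True
```

This captures the abstract's central point: a proposition containing an FQ is more or less true (a degree in [0, 1]) rather than simply true or false, yet the functional pattern for each type of FQ is systematic.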

    A Stalnakerian Analysis of Metafictive Statements


    Human reasoning and cognitive science

    In the late summer of 1998, the authors, a cognitive scientist and a logician, started talking about the relevance of modern mathematical logic to the study of human reasoning, and we have been talking ever since. This book is an interim report of that conversation. It argues that results such as those on the Wason selection task, purportedly showing the irrelevance of formal logic to actual human reasoning, have been widely misinterpreted, mainly because the picture of logic current in psychology and cognitive science is completely mistaken. We aim to give the reader a more accurate picture of mathematical logic and, in doing so, hope to show that logic, properly conceived, is still a very helpful tool in cognitive science. The main thrust of the book is therefore constructive. We give a number of examples in which logical theorizing helps in understanding and modeling observed behavior in reasoning tasks, deviations of that behavior in a psychiatric disorder (autism), and even the roots of that behavior in the evolution of the brain.
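For readers unfamiliar with the Wason selection task mentioned above, its classical normative answer can be stated as a falsification check on a material conditional. This sketch illustrates the logical point only; it is not the book's own analysis of the task.

```python
# Wason selection task. Rule: "if a card has a vowel on one side,
# it has an even number on the other." Which cards must be turned over?

cards = {"A": "vowel", "K": "consonant", "4": "even", "7": "odd"}

def could_falsify(visible_face):
    # Only a visible vowel (hidden number might be odd) or a visible odd
    # number (hidden letter might be a vowel) can falsify the conditional.
    return visible_face in ("vowel", "odd")

must_turn = [card for card, face in cards.items() if could_falsify(face)]
print(must_turn)   # ['A', '7'] -- the classically correct selection
```

Most participants instead choose A and 4; the book's thesis is that interpreting this as the irrelevance of logic rests on a mistaken picture of what logic is.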

    Dwelling on ontology - semantic reasoning over topographic maps

    The thesis builds upon the hypothesis that the spatial arrangement of topographic features, such as buildings, roads and other land cover parcels, indicates how land is used. The aim is to make this kind of high-level semantic information explicit within topographic data. There is an increasing need to share and use data for a wider range of purposes, and to make data more definitive, intelligent and accessible. Unfortunately, we still encounter a gap between low-level data representations and high-level concepts that typify human qualitative spatial reasoning. The thesis adopts an ontological approach to bridge this gap and to derive functional information by using standard reasoning mechanisms offered by logic-based knowledge representation formalisms. It formulates a framework for the processes involved in interpreting land use information from topographic maps. Land use is a high-level abstract concept, but it is also an observable fact intimately tied to geography. By decomposing this relationship, the thesis correlates a one-to-one mapping between high-level conceptualisations established from human knowledge and real world entities represented in the data. Based on a middle-out approach, it develops a conceptual model that incrementally links different levels of detail, and thereby derives coarser, more meaningful descriptions from more detailed ones. The thesis verifies its proposed ideas by implementing an ontology describing the land use ‘residential area’ in the ontology editor Protégé. By asserting knowledge about high-level concepts such as types of dwellings, urban blocks and residential districts as well as individuals that link directly to topographic features stored in the database, the reasoner successfully infers instances of the defined classes. 
Despite current technological limitations, ontologies are a promising way forward in the manner we handle and integrate geographic data, especially with respect to how humans conceptualise geographic space.
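The class-inference step the thesis performs in Protégé -- a reasoner classifying topographic features under a defined class such as "residential area" -- can be sketched with toy rules. The feature records, property names, and thresholds below are invented for illustration and differ from the thesis's actual ontology.

```python
# Toy description-logic-style classification: infer instances of a defined
# class ("residential urban block") from low-level topographic attributes.

features = [
    {"id": "block1", "buildings": 24, "building_type": "dwelling"},
    {"id": "block2", "buildings": 3,  "building_type": "warehouse"},
    {"id": "block3", "buildings": 12, "building_type": "dwelling"},
]

def is_residential_block(f):
    # Assumed necessary-and-sufficient conditions: the block's buildings
    # are dwellings and there are at least ten of them.
    return f["building_type"] == "dwelling" and f["buildings"] >= 10

inferred = [f["id"] for f in features if is_residential_block(f)]
print(inferred)   # ['block1', 'block3']
```

In the thesis's middle-out approach the inferred classes at one level (dwellings, urban blocks) become the asserted input for the next coarser level (residential districts), which is what makes the high-level land-use concept derivable from raw map data.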

    The harmful feature of generics

    Various experimental studies (especially Gelman et al. 2010, and Rhodes et al. 2012) provided evidence that generics (namely, sentences like “birds fly” and “ducks lay eggs”) promote the essentialization of the categories they are about. That is, generics lead one to believe that the categories they are about have an underlying nature responsible for the similarities among the category members. In this dissertation, I’m interested in what linguistic features of generics, if any, make them particularly suited to promote the essentialization of the categories they are about. To do that, I rely on Sally Haslanger’s (2011, 2012, 2014) proposal, according to which generics convey that the connection between the members of the category (Ks) and the predicated property (F) “holds primarily by virtue of some important fact about the Ks as such” (Haslanger 2012: 450). In the first chapter, I provide some linguistic background and identify the target of my research. I’m not concerned with all the sentences that have been referred to as “generic”, but only with what Bernhard Nickel (2016) calls “characteristic generics”, and only with those that have a Bare Plural subject NP. In the second chapter, I present Ariel Cohen’s (1996) semantics of generics. His theory accounts for the statistical variability of generics on probabilistic grounds. As I will argue, it is a merit of this theory that it makes no reference to normality. The third chapter is devoted to the topic of essentialism. I present Leslie’s hypothesis that generics foster the essentialization of the categories they are about and some empirical evidence supporting it. Then, I present Jennifer Saul’s (2017) objection to these experiments and argue that a better understanding of the phenomenon is needed. Finally, I take into account Haslanger’s proposal.
I show how it can account for two phenomena: the promotion of essentialization and the different generalizations generics can convey. I conclude the chapter by pointing out that Haslanger doesn’t take a stand on whether the robustness proposition is a presupposition or an implicature. Investigating this point is the main aim of the fourth chapter. Here I introduce presuppositions and implicatures with their features and distinctions. Then, I apply the linguistic tests, concluding that the robustness proposition is a generalized conversational implicature. I proceed by presenting Levinson’s theory, which I employ to explain how the implicature arises. In the last section, I discuss the explanatory implicature. The fifth and last chapter explores alternative explanations and the consequences of the results of chapter four. I exclude that the robustness proposition is implicated by utterances involving kind terms, and I argue that it is derived through abduction with some quantified sentences. I then take into account the hypothesis, predicted by Levinson’s theory, that quantified sentences convey an implicature complementary to the one of generics. I show that this is not the case and that quantified sentences are not the only marked forms lacking the complementary implicature predicted by Levinson’s principles: technical terms and extended expressions do not convey it either. Based on these data, I propose a revision of Levinson’s M-principle which, as I show, does not predict that quantified sentences, technical terms, and extended expressions convey a complementary implicature. I conclude the chapter by motivating why the robustness proposition cannot be a clausal implicature.
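Cohen's probabilistic semantics, as invoked in the second chapter, can be illustrated with a simplified version of one of its truth conditions: a generic "Ks F" comes out true when the probability of F among the Ks exceeds the relevant baseline probability of F. The numbers below are toy frequencies chosen for the example, and this simplification flattens the role Cohen's alternatives play.

```python
# Simplified Cohen-style truth condition for a characteristic generic:
# "Ks F" is true iff P(F | K) exceeds the baseline probability of F
# among the relevant alternatives (toy probabilities, not real data).

def generic_true(p_f_given_k, p_f_baseline):
    return p_f_given_k > p_f_baseline

# "Birds fly": most birds fly, and flying is rarer among animals generally.
print(generic_true(0.85, 0.30))   # True
# "Birds are female": about half of birds are, but so are animals generally,
# so no probability is raised and the generic is predicted false.
print(generic_true(0.50, 0.50))   # False
```

The contrast shows why the account needs no appeal to normality: truth turns on raised probability relative to alternatives, not on what a "normal" K is like.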