    From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought

    How does language inform our downstream thinking? In particular, how do humans make meaning from language -- and how can we leverage a theory of linguistic meaning to build machines that think in more human-like ways? In this paper, we propose rational meaning construction, a computational framework for language-informed thinking that combines neural models of language with probabilistic models for rational inference. We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought (PLoT) -- a general-purpose symbolic substrate for probabilistic, generative world modeling. Our architecture integrates two powerful computational tools that have not previously come together: we model thinking with probabilistic programs, an expressive representation for flexible commonsense reasoning; and we model meaning construction with large language models (LLMs), which support broad-coverage translation from natural language utterances to code expressions in a probabilistic programming language. We illustrate our framework in action through examples covering four core domains from cognitive science: probabilistic reasoning, logical and relational reasoning, visual and physical reasoning, and social reasoning about agents and their plans. In each, we show that LLMs can generate context-sensitive translations that capture pragmatically-appropriate linguistic meanings, while Bayesian inference with the generated programs supports coherent and robust commonsense reasoning. We extend our framework to integrate cognitively-motivated symbolic modules to provide a unified commonsense thinking interface from language. Finally, we explore how language can drive the construction of world models themselves.
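
    As a concrete illustration of the pipeline this abstract describes (an LLM translates an utterance into a probabilistic program, and Bayesian inference is then run over that program), here is a minimal sketch in plain Python. The utterance, the generative model and the rejection-sampling inference are illustrative stand-ins, not the authors' implementation; in the framework itself the program would be emitted by the LLM in a probabilistic programming language rather than written by hand.

# Minimal sketch (not the authors' system). An utterance such as
# "I saw 8 heads in 10 flips of this coin" might be translated by an LLM into
# a small generative program like the one below; posterior inference is then
# done here by naive rejection sampling. All names and numbers are illustrative.
import random

def generative_model():
    bias = random.random()                                  # prior over the coin's bias
    heads = sum(random.random() < bias for _ in range(10))  # simulate the described experiment
    return bias, heads

def posterior_mean_bias(observed_heads=8, samples=100_000):
    draws = (generative_model() for _ in range(samples))
    accepted = [bias for bias, heads in draws if heads == observed_heads]
    return sum(accepted) / len(accepted)

if __name__ == "__main__":
    print(posterior_mean_bias())  # roughly 0.75: the belief the utterance should induce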

    Stability and Strategic Time-Dependent Behaviour in Multiagent Systems

    Temporal reasoning and strategic behaviour are important abilities of multiagent systems. We introduce a game-theoretic framework suitable for modelling selfish, rational agents that can store and reason about the evolution of an environment and act according to their interests. Our aim is to identify stable interactions: those in which no agent benefits from changing its behaviour to another. To this end we deploy the game-theoretic concepts of Nash equilibrium and strong Nash equilibrium. We show that not all agent interactions can be stable. We also investigate the computational complexity of verifying stable agent interactions and of checking their existence. This paves the way for developing agents that can take appropriate decisions in competitive and strategic situations.
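
    The stability notion used here can be illustrated with a toy, one-shot game (the paper's setting, with temporal behaviour and strong Nash equilibria, is considerably richer): a pure strategy profile is a Nash equilibrium when no single agent gains by deviating unilaterally. A minimal Python sketch with a made-up payoff table:

from itertools import product

# payoffs[(a1, a2)] = (payoff to agent 1, payoff to agent 2); values are made up
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
actions = ["cooperate", "defect"]

def is_nash(profile):
    a1, a2 = profile
    u1, u2 = payoffs[profile]
    # each agent deviates alone while the other's action stays fixed
    no_gain_1 = all(payoffs[(d, a2)][0] <= u1 for d in actions)
    no_gain_2 = all(payoffs[(a1, d)][1] <= u2 for d in actions)
    return no_gain_1 and no_gain_2

print([p for p in product(actions, actions) if is_nash(p)])  # [('defect', 'defect')]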

    Conditional Partial Plans for Rational Situated Agents Capable of Deductive Reasoning and Inductive Learning

    Rational, autonomous agents that are able to achieve their goals in dynamic, partially observable environments have been the ultimate dream of Artificial Intelligence research since its beginning. The goal of this PhD thesis is to propose, develop and evaluate a framework well suited to creating intelligent agents that are able to learn from experience, thus becoming more efficient at solving their tasks. We aim to create an agent able to function in adverse environments that it only partially understands. We are convinced that symbolic knowledge representations are the best way to achieve such versatility. In order to balance deliberation and acting, our agent needs to be time-aware, i.e. it needs the means to estimate its own reasoning and acting time. One of the crucial challenges is to ensure smooth interaction between the agent's internal reasoning mechanism and the learning system used to improve its behaviour. To address it, our agent will create several different conditional partial plans and reason about the potential usefulness of each one. Moreover, it will generalise whatever experience it gathers and use it when solving subsequent, similar problem instances. In this thesis we present, at the conceptual level, an architecture for rational agents, as well as implementation-based experimental results confirming that successful lifelong learning by an autonomous artificial agent can be achieved with it.
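
    A highly simplified sketch of this kind of time-aware plan choice, with invented names and numbers and no claim to reproduce the thesis architecture: the agent weighs each candidate conditional partial plan's estimated usefulness against the time it can afford to spend on reasoning and acting.

from dataclasses import dataclass

@dataclass
class PartialPlan:
    name: str
    expected_value: float   # learned/estimated usefulness of the plan
    estimated_time: float   # estimated reasoning + acting time, in seconds

def choose_plan(plans, time_budget):
    # commit to the most useful plan that still fits the remaining time budget
    feasible = [p for p in plans if p.estimated_time <= time_budget]
    return max(feasible, key=lambda p: p.expected_value, default=None)

plans = [PartialPlan("detour", 0.9, 12.0), PartialPlan("direct", 0.6, 4.0)]
print(choose_plan(plans, time_budget=5.0).name)  # 'direct': the best plan that fits the budget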

    Reasoning about Cognitive Trust in Stochastic Multiagent Systems

    We consider the setting of stochastic multiagent systems modelled as stochastic multiplayer games and formulate an automated verification framework for quantifying and reasoning about agents’ trust. To capture human trust, we work with a cognitive notion of trust defined as a subjective evaluation that agent A makes about agent B’s ability to complete a task, which in turn may lead to a decision by A to rely on B. We propose a probabilistic rational temporal logic PRTL*, which extends the probabilistic computation tree logic PCTL* with reasoning about mental attitudes (beliefs, goals, and intentions) and includes novel operators that can express concepts of social trust such as competence, disposition, and dependence. The logic can express, for example, that “agent A will eventually trust agent B with probability at least p that B will behave in a way that ensures the successful completion of a given task.” We study the complexity of the automated verification problem and, while the general problem is undecidable, we identify restrictions on the logic and the system that result in decidable, or even tractable, subproblems.
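
    The quantitative core of such properties can be illustrated with a toy computation (this is not the PRTL* model-checking procedure itself): the probability that a task is eventually completed in a small Markov chain, obtained by value iteration on the reachability equations. The chain and its numbers are invented.

def reachability(transitions, goal, iterations=1000):
    # transitions[s] = list of (successor, probability); goal = set of target states
    prob = {s: (1.0 if s in goal else 0.0) for s in transitions}
    for _ in range(iterations):
        for s in transitions:
            if s not in goal:
                prob[s] = sum(p * prob[t] for t, p in transitions[s])
    return prob

chain = {
    "try":  [("done", 0.8), ("fail", 0.1), ("try", 0.1)],
    "done": [("done", 1.0)],
    "fail": [("fail", 1.0)],
}
print(reachability(chain, {"done"})["try"])  # ~0.889: P(task eventually completed from "try")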

    One standard to rule them all?

    It has been argued that an epistemically rational agent’s evidence is subjectively mediated through some rational epistemic standards, and that there are incompatible but equally rational epistemic standards available to agents. This supports Permissiveness, the view according to which one or more fully rational agents are permitted to take distinct, incompatible doxastic attitudes towards P (relative to a body of evidence). In this paper, I argue that the above claims entail the existence of a unique and more reliable epistemic standard. My strategy relies on Condorcet’s Jury Theorem. This gives rise to an important problem for those who argue that epistemic standards are permissive, since the reliability criterion is incompatible with this type of Permissiveness.
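
    For reference, the standard statement of Condorcet’s Jury Theorem that this strategy appeals to (under the usual assumptions: n odd, agents deciding independently, each correct with probability p > 1/2) says that the probability of the majority verdict being correct is

    P_n = \sum_{k=(n+1)/2}^{n} \binom{n}{k} \, p^{k} (1-p)^{n-k},

    which exceeds p for every odd n >= 3 and tends to 1 as n \to \infty. On these assumptions, aggregating the verdicts of the individually rational standards is more reliable than following any single one of them.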

    Evidence of Evidence as Higher Order Evidence

    In everyday life and in science we acquire evidence of evidence and, based on this new evidence, we often change our epistemic states. An assumption underlying such practice is that the following EEE Slogan is correct: 'evidence of evidence is evidence' (Feldman 2007, p. 208). We suggest that evidence of evidence is best understood as higher-order evidence about the epistemic state of agents. In order to model evidence of evidence we introduce a new, powerful framework for modelling epistemic states, Dyadic Bayesianism. Based on this framework, we then discuss characterizations of evidence of evidence and argue for one of them. Finally, we show that whether the EEE Slogan holds depends on the specific kind of evidence of evidence.
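
    A schematic Bayesian illustration (not the paper's Dyadic Bayesianism, whose details are not reproduced here): treat R = "agent B reports possessing evidence for H" as ordinary evidence about B's epistemic state and update by Bayes' theorem,

    P(H \mid R) = \frac{P(R \mid H) \, P(H)}{P(R \mid H) \, P(H) + P(R \mid \neg H) \, P(\neg H)}.

    With illustrative numbers P(H) = 0.5, P(R \mid H) = 0.7 and P(R \mid \neg H) = 0.2, this gives P(H \mid R) = 0.35 / 0.45 ≈ 0.78; the report confirms H exactly when P(R \mid H) > P(R \mid \neg H), which is one way of seeing why the EEE Slogan can hold for some kinds of evidence of evidence and fail for others.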

    Belief Change in Reasoning Agents: Axiomatizations, Semantics and Computations

    The capability of changing beliefs upon new information in a rational and efficient way is crucial for an intelligent agent, and belief change has therefore been one of the central research fields in Artificial Intelligence (AI) for over two decades. In the AI literature, two different kinds of belief change operation have been intensively investigated: belief update, which deals with situations where the new information describes changes of the world; and belief revision, which assumes the world is static. As another important research area in AI, reasoning about actions mainly studies the problem of representing and reasoning about the effects of actions. These two research fields are closely related and apply a common underlying principle: an agent should change its beliefs (knowledge) as little as possible whenever an adjustment is necessary. This opens up the possibility of reusing the ideas and results of one field in the other, and vice versa. This thesis aims to develop a general framework and devise computational models that are applicable in reasoning about actions. Firstly, I shall propose a new framework for iterated belief revision by introducing a new postulate to the existing AGM/DP postulates, which provides general criteria for the design of iterated revision operators. Secondly, based on the new framework, a concrete iterated revision operator is devised. The semantic model of the operator gives clear intuitions and helps to show that it satisfies the desirable postulates. I also show that the computational model of the operator is almost optimal in time and space complexity. In order to deal with the belief change problem in multi-agent systems, I introduce the concept of mutual belief revision, which is concerned with information exchange among agents. A concrete mutual revision operator is devised by generalizing the iterated revision operator. Likewise, a semantic model is used to show the intuition behind the mutual revision operator and its many desirable properties, and the complexity of its computational model is formally analyzed. Finally, I present a belief update operator which takes into account two important problems of reasoning about action, i.e., disjunctive updates and domain constraints. Again, the update operator is presented with both a semantic model and a computational model.
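
    To make the minimal-change principle concrete, here is a sketch of a classic distance-based revision operator in the AGM spirit (illustrative only; it is not the operator developed in the thesis): the revised belief set keeps exactly those models of the new information that lie closest, in Hamming distance, to some model of the old beliefs.

def hamming(m1, m2):
    # number of atoms on which two truth assignments (dicts) disagree
    return sum(m1[v] != m2[v] for v in m1)

def revise(belief_models, new_models):
    # keep the models of the new information closest to the old belief models
    if not belief_models:
        return set(new_models)
    dist = {m: min(hamming(dict(m), dict(b)) for b in belief_models) for m in new_models}
    best = min(dist.values())
    return {m for m in new_models if dist[m] == best}

# models are frozensets of (atom, value) pairs over the atoms p and q
old = [frozenset({("p", True), ("q", True)})]    # currently believe p and q
new = [frozenset({("p", False), ("q", True)}),   # learn "not p"
       frozenset({("p", False), ("q", False)})]
print(revise(old, new))  # keeps only the model where q is still true: minimal change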

    The heuristic conception of inference to the best explanation

    An influential suggestion about the relationship between Bayesianism and inference to the best explanation holds that IBE functions as a heuristic to approximate Bayesian reasoning. While this view promises to unify Bayesianism and IBE in a very attractive manner, important elements of the view have not yet been spelled out in detail. I present and argue for a heuristic conception of IBE on which IBE serves primarily to locate the most probable available explanatory hypothesis, which then serves as a working hypothesis in an agent’s further investigations. Along the way, I criticize what I consider to be an overly ambitious conception of the heuristic role of IBE, according to which IBE serves as a guide to absolute probability values. My own conception, by contrast, requires only that IBE can function as a guide to the comparative probability values of available hypotheses. This is shown to be a much more realistic role for IBE given the nature and limitations of the explanatory considerations with which IBE operates.
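
    The comparative reading rests on a standard Bayesian identity: ranking the available hypotheses requires only ratios, never absolute probability values,

    \frac{P(H_1 \mid E)}{P(H_2 \mid E)} = \frac{P(E \mid H_1)}{P(E \mid H_2)} \cdot \frac{P(H_1)}{P(H_2)},

    so judging that H_1 explains the evidence better than H_2, and is no less plausible to begin with, suffices to identify the more probable working hypothesis without ever estimating P(H_1 \mid E) itself.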