
    An Intelligence-Aware Process Calculus for Multi-Agent System Modeling

    In this paper we propose an agent modeling language named CAML that provides a comprehensive framework for representing the relevant aspects of a multi-agent system, namely its configuration and the reasoning abilities of its constituent agents. The configuration-modeling aspect of the language supports natural grouping and mobility, and the reasoning framework is inspired by an extension of the popular BDI theory of modeling agents' cognitive skills. We present the motivation behind the development of the language, its syntax, and an informal semantics.
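
    As a rough illustration of the BDI-style deliberation that languages like CAML formalize, the following Python sketch implements a toy belief-desire-intention cycle. All names and structure here are illustrative assumptions, not CAML syntax or semantics.

        # Hypothetical BDI deliberation cycle; illustrative only, not CAML.
        from dataclasses import dataclass, field

        @dataclass
        class BDIAgent:
            beliefs: set = field(default_factory=set)
            desires: set = field(default_factory=set)    # (goal, preconditions, plan) triples
            intentions: list = field(default_factory=list)

            def perceive(self, percepts):
                # Update beliefs from new percepts.
                self.beliefs |= set(percepts)

            def deliberate(self):
                # Commit to any desire whose preconditions are believed to hold.
                for goal, preconditions, plan in self.desires:
                    if preconditions <= self.beliefs and plan not in self.intentions:
                        self.intentions.append(plan)

            def act(self):
                # Execute one step of the first active intention.
                if self.intentions:
                    plan = self.intentions.pop(0)
                    return plan()
                return None

        agent = BDIAgent()
        agent.desires.add(("deliver", frozenset({"has_package"}), lambda: "move_to_dest"))
        agent.perceive({"has_package"})
        agent.deliberate()
        print(agent.act())  # -> "move_to_dest"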

    ScriptWorld: Text Based Environment For Learning Procedural Knowledge

    Text-based games provide a framework for developing natural language understanding and commonsense knowledge about the world in reinforcement-learning-based agents. Existing text-based environments often rely on fictional situations and characters to create a gaming framework and are far from real-world scenarios. In this paper, we introduce ScriptWorld: a text-based environment for teaching agents about real-world daily chores, thereby imparting commonsense knowledge. To the best of our knowledge, it is the first interactive text-based gaming framework built from daily real-world human activities designed using a scripts dataset. We provide gaming environments for 10 daily activities and perform a detailed analysis of the proposed environment. We develop RL-based baseline models/agents to play the games in ScriptWorld. To understand the role of language models in such environments, we leverage features obtained from pre-trained language models in the RL agents. Our experiments show that prior knowledge obtained from a pre-trained language model helps solve real-world text-based gaming environments. We release the environment via GitHub: https://github.com/Exploration-Lab/ScriptWorld. Comment: Accepted at IJCAI 2023, 26 pages (7 main + 19 appendix).
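
    As a rough sketch of the kind of loop such an environment supports, the toy Python example below pairs a text environment describing steps of a daily chore with an agent that greedily picks the candidate action most similar to the observation in embedding space. The embed() stand-in, the environment class, and the policy are all assumptions for illustration; they do not reproduce the released ScriptWorld API or the paper's language-model features.

        # Toy ScriptWorld-style loop; illustrative assumptions throughout.
        import hashlib
        import math

        def embed(text, dim=16):
            # Deterministic toy embedding; a real agent would use a
            # pre-trained language model encoder here.
            h = hashlib.sha256(text.encode()).digest()
            vec = [b / 255.0 for b in h[:dim]]
            norm = math.sqrt(sum(v * v for v in vec)) or 1.0
            return [v / norm for v in vec]

        def cosine(a, b):
            return sum(x * y for x, y in zip(a, b))

        class ChoreEnv:
            """Toy environment for a 'making coffee' activity."""
            steps = ["fill the kettle with water", "boil the water",
                     "pour water over the grounds"]

            def __init__(self):
                self.t = 0

            def observe(self):
                return f"Step {self.t + 1}: what do you do next?"

            def step(self, action):
                reward = 1.0 if action == self.steps[self.t] else -1.0
                self.t += 1
                return reward, self.t >= len(self.steps)

        env, done = ChoreEnv(), False
        while not done:
            obs = embed(env.observe())
            # Greedy policy: pick the action most similar to the observation.
            action = max(env.steps, key=lambda a: cosine(obs, embed(a)))
            reward, done = env.step(action)
            print(action, reward)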

    From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought

    How does language inform our downstream thinking? In particular, how do humans make meaning from language -- and how can we leverage a theory of linguistic meaning to build machines that think in more human-like ways? In this paper, we propose \textit{rational meaning construction}, a computational framework for language-informed thinking that combines neural models of language with probabilistic models for rational inference. We frame linguistic meaning as a context-sensitive mapping from natural language into a \textit{probabilistic language of thought} (PLoT) -- a general-purpose symbolic substrate for probabilistic, generative world modeling. Our architecture integrates two powerful computational tools that have not previously come together: we model thinking with \textit{probabilistic programs}, an expressive representation for flexible commonsense reasoning; and we model meaning construction with \textit{large language models} (LLMs), which support broad-coverage translation from natural language utterances to code expressions in a probabilistic programming language. We illustrate our framework in action through examples covering four core domains from cognitive science: probabilistic reasoning, logical and relational reasoning, visual and physical reasoning, and social reasoning about agents and their plans. In each, we show that LLMs can generate context-sensitive translations that capture pragmatically appropriate linguistic meanings, while Bayesian inference with the generated programs supports coherent and robust commonsense reasoning. We extend our framework to integrate cognitively-motivated symbolic modules to provide a unified commonsense thinking interface from language. Finally, we explore how language can drive the construction of world models themselves.
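
    A minimal sketch of this pipeline, under heavy assumptions: a stubbed "LLM" translates an utterance into a string-encoded probabilistic program, and a crude rejection sampler performs Bayesian inference over it. Neither the translation stub nor the sampler reflects the paper's actual PLoT implementation.

        # Toy rational-meaning-construction pipeline; illustrative only.
        import random

        def llm_translate(utterance):
            # Stand-in for an LLM mapping language to a probabilistic program.
            if "biased toward heads" in utterance:
                return "bias ~ Beta(5, 1); flips[i] ~ Bernoulli(bias)"
            return "bias ~ Beta(1, 1); flips[i] ~ Bernoulli(bias)"

        def posterior_mean_bias(program, observed_flips, n_samples=20000):
            # Crude rejection sampling over the generated model: draw a prior
            # sample, simulate flips, keep the sample if it matches the data.
            a, b = (5, 1) if "Beta(5, 1)" in program else (1, 1)
            accepted = []
            for _ in range(n_samples):
                bias = random.betavariate(a, b)
                flips = [random.random() < bias for _ in observed_flips]
                if flips == observed_flips:
                    accepted.append(bias)
            return sum(accepted) / len(accepted) if accepted else None

        program = llm_translate("I think the coin is biased toward heads.")
        print(program)
        print(posterior_mean_bias(program, [True, True, False, True]))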

    Natural task learning through simultaneous language grounding and action learning

    Artificial agents, and in particular robots, i.e. agents with some form of embodiment, provide nearly unlimited possibilities to support humans in their daily lives by reliably performing hazardous, repetitive, and physically demanding tasks, removing the risk of human error, and providing social, mental, and physical care as needed, around the clock. However, for this, artificial agents need to be able to communicate with other agents, in particular humans, in a natural and efficient manner, and to autonomously learn new tasks. The most natural way for humans to tell another agent to perform a task, or to explain how to perform one, is through natural language. Therefore, artificial agents need to be able to understand natural language, i.e. extract the meanings of words and phrases, which requires words and phrases to be linked to their corresponding percepts through grounding. Theoretically, groundings, i.e. connections between words and percepts, can be specified manually; in practice, however, this is not possible due to the complexity and dynamic nature of human-centered environments, like private homes or supermarkets, and the ambiguity inherent to natural language, e.g. synonymy and homonymy. Therefore, agents need to be able to autonomously obtain new groundings and continuously update existing groundings to account for changes in the environment and to incorporate new information obtained through the agent’s sensors. Furthermore, the obtained groundings should be usable for learning new tasks from natural language instructions. This thesis therefore proposes a novel framework for simultaneous language grounding and action learning that achieves three main objectives. First, it enables agents to continuously ground synonymous words and phrases without requiring external support from another agent. Second, it enables agents to utilize external support, if available, without depending on it. Finally, it enables agents to utilize previously learned groundings to learn new tasks from language instructions.
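
    As an illustration of what word-to-percept grounding can look like computationally, here is a minimal cross-situational learning sketch in Python: each word accumulates co-occurrence counts with perceptual features, and a word's grounding is read off as its highest-scoring feature. This is an assumed toy model, not the framework proposed in the thesis.

        # Toy cross-situational word grounding; illustrative only.
        from collections import defaultdict

        class GroundingModel:
            def __init__(self):
                # word -> feature -> co-occurrence count
                self.counts = defaultdict(lambda: defaultdict(int))

            def observe(self, utterance, percept_features):
                # Strengthen the link between every word heard and every
                # feature perceived in the same situation.
                for word in utterance.lower().split():
                    for feature in percept_features:
                        self.counts[word][feature] += 1

            def ground(self, word):
                features = self.counts.get(word.lower())
                if not features:
                    return None
                return max(features, key=features.get)

        model = GroundingModel()
        model.observe("pick up the red cup", {"red", "cup", "table"})
        model.observe("the red ball", {"red", "ball", "floor"})
        print(model.ground("red"))  # -> "red", the most consistently co-present feature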