
    A Path in the Jungle of Logics for Multi-Agent Systems: on the Relation between General Game-Playing Logics and Seeing-To-It-That Logics

    In recent years, several concurrent logical systems for reasoning about agency and social interaction and for representing game properties have been proposed. The aim of the present paper is to put some order in this 'jungle' of logics by studying the relationship between the dynamic logic of agency DLA and the game description language GDL. The former has been proposed as a variant of the logic of agency STIT by Belnap et al. in which agents' actions are named, while the latter has been introduced in AI as a formal language for reasoning about general game playing. The paper provides complexity results for the satisfiability problems of both DLA and GDL, as well as a polynomial embedding of GDL into DLA.

    From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought

    How does language inform our downstream thinking? In particular, how do humans make meaning from language -- and how can we leverage a theory of linguistic meaning to build machines that think in more human-like ways? In this paper, we propose rational meaning construction, a computational framework for language-informed thinking that combines neural models of language with probabilistic models for rational inference. We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought (PLoT) -- a general-purpose symbolic substrate for probabilistic, generative world modeling. Our architecture integrates two powerful computational tools that have not previously come together: we model thinking with probabilistic programs, an expressive representation for flexible commonsense reasoning; and we model meaning construction with large language models (LLMs), which support broad-coverage translation from natural language utterances to code expressions in a probabilistic programming language. We illustrate our framework in action through examples covering four core domains from cognitive science: probabilistic reasoning, logical and relational reasoning, visual and physical reasoning, and social reasoning about agents and their plans. In each, we show that LLMs can generate context-sensitive translations that capture pragmatically appropriate linguistic meanings, while Bayesian inference with the generated programs supports coherent and robust commonsense reasoning. We extend our framework to integrate cognitively-motivated symbolic modules to provide a unified commonsense thinking interface from language. Finally, we explore how language can drive the construction of world models themselves.
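    The abstract describes a two-stage architecture: an LLM translates an utterance into a condition on a probabilistic program, and Bayesian inference over that program answers the query. Below is a minimal sketch of that flow, not the authors' system: translate_to_plot is a hypothetical, hard-coded stand-in for the LLM translation step, and the world model is a single coin with unknown bias, queried by rejection sampling.

```python
import random

# Hypothetical stand-in for the LLM-based meaning function: a real
# system would prompt an LLM to translate the utterance into a
# probabilistic-program fragment. Here the mapping is hard-coded.
def translate_to_plot(utterance):
    if utterance == "The coin seems biased towards heads.":
        # Condition contributed by the utterance: most of a handful
        # of flips came up heads.
        return lambda weight: sum(random.random() < weight for _ in range(5)) >= 4
    raise NotImplementedError("only one utterance in this toy example")

# A tiny generative world model: a coin with unknown bias.
def prior():
    return random.random()  # uniform prior over the coin's bias

def infer(utterance, num_samples=10_000):
    """Rejection sampling: keep prior draws consistent with the
    condition contributed by the utterance's translation."""
    condition = translate_to_plot(utterance)
    accepted = [w for w in (prior() for _ in range(num_samples)) if condition(w)]
    return sum(accepted) / len(accepted)

print(infer("The coin seems biased towards heads."))  # posterior mean bias, well above 0.5
```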

    An Audit Logic for Accountability

    We describe and implement a policy language. In our system, agents can distribute data along with usage policies in a decentralized architecture. Our language supports the specification of conditions and obligations, as well as the refinement of policies. In our framework, compliance with usage policies is not actively enforced. However, agents are accountable for their actions and may be audited by an authority requiring justifications. (Comment: To appear in Proceedings of IEEE Policy 200)
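    The abstract only names the ingredients (conditions, obligations, refinement, after-the-fact auditing), so the following is a hedged illustration rather than the paper's actual language: Policy, Agent, and audit are hypothetical names, and the point is that use is logged with a justification instead of being blocked at access time.

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    """A usage policy attached to a piece of data: a condition that must
    hold for use, and obligations the agent incurs by using the data."""
    condition: str
    obligations: list = field(default_factory=list)

    def refine(self, extra_condition):
        # Refinement strengthens the condition; obligations are kept.
        return Policy(f"({self.condition}) and ({extra_condition})",
                      list(self.obligations))

@dataclass
class Agent:
    name: str
    log: list = field(default_factory=list)  # justifications, kept for audits

    def use(self, data, policy, justification):
        # Compliance is not enforced at use time; the agent only records
        # a justification that an auditor may later demand.
        self.log.append((data, policy.condition, justification))

def audit(agent):
    """An authority asks the agent to justify each recorded use."""
    for data, condition, justification in agent.log:
        print(f"{agent.name} used {data!r} under {condition!r}: {justification}")

alice = Agent("alice")
p = Policy("purpose == research", ["delete after 30 days"]).refine("recipient in EU")
alice.use("dataset.csv", p, "EU-based research project")
audit(alice)
```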

    Relational Representations in Reinforcement Learning: Review and Open Problems

    This paper is about representation in reinforcement learning (RL). We discuss some of the concepts in representation and generalization in reinforcement learning and argue for higher-order representations instead of the commonly used propositional representations. The paper contains a short review of current reinforcement learning systems that use higher-order representations, followed by a brief discussion. The paper ends with research directions and open problems.
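    To make the propositional-versus-relational contrast concrete, here is a small sketch that is not drawn from the paper: a propositional encoding names each ground fact separately, while a relational encoding keeps predicates and arguments apart so that one lifted pattern with variables covers many objects. The matches function and the blocks-world atoms are illustrative assumptions.

```python
# A propositional encoding enumerates one feature per ground fact, so the
# representation (and anything learned over it) is tied to specific objects:
propositional_state = {"on_a_b": True, "on_b_table": True, "clear_a": True}

# A relational (first-order) encoding keeps predicates and arguments apart,
# so a learner can generalize over objects via variables:
relational_state = {("on", "a", "b"), ("on", "b", "table"), ("clear", "a")}

def matches(pattern, state):
    """Check whether a lifted pattern (with '?'-variables) unifies with
    some consistent binding of objects in the relational state."""
    def extend(bindings, atoms):
        if not atoms:
            return True
        pred, *args = atoms[0]
        for fact in state:
            if fact[0] != pred or len(fact) != len(atoms[0]):
                continue
            new = dict(bindings)
            ok = True
            for a, v in zip(args, fact[1:]):
                if a.startswith("?"):
                    if new.setdefault(a, v) != v:  # variable already bound elsewhere
                        ok = False
                        break
                elif a != v:  # constant must match exactly
                    ok = False
                    break
            if ok and extend(new, atoms[1:]):
                return True
        return False
    return extend({}, list(pattern))

# One lifted rule covers every pair of blocks, not just a and b:
pattern = [("on", "?x", "?y"), ("clear", "?x")]
print(matches(pattern, relational_state))  # True: ?x=a, ?y=b
```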

    Programming in logic without logic programming

    In previous work, we proposed a logic-based framework in which computation is the execution of actions in an attempt to make reactive rules of the form if antecedent then consequent true in a canonical model of a logic program determined by an initial state, a sequence of events, and the resulting sequence of subsequent states. In this model-theoretic semantics, reactive rules are the driving force, and logic programs play only a supporting role. In the canonical model, states, actions and other events are represented with timestamps. But in the operational semantics, for the sake of efficiency, timestamps are omitted and only the current state is maintained. State transitions are performed reactively by executing actions to make the consequents of rules true whenever the antecedents become true. This operational semantics is sound but incomplete: it cannot make reactive rules true by preventing their antecedents from becoming true, or by proactively making their consequents true before their antecedents become true. In this paper, we characterize the notion of a reactive model and prove that the operational semantics can generate all and only such models. In order to focus on the main issues, we omit the logic programming component of the framework. (Comment: Under consideration in Theory and Practice of Logic Programming (TPLP).)
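    A minimal sketch of the operational semantics as summarized above: only the current state is maintained, and rules whose antecedents have become true are fired until a fixpoint is reached. The rule encoding and the fire-alarm example are hypothetical; the paper's actual reactive language is richer.

```python
rules = [
    # if antecedent (a test on the state) then consequent (an action on it)
    (lambda s: "fire" in s,         lambda s: s | {"alarm_raised"}),
    (lambda s: "alarm_raised" in s, lambda s: s | {"sprinkler_on"}),
]

def step(state, event):
    """One state transition: absorb the event, then execute actions to
    make the consequent of every rule whose antecedent holds true."""
    state = state | {event}
    changed = True
    while changed:  # iterate until no rule changes the state
        changed = False
        for antecedent, act in rules:
            if antecedent(state):
                new_state = act(state)
                if new_state != state:
                    state, changed = new_state, True
    return state

state = frozenset()
for event in ["smoke", "fire"]:
    state = step(state, event)
print(sorted(state))  # ['alarm_raised', 'fire', 'smoke', 'sprinkler_on']
```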

    Modelling Learning as Modelling

    Economists tend to represent learning as a procedure for estimating the parameters of the "correct" econometric model. We extend this approach by assuming that agents specify as well as estimate models. Learning thus takes the form of a dynamic process of developing models in an internal language of representation, where expectations are formed by forecasting with the best current model. This introduces a distinction between the form and the content of the internal models which is particularly relevant for boundedly rational agents. We propose a framework for such model development that uses a combination of measures: the error with respect to past data, the complexity of the model, the cost of finding the model, and a measure of the model's specificity. The agent has to make various trade-offs between these measures. A utility-learning agent is given as an example.
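    As a hedged illustration of the proposed trade-off (not the paper's formulation), the sketch below scores candidate models by a weighted combination of past error, complexity, search cost, and specificity, then forecasts with the best current model. All weights, candidate models, and data are invented for the example.

```python
past_data = [(1, 3), (2, 5), (3, 7), (4, 9)]  # (x, y) observations

# Candidate models as (name, predictor, complexity, search_cost, specificity)
candidates = [
    ("constant", lambda x: 6.0,       1, 0.1, 0.2),
    ("linear",   lambda x: 2 * x + 1, 2, 0.5, 0.6),
    ("lookup",   lambda x: {1: 3, 2: 5, 3: 7, 4: 9}.get(x, 0), 4, 0.9, 1.0),
]

def score(predict, complexity, cost, specificity,
          w_err=1.0, w_cplx=0.3, w_cost=0.2, w_spec=0.1):
    error = sum((y - predict(x)) ** 2 for x, y in past_data) / len(past_data)
    # Lower is better on every term; specificity is penalized because an
    # overly specific model will not generalize to the next period.
    return w_err * error + w_cplx * complexity + w_cost * cost + w_spec * specificity

best = min(candidates, key=lambda m: score(*m[1:]))
print("best current model:", best[0])   # expectations are formed
print("forecast for x=5:", best[1](5))  # with the best model
```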

    SAsSy – Scrutable Autonomous Systems

    An autonomous system consists of physical or virtual systems that can perform tasks without continuous human guidance. Autonomous systems are becoming increasingly ubiquitous, ranging from unmanned vehicles, to robotic surgery devices, to virtual agents that collate and process information on the internet. Existing autonomous systems are opaque, which limits their usefulness in many situations. To realise their promise, techniques for making such systems scrutable are required. We believe that the creation of scrutable autonomous systems rests on four foundations: an appropriate planning representation; a human-understandable reasoning mechanism, such as argumentation theory; natural language generation tools to translate logical statements into natural ones; and information presentation techniques that enable the user to cope with the deluge of information autonomous systems can provide. Each of these foundations has its own unique challenges, as does their integration into a single system.
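    The four foundations suggest a natural pipeline. The stub below is purely illustrative, with every function a hypothetical placeholder: a plan is produced, defended by an argument, verbalised, and then presented at a user-chosen level of detail.

```python
def plan():
    """Foundation 1: a planning representation (here, a toy plan)."""
    return [("move", "depot", "site"), ("survey", "site")]

def argue(plan_steps):
    """Foundation 2: a human-understandable reasoning mechanism, e.g. an
    argument for the plan together with an objection it survived."""
    return {"claim": plan_steps,
            "support": "surveying requires being at the site",
            "rebutted": "flying direct (ruled out: no fuel)"}

def verbalise(argument):
    """Foundation 3: natural language generation from logical statements."""
    steps = "; then ".join(" ".join(s) for s in argument["claim"])
    return (f"I will {steps}, because {argument['support']}. "
            f"I considered {argument['rebutted']}.")

def present(text, detail="summary"):
    """Foundation 4: information presentation -- let the user control how
    much of the deluge of information they see."""
    return text if detail == "full" else text.split(".")[0] + "."

print(present(verbalise(argue(plan())), detail="summary"))
```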