
    Observation-based Model for BDI-Agents

    We present a new computational model of BDI-agents, called the observation-based BDI-model. The key point of this model is to express agents' beliefs, desires and intentions as a set of runs (computing paths), which is exactly a system in the interpreted system model, a well-known agent model due to Halpern and his colleagues. Our BDI-model is computationally grounded in that the BDI-agent model can be associated with a computer program, and formulas involving agents' beliefs, desires (goals) and intentions can be understood as properties of program computations. We present a proof system that is sound and complete with respect to our BDI-model and explore how symbolic model checking techniques can be applied to model checking BDI-agents. To make our BDI-model more flexible and practically realistic, we generalize it so that agents can have multiple sources of beliefs, goals and intentions.
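
    To make the "mental attitudes as sets of runs" reading concrete, here is a minimal sketch (our own illustration, not the paper's formalism): each attitude of an agent is a set of computing paths, and "the agent believes phi" is read as "phi holds on every run in the belief set".

```java
import java.util.List;
import java.util.Set;
import java.util.function.Predicate;

/** A run is a computing path: a sequence of global states. */
final class Run {
    final List<String> states;
    Run(List<String> states) { this.states = states; }
}

/** Each mental attitude of the agent is identified with a set of runs. */
final class ObservationBasedAgent {
    final Set<Run> beliefs;    // runs the agent considers possible
    final Set<Run> desires;    // runs the agent would like to realise
    final Set<Run> intentions; // runs the agent is committed to

    ObservationBasedAgent(Set<Run> beliefs, Set<Run> desires, Set<Run> intentions) {
        this.beliefs = beliefs;
        this.desires = desires;
        this.intentions = intentions;
    }

    /** "The agent believes phi" read as: phi holds on every run in the belief set. */
    boolean believes(Predicate<Run> phi) {
        return beliefs.stream().allMatch(phi);
    }
}
```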

    A canonical theory of dynamic decision-making

    Decision-making behavior is studied in many very different fields, from medicine and economics to psychology and neuroscience, with major contributions from mathematics and statistics, computer science, AI, and other technical disciplines. However, the conceptualization of what decision-making is, and the methods for studying it, vary greatly, and this has resulted in fragmentation of the field. A theory that can accommodate various perspectives may facilitate interdisciplinary working. We present such a theory, in which decision-making is articulated as a set of canonical functions that are sufficiently general to accommodate diverse viewpoints, yet sufficiently precise that they can be instantiated in different ways for specific theoretical or practical purposes. The canons cover the whole decision cycle, from the framing of a decision based on the goals, beliefs, and background knowledge of the decision-maker to the formulation of decision options, establishing preferences over them, and making commitments. Commitments can lead to the initiation of new decisions, and any step in the cycle can incorporate reasoning about previous decisions and the rationales for them, and lead to revising or abandoning existing commitments. The theory situates decision-making with respect to other high-level cognitive capabilities like problem solving, planning, and collaborative decision-making. The canonical approach is assessed in three domains: cognitive and neuropsychology, artificial intelligence, and decision engineering.
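
    As a purely illustrative sketch (the interface and names below are hypothetical, not the paper's), the canonical functions of the decision cycle might be rendered as follows:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Set;

/**
 * Hypothetical rendering of the decision cycle as canonical functions.
 * The type parameters and method names are ours, not the paper's.
 */
interface DecisionCycle<Goal, Belief, Option, Commitment> {

    /** Frame a decision and formulate candidate options from goals, beliefs and background knowledge. */
    List<Option> formulateOptions(Set<Goal> goals, Set<Belief> beliefs);

    /** Establish preferences over the candidate options. */
    Comparator<Option> establishPreferences(Set<Belief> beliefs, List<Option> options);

    /** Commit to a preferred option; the commitment may later trigger new decisions. */
    Commitment commit(List<Option> options, Comparator<Option> preferences);

    /** Revisit an earlier commitment and possibly revise or abandon it. */
    boolean reconsider(Commitment commitment, Set<Belief> beliefs);
}
```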

    The Persuasive Tutor: a BDI Teaching Agent with Role and Reference Grammar Language Interface – Sustainable design of a conversational agent for language learning

    This paper investigates how an intelligent teaching agent with Role and Reference Grammar [RRG] (cf. Van Valin 2005) as its linguistic engine can support language learning. Based on a user-centred empirical design study, the architecture of a highly persuasive tool for language learning is developed as an extension of PLOTLearner (http://europlot.blogspot.dk/2012/07/try-plotlearner-2.html). Based on grounded theory, it is shown that feedback and support are of the greatest importance even in self-directed computer-assisted language learning. It is also shown how this overall approach to language learning can be situated within traditional conversation-based learning theories (cf. Laurillard 2009). Finally, it is shown that a computationally adequate model of the RRG linking algorithm, extended into a computational processing model, can account for communication between a learner and the software by employing conceptual graphs to represent mental states in the software agent; the important role of speech acts in this context is emphasized.
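
    As an illustration only (not the PLOTLearner code), a learner's speech act can be represented as a small conceptual graph that the tutor agent stores as part of its mental state:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/** A deliberately tiny conceptual-graph structure: concept nodes plus named relations. */
final class ConceptualGraph {
    record Relation(String name, String from, String to) {}

    final Set<String> concepts = new HashSet<>();
    final List<Relation> relations = new ArrayList<>();

    ConceptualGraph concept(String c) { concepts.add(c); return this; }
    ConceptualGraph relate(String name, String from, String to) {
        relations.add(new Relation(name, from, to));
        return this;
    }
}

class SpeechActDemo {
    public static void main(String[] args) {
        // "The learner requests feedback on sentence s1", stored as a mental state of the tutor.
        ConceptualGraph requestFeedback = new ConceptualGraph()
                .concept("Learner").concept("Request").concept("Sentence:s1")
                .relate("agent", "Request", "Learner")
                .relate("theme", "Request", "Sentence:s1");
        System.out.println(requestFeedback.relations);
    }
}
```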

    CernoCAMAL: a probabilistic computational cognitive architecture

    This thesis presents one possible way to develop a computational cognitive architecture, dubbed CernoCAMAL, that can be used to govern artificial minds probabilistically. The primary aim of the CernoCAMAL research project is to investigate how its predecessor architecture CAMAL can be extended to reason probabilistically about domain model objects through perception, and how the probability formalism can be integrated into its BDI (Belief-Desire-Intention) model to coalesce a number of mechanisms and processes. The motivation and impetus for extending CAMAL and developing CernoCAMAL is the considerable evidence that probabilistic thinking and reasoning is linked to cognitive development and plays a role in cognitive functions, such as decision making and learning. This leads us to believe that a probabilistic reasoning capability is an essential part of human intelligence. Thus, it should be a vital part of any system that attempts to emulate human intelligence computationally. The extensions and augmentations to CAMAL, which are the main contributions of the CernoCAMAL research project, are as follows:
    - The integration of the EBS (Extended Belief Structure), which associates a probability value with every belief statement, in order to represent degrees of belief numerically.
    - The inclusion of the CPR (CernoCAMAL Probabilistic Reasoner), which reasons probabilistically over the goal- and task-oriented perceptual feedback generated by reactive sub-systems.
    - The compatibility of the probabilistic BDI model with the affect and motivational models and the affective and motivational valences used throughout CernoCAMAL.
    A succession of experiments in simulation and robotic testbeds is carried out to demonstrate improvements and increased efficacy in CernoCAMAL's overall cognitive performance. A discussion and critical appraisal of the experimental results, together with a summary, a number of potential future research directions, and some closing remarks conclude the thesis.
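
    A minimal sketch of what an EBS-style belief base could look like (the API below is hypothetical, not CernoCAMAL's own code): every belief statement is paired with a numeric degree of belief.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * A belief base in which every belief statement carries a numeric degree of belief,
 * in the spirit of the EBS described above. The API is ours, not CernoCAMAL's.
 */
final class ProbabilisticBeliefBase {
    private final Map<String, Double> degrees = new HashMap<>();

    /** Assert or revise a belief with a degree of belief in [0, 1]. */
    void hold(String statement, double degree) {
        if (degree < 0.0 || degree > 1.0) {
            throw new IllegalArgumentException("degree of belief must lie in [0, 1]");
        }
        degrees.put(statement, degree);
    }

    /** Degree of belief in a statement; 0 if the statement is not held at all. */
    double degreeOf(String statement) {
        return degrees.getOrDefault(statement, 0.0);
    }
}
```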

    SAsSy – Scrutable Autonomous Systems

    An autonomous system consists of physical or virtual systems that can perform tasks without continuous human guidance. Autonomous systems are becoming increasingly ubiquitous, ranging from unmanned vehicles, to robotic surgery devices, to virtual agents which collate and process information on the internet. Existing autonomous systems are opaque, which limits their usefulness in many situations, so techniques for making such systems scrutable are required if they are to realise their promise. We believe that the creation of such scrutable autonomous systems rests on four foundations, namely an appropriate planning representation; the use of a human-understandable reasoning mechanism, such as argumentation theory; appropriate natural language generation tools to translate logical statements into natural ones; and information presentation techniques to enable the user to cope with the deluge of information that autonomous systems can provide. Each of these foundations has its own unique challenges, as does the integration of all of them into a single system.

    Progression and Verification of Situation Calculus Agents with Bounded Beliefs

    We investigate agents that have incomplete information and make decisions based on their beliefs expressed as situation calculus bounded action theories. Such theories have an infinite object domain, but the number of objects that belong to fluents at each time point is bounded by a given constant. Recently, it has been shown that verifying temporal properties over such theories is decidable. We take a first-person view and use the theory to capture what the agent believes about the domain of interest and the actions affecting it. In this paper, we study verification of temporal properties over online executions. These are executions resulting from agents performing only actions that are feasible according to their beliefs. To do so, we first examine progression, which captures belief state update resulting from actions in the situation calculus. We show that, for bounded action theories, progression, and hence belief states, can always be represented as a bounded first-order logic theory. Then, based on this result, we prove decidability of temporal verification over online executions for bounded action theories.
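
    For orientation, the boundedness condition can be sketched as follows (our informal rendering; see the paper for the exact definition):

```latex
% An action theory is bounded by a constant b if, in every executable situation s,
% every fluent F holds of at most b distinct object tuples (notation ours).
\forall s.\ \mathit{Executable}(s) \rightarrow
    \bigl|\{\, \vec{x} \mid F(\vec{x}, s) \,\}\bigr| \le b
    \qquad \text{for every fluent } F
```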

    From Affect Theoretical Foundations to Computational Models of Intelligent Affective Agents

    The links between emotions and rationality have been extensively studied and discussed. Several computational approaches have also been proposed to model these links. However, is it possible to build generic computational approaches and languages so that they can be "adapted" when a specific affective phenomenon is being modeled? Would these approaches be sufficiently and properly grounded? In this work, we want to provide the means for the development of these generic approaches and languages by making a horizontal analysis, inspired by philosophical and psychological theories, of the main affective phenomena that are traditionally studied. Unfortunately, not all affective theories can be adapted for use in computational models; therefore, it is necessary to perform an analysis of the most suitable theories. In this analysis, we identify and classify the main processes and concepts which can be used in a generic affective computational model, and we propose a theoretical framework that includes all the processes and concepts that a model of an affective agent with practical reasoning could use. Our generic theoretical framework supports incremental research whereby future proposals can improve previous ones. This framework also supports the evaluation of the coverage of current computational approaches according to the processes that are modeled and according to the integration of practical reasoning and affect-related issues. This framework is being used in the development of the GenIA(3) architecture.
    This work is partially supported by the Spanish Government projects PID2020-113416RB-I00, GVA-CEICE project PROMETEO/2018/002, and TAILOR, a project funded by the EU Horizon 2020 research and innovation programme under GA No 952215.
    Alfonso, B.; Taverner-Aparicio, J. J.; Vivancos, E.; Botti, V. (2021). From Affect Theoretical Foundations to Computational Models of Intelligent Affective Agents. Applied Sciences, 11(22):1-29. https://doi.org/10.3390/app112210874

    A computationally grounded, weighted doxastic logic

    Modelling, reasoning about, and verifying complex situations involving a system of agents is crucial in all phases of the development of a number of safety-critical systems. In particular, it is of fundamental importance to have tools and techniques to reason about the doxastic and epistemic states of agents, to make sure that the agents behave as intended. In this paper we introduce a computationally grounded logic called COGWED and present two types of semantics that support a range of practical situations. We provide model checking algorithms, complexity characterisations and a prototype implementation. We validate our proposal against a case study from the avionics domain: we assess and verify the situational awareness of pilots flying an aircraft with several automated components in off-nominal conditions.
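
    For orientation, one natural semantics for a weighted belief operator in this setting can be sketched as follows (our reading; the paper's definitions should be taken as authoritative):

```latex
% Agent i believes phi with degree at least x at global state g of an interpreted
% system I when at least a fraction x of the global states i cannot distinguish
% from g satisfy phi (notation ours).
(I, g) \models B_i^{\ge x}\,\varphi
    \;\iff\;
    \frac{\bigl|\{\, g' \sim_i g \mid (I, g') \models \varphi \,\}\bigr|}
         {\bigl|\{\, g' \mid g' \sim_i g \,\}\bigr|} \ge x
```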

    From raw data to agent perceptions for simulation, verification, and monitoring

    In this paper we present a practical solution to the problem of connecting "real world" data exchanged between sensors and actuators with the higher level of abstraction used in frameworks for multiagent systems. In particular, we show how to connect MQTT, an industry-standard publish-subscribe communication protocol for embedded systems, with two Belief-Desire-Intention agent modelling and programming languages: Jason/AgentSpeak and Brahms. In the paper we describe the details of our Java implementation, and we release all the code as open source.
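
    A minimal sketch of such a bridge, assuming the Eclipse Paho MQTT client and Jason's Environment API (broker URL, client id, topic and percept name are placeholders; the paper's released implementation is structured differently):

```java
import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;

import jason.asSyntax.Literal;
import jason.environment.Environment;

/**
 * Minimal sketch of an MQTT-to-Jason bridge. Incoming sensor messages are turned
 * into percepts that Jason/AgentSpeak agents can react to.
 */
public class MqttPerceptBridge extends Environment {

    @Override
    public void init(String[] args) {
        try {
            MqttClient client = new MqttClient("tcp://localhost:1883", "jason-bridge");
            client.setCallback(new MqttCallback() {
                @Override public void connectionLost(Throwable cause) { /* reconnection omitted */ }
                @Override public void deliveryComplete(IMqttDeliveryToken token) { }
                @Override public void messageArrived(String topic, MqttMessage message) {
                    // Turn the raw payload into a percept, e.g. temperature(21.5),
                    // visible to the agents in the multiagent system.
                    String value = new String(message.getPayload());
                    addPercept(Literal.parseLiteral("temperature(" + value + ")"));
                }
            });
            client.connect();
            client.subscribe("sensors/temperature");
        } catch (MqttException e) {
            e.printStackTrace();
        }
    }
}
```

    Brahms would need a different adapter on the agent side, but the MQTT half of the bridge stays the same.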