
    Goal Formation through Interaction in the Situation Calculus: A Formal Account Grounded in Behavioral Science

    Goal reasoning has recently attracted much attention in AI. Here, we consider how an agent changes its goals as a result of interaction with humans and peers. In particular, we draw upon a model developed in Behavioral Science, the Elementary Pragmatic Model (EPM). We show how the EPM principles can be incorporated into a sophisticated theory of goal change based on the Situation Calculus. The resulting logical theory supports agents with a wide variety of relational styles, including some that we may consider irrational or creative. This lays the foundations for building autonomous agents that interact with humans in a rich and realistic way, as required by advanced Human-AI collaboration applications.
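    As a toy illustration of the idea described in this abstract (not the paper's situation-calculus formalism), the Python sketch below treats a "relational style" as a rule mapping the agent's current goals and an interlocutor's proposal to a new goal set; all names and the four example styles are hypothetical.

```python
# Hypothetical sketch: goal change via interaction, parameterized by a
# relational style. The EPM distinguishes many styles; only four simple
# set-based examples are shown here for illustration.
from typing import Callable, FrozenSet

Goals = FrozenSet[str]
Style = Callable[[Goals, Goals], Goals]

# Example styles: rules for combining own goals with a proposal.
sharing: Style = lambda own, proposed: own | proposed        # adopt both
compliant: Style = lambda own, proposed: proposed            # adopt the other's goals
maintaining: Style = lambda own, proposed: own               # ignore the proposal
oppositional: Style = lambda own, proposed: own - proposed   # drop whatever is proposed

def interact(own: Goals, proposed: Goals, style: Style) -> Goals:
    """New goal set after an interaction, per the agent's relational style."""
    return style(own, proposed)

# Usage: an agent with a 'sharing' style merges a peer's request into its goals.
agent_goals = frozenset({"deliver(pkg1)"})
peer_request = frozenset({"recharge()"})
print(interact(agent_goals, peer_request, sharing))
```

    A "creative" or "irrational" style, in these terms, would simply be a rule whose output is not a monotone combination of the two inputs.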

    Argumentation-based Reasoning about Plans, Maintenance Goals and Norms

    Peer reviewed postprint.

    Learning plan selection for BDI agent systems

    Belief-Desire-Intention (BDI) is a popular agent-oriented programming approach for developing robust computer programs that operate in dynamic environments. These programs contain pre-programmed abstract procedures that capture domain know-how, and work by dynamically applying these procedures, or plans, to the different situations they encounter. Agent programs built using the BDI paradigm, however, do not traditionally learn, and learning becomes important if a deployed agent is to adapt to changing situations over time. Our vision is to enable the programming of agent systems that can adjust to ongoing changes in the environment's dynamics in a robust and effective manner. To this end, in this thesis we develop a framework that programmers can use to build adaptable BDI agents that improve plan selection over time by learning from their experiences. These learning agents can dynamically adjust their choice of which plan to select in which situation, based on a growing understanding of what works and a sense of how reliable this understanding is. This reliability is given by a perceived measure of confidence that tries to capture how well-informed the agent's most recent decisions were and how well it knows the most recent situations it encountered. An important focus of this work is to make the approach practical. Our framework allows learning to be integrated into BDI programs of reasonable complexity, including those that use recursion and failure-recovery mechanisms. We demonstrate the usability of the framework in two complete programs: an implementation of the Towers of Hanoi game, where recursive solutions must be learnt, and a modular battery system controller, where the environment dynamics change in ways that may require many learning and relearning phases. A minimal sketch of the confidence-weighted plan-selection idea follows.
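    The following Python sketch is a hypothetical rendering of that idea, not the thesis's actual framework: it blends a learned per-plan success estimate with a neutral prior, weighted by a confidence measure that grows with experience of the plan in the given situation. All class and method names are illustrative.

```python
# Hypothetical sketch of confidence-aware plan selection for a BDI agent.
import random
from collections import defaultdict

class PlanSelector:
    """Chooses among applicable plans using learned success rates,
    tempered by how well-known the (plan, situation) pair is."""

    def __init__(self, exploration=0.1, confidence_horizon=20):
        # (plan, situation) -> [successes, attempts]
        self.stats = defaultdict(lambda: [0, 0])
        self.exploration = exploration
        # Attempts needed before the learned estimate is fully trusted.
        self.confidence_horizon = confidence_horizon

    def confidence(self, plan, situation):
        # Confidence grows with experience, saturating at 1.0.
        _, attempts = self.stats[(plan, situation)]
        return min(1.0, attempts / self.confidence_horizon)

    def estimated_success(self, plan, situation):
        successes, attempts = self.stats[(plan, situation)]
        return successes / attempts if attempts else 0.5  # neutral prior

    def select(self, applicable_plans, situation):
        # Occasional random exploration keeps estimates fresh as the
        # environment's dynamics change.
        if random.random() < self.exploration:
            return random.choice(applicable_plans)

        def score(plan):
            c = self.confidence(plan, situation)
            # Blend learned estimate with the neutral prior by confidence.
            return c * self.estimated_success(plan, situation) + (1 - c) * 0.5

        return max(applicable_plans, key=score)

    def record(self, plan, situation, succeeded):
        entry = self.stats[(plan, situation)]
        entry[0] += int(succeeded)
        entry[1] += 1
```

    On this reading, a relearning phase happens naturally: once a previously reliable plan starts failing, its success estimate drops while its confidence stays high, steering selection toward alternatives.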