
    Extending BDI plan selection to incorporate learning from experience

    An important drawback of the popular Belief-Desire-Intention (BDI) paradigm is that such systems include no element of learning from experience. We describe a novel BDI execution framework that models context conditions as decision trees, rather than Boolean formulae, allowing agents to learn the probability of success for plans based on experience. By using a probabilistic plan-selection function, the agents can balance exploration and exploitation of their plans. We extend earlier work to include both parameterised goals and recursion, and modify our previous approach to decision-tree confidence to handle the large and even non-finite domains that arise from such considerations. Our evaluation on a pre-existing program that relies heavily on recursion and parameterised goals confirms previous results that naive learning fails in some circumstances, and demonstrates that the improved approach learns relatively well.
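    To make the plan-selection idea concrete, the following is a minimal Python sketch of probability-weighted plan selection. The PlanStats class, the Laplace-smoothed estimate standing in for the decision-tree probabilities, and the plan names are illustrative assumptions, not the authors' implementation.

```python
import random

class PlanStats:
    """Tracks success/failure counts for one plan in a given context."""
    def __init__(self):
        self.successes = 0
        self.attempts = 0

    def record(self, succeeded):
        self.attempts += 1
        if succeeded:
            self.successes += 1

    def estimated_success(self):
        # Laplace smoothing stands in for the paper's decision-tree
        # probability estimate; it keeps untried plans explorable.
        return (self.successes + 1) / (self.attempts + 2)

def select_plan(applicable_plans, stats):
    """Choose among applicable plans with probability proportional to
    their estimated success, balancing exploration and exploitation."""
    weights = [stats[p].estimated_success() for p in applicable_plans]
    return random.choices(applicable_plans, weights=weights, k=1)[0]

# Usage: three plans for the same goal, where only "p2" ever succeeds.
stats = {p: PlanStats() for p in ("p1", "p2", "p3")}
for _ in range(50):
    chosen = select_plan(["p1", "p2", "p3"], stats)
    stats[chosen].record(succeeded=(chosen == "p2"))
print({p: round(s.estimated_success(), 2) for p, s in stats.items()})
```

    Weighting choices by estimated success keeps low-probability plans occasionally selected, which is the exploration/exploitation balance the abstract describes.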

    Towards Verifiably Ethical Robot Behaviour

    Ensuring that autonomous systems work ethically is both complex and difficult. However, the idea of having an additional `governor' that assesses the options available to the system, and prunes them to select the most ethical choices, is well understood. Recent work has produced such a governor consisting of a `consequence engine' that assesses the likely future outcomes of actions, then applies a Safety/Ethical logic to select actions. Although this is appealing, it is impossible to be certain that the most ethical options are actually taken. In this paper we extend and apply a well-known agent verification approach to our consequence engine, allowing us to verify the correctness of its ethical decision-making.
    Comment: Presented at the 1st International Workshop on AI and Ethics, Sunday 25th January 2015, Hill Country A, Hyatt Regency Austin; to appear in the workshop proceedings published by AAAI.
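    As a rough illustration of the governor pattern described above, the following hedged Python sketch pairs a stand-in consequence engine (a lookup table of predicted outcomes) with a simple severity ordering that prunes candidate actions. The outcome model, the three-level scale, and the action names are assumptions made for illustration, not the paper's Safety/Ethical logic.

```python
def predict_outcomes(action, world):
    """Stand-in consequence engine: map an action to its predicted
    outcomes. Here the 'simulation' is just a lookup table."""
    return world.get(action, [])

def ethical_severity(outcomes):
    """Lower is better: 0 = safe, 1 = harms robot, 2 = harms human.
    This three-level scale mimics a typical safety/ethical ordering;
    it is an assumption, not the paper's exact logic."""
    if "human_harmed" in outcomes:
        return 2
    if "robot_damaged" in outcomes:
        return 1
    return 0

def govern(candidate_actions, world):
    """Select the action whose predicted consequences are least severe."""
    return min(candidate_actions,
               key=lambda a: ethical_severity(predict_outcomes(a, world)))

# Usage: intervening damages the robot but prevents harm to a human.
world = {
    "do_nothing": ["human_harmed"],
    "intervene": ["robot_damaged"],
}
print(govern(["do_nothing", "intervene"], world))  # -> "intervene"
```

    Verifying such a governor, as the paper does, amounts to checking that the selection step always prefers the least severe option available.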

    Probabilistic Perception Revision in AgentSpeak(L)

    Agent programming is mostly a symbolic discipline and, as such, draws little benefit from probabilistic areas such as machine learning and graphical models. However, the greatest objective of agent research is the achievement of autonomy in dynamic and complex environments — a goal that implies embracing uncertainty and therefore the entailed representations, algorithms and techniques. This paper proposes an innovative, conflict-free, two-layer approach to agent programming that uses already established methods and tools from both symbolic and probabilistic artificial intelligence. Moreover, this method is illustrated by means of a widely used agent programming example, GOLDMINERS.
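    The two-layer idea can be sketched as a probabilistic perception layer that asserts symbolic beliefs only when the evidence is strong enough, leaving the symbolic agent program untouched. In the Python sketch below, the 0.8 threshold, the percept names, and the set-based belief base are assumptions made for illustration.

```python
# Probability above which a percept becomes a symbolic belief
# (an assumed cutoff, not a value from the paper).
BELIEF_THRESHOLD = 0.8

def revise_beliefs(belief_base, percept_probabilities):
    """Assert a symbolic belief when its probability clears the
    threshold; retract it when the evidence drops below it."""
    for atom, prob in percept_probabilities.items():
        if prob >= BELIEF_THRESHOLD:
            belief_base.add(atom)
        else:
            belief_base.discard(atom)
    return belief_base

# Usage: noisy sensor readings about gold at two grid locations.
beliefs = set()
revise_beliefs(beliefs, {"gold(3,4)": 0.93, "gold(7,1)": 0.41})
print(beliefs)  # -> {"gold(3,4)"}
```

    The symbolic layer (e.g. an AgentSpeak(L) program) then reasons only over the crisp belief base, which is what keeps the two layers conflict-free.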

    Managing different sources of uncertainty in a BDI framework in a principled way with tractable fragments

    The Belief-Desire-Intention (BDI) architecture is a practical approach for modelling large-scale intelligent systems. In the BDI setting, a complex system is represented as a network of interacting agents – or components – each one modelled based on its beliefs, desires and intentions. However, current BDI implementations are not well-suited for modelling more realistic intelligent systems which operate in environments pervaded by different types of uncertainty. Furthermore, existing approaches for dealing with uncertainty typically do not offer syntactical or tractable ways of reasoning about uncertainty. This complicates their integration with BDI implementations, which rely heavily on fast and reactive decisions. In this paper, we advance the state of the art with respect to handling different types of uncertainty in BDI agents. The contributions of this paper are, first, a new way of modelling the beliefs of an agent as a set of epistemic states. Each epistemic state can use a distinct underlying uncertainty theory and revision strategy, and commensurability between epistemic states is achieved through a stratification approach. Second, we present a novel syntactic approach to revising beliefs given unreliable input. We prove that this syntactic approach agrees with the semantic definition, and we identify expressive fragments that are particularly useful for resource-bounded agents. Third, we introduce a full operational semantics that extends CAN, a popular semantics for BDI, to establish how reasoning about uncertainty can be tightly integrated into the BDI framework. Fourth, we provide comprehensive experimental results to highlight the usefulness and feasibility of our approach, and explain how the generic epistemic state can be instantiated into various representations.
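    A brief sketch may help fix the multi-epistemic-state idea: each epistemic state hides its own representation and revision strategy behind a common interface, and a stratified belief base consults the strata in turn. The class names and the two toy theories below (a classical belief set and a threshold-based ranked state) are illustrative assumptions, not the paper's formal constructions.

```python
from abc import ABC, abstractmethod

class EpistemicState(ABC):
    """Common interface: each state chooses its own uncertainty
    theory and revision strategy behind these two methods."""
    @abstractmethod
    def revise(self, formula, strength): ...
    @abstractmethod
    def entails(self, formula): ...

class SetState(EpistemicState):
    """Classical belief set: inputs are accepted outright."""
    def __init__(self):
        self.facts = set()
    def revise(self, formula, strength):
        self.facts.add(formula)
    def entails(self, formula):
        return formula in self.facts

class RankedState(EpistemicState):
    """Ranking-style state: keeps the highest strength seen per formula
    and only entails formulas above a plausibility threshold."""
    def __init__(self, threshold=0.5):
        self.ranks = {}
        self.threshold = threshold
    def revise(self, formula, strength):
        self.ranks[formula] = max(strength, self.ranks.get(formula, 0.0))
    def entails(self, formula):
        return self.ranks.get(formula, 0.0) >= self.threshold

class StratifiedBeliefBase:
    """Holds an ordered list of strata; a formula is believed
    if any stratum entails it."""
    def __init__(self, strata):
        self.strata = strata
    def entails(self, formula):
        return any(s.entails(formula) for s in self.strata)

# Usage: crisp facts alongside uncertain, sensor-derived beliefs.
base = StratifiedBeliefBase([SetState(), RankedState()])
base.strata[0].revise("at(home)", 1.0)
base.strata[1].revise("raining", 0.4)
print(base.entails("at(home)"), base.entails("raining"))  # True False
```

    Keeping each theory behind one interface is what lets a resource-bounded agent mix, say, a crisp store with a probabilistic one while the BDI execution cycle stays unchanged.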