
    Planning on demand in BDI systems

    The primary goals and contributions of our work are: 1) incorporating planning at specific points in a BDI application, on an as-needed basis, under the control of the programmer; 2) planning using only limited subsets of the application, making the planning more efficient; and 3) incorporating the generated plan back into the BDI system for regular BDI execution, identifying plan steps that could be pursued in parallel.
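
    As a rough illustration of this mechanism, the following self-contained Python sketch (hypothetical names and API, not the paper's actual framework) shows a plan body that invokes a planner on demand over a restricted action subset and folds the resulting steps back into ordinary BDI execution.

        # Sketch with an invented API (not the paper's): a plan body that calls a
        # planner on demand, over a limited subset of the domain, and posts the
        # result back as ordinary sub-goals.
        from typing import Callable, List, Tuple

        Step = Tuple[str, ...]

        class OnDemandAgent:
            def __init__(self, planner: Callable[[Step, List[str]], List[Step]]):
                self.planner = planner          # external planner, called only when asked
                self.trace: List[Step] = []     # stand-in for regular BDI execution

            def achieve(self, *step: str) -> None:
                self.trace.append(step)

            def deliver(self, parcel: str, dest: str) -> None:
                self.achieve("pickup", parcel)
                # Programmer-chosen planning point: only the navigation
                # sub-domain is exposed, keeping the planning problem small.
                route = self.planner(("at", parcel, dest), ["move", "unload"])
                for step in route:              # generated plan re-enters BDI execution
                    self.achieve(*step)

        toy_planner = lambda goal, acts: [("move", "depot", goal[2]), ("unload", goal[1])]
        agent = OnDemandAgent(toy_planner)
        agent.deliver("p1", "warehouse")
        print(agent.trace)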

    Planning in BDI agent systems

    Belief-Desire-Intention (BDI) agent systems are a popular approach to developing agents for complex and dynamic environments. These agents rely on context-sensitive expansion of plans, acting as they go, and consequently, they do not incorporate a generic mechanism to do any kind of “look-ahead” or offline planning. Such look-ahead is useful when, for instance, important resources may be consumed by executing steps that are not necessary for a goal; steps are not reversible and may lead to situations in which a goal cannot be solved; and side effects of steps are undesirable if they are not useful for a goal. In this thesis, we incorporate planning techniques into BDI systems. First, we provide a general mechanism for performing “look-ahead” planning, using Hierarchical Task Network (HTN) planning techniques, so that an agent may guide its selection of plans for the purpose of avoiding negative interactions between them. Unlike past work on adding such planning into BDI agents, which does so only at the implementation level without any precise semantics, we provide a solid theoretical basis for such planning. Second, we incorporate first-principles planning into BDI systems, so that new plans may be created for achieving goals. Unlike past work, which focuses on creating low-level plans, losing much of the domain knowledge encoded in BDI agents, we introduce a novel technique where plans are created by respecting and reusing the procedural domain knowledge encoded in such agents; our abstract plans can be executed in the standard BDI engine using this knowledge. Furthermore, we recognise an intrinsic tension between striving for abstract plans and, at the same time, ensuring that unnecessary actions, unrelated to the specific goal to be achieved, are avoided. To explore this tension, we characterise the set of “ideal” abstract plans that are non-redundant while maximally abstract, and then develop a more limited but feasible account where an abstract plan is “specialised” into a plan that is non-redundant and as abstract as possible. We present theoretical properties of the planning frameworks, as well as insights into their practical utility.
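
    As a rough illustration of the look-ahead idea (with assumed data structures, not the thesis's HTN formalisation), the sketch below checks whether a chosen plan can be fully decomposed against the plan library before the agent commits to it.

        # Illustrative only: HTN-style look-ahead over a toy BDI plan library.
        from typing import Dict, List, Union

        # Each goal maps to alternative bodies; a body mixes primitive actions
        # (strings) and sub-goals (dicts naming the goal to decompose).
        Library = Dict[str, List[List[Union[str, dict]]]]

        def decomposable(goal: str, library: Library, depth: int = 10) -> bool:
            """True if some body of `goal` decomposes entirely to primitives."""
            if depth == 0:
                return False
            return any(
                all(isinstance(s, str) or decomposable(s["goal"], library, depth - 1)
                    for s in body)
                for body in library.get(goal, []))

        library: Library = {
            "travel": [[{"goal": "book"}, "board", "ride"]],
            "book":   [["pay_online"], [{"goal": "visit_agency"}]],  # 2nd option dead-ends
        }
        print(decomposable("travel", library))   # True, via the pay_online branch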

    An operational semantics for a fragment of PRS

    The Procedural Reasoning System (PRS) is arguably the first implementation of the Belief–Desire–Intention (BDI) approach to agent programming. PRS remains extremely influential, directly or indirectly inspiring the development of subsequent BDI agent programming languages. However, perhaps surprisingly given its centrality in the BDI paradigm, PRS lacks a formal operational semantics, making it difficult to determine its expressive power relative to other agent programming languages. This paper takes a first step towards closing this gap, by giving a formal semantics for a significant fragment of PRS. We prove key properties of the semantics relating to PRS-specific programming constructs, and show that even the fragment of PRS we consider is strictly more expressive than the plan constructs found in typical BDI languages.
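
    For readers unfamiliar with such semantics, the rule below gives the general flavour of a BDI-style plan-selection transition; it is a generic illustration rather than one of the paper's PRS rules. Here B is the belief base, E the pending events, Pi the plan library and Gamma the current intentions.

        % Generic plan-selection rule in the style of BDI operational semantics;
        % the actual PRS rules in the paper are richer.
        \[
        \frac{P \in \Pi \qquad \mathit{trigger}(P)\,\theta = e \qquad
              B \models \mathit{context}(P)\,\theta}
             {\langle B,\; \{e\} \cup E,\; \Gamma \rangle \;\longrightarrow\;
              \langle B,\; E,\; \Gamma \cup \{\mathit{body}(P)\,\theta\} \rangle}
        \]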

    Raffinement des intentions [Intention refinement]

    Neither a French nor an English abstract was provided by the author.

    Learning plan selection for BDI agent systems

    Belief-Desire-Intention (BDI) is a popular agent-oriented programming approach for developing robust computer programs that operate in dynamic environments. These programs contain pre-programmed abstract procedures that capture domain know-how, and work by dynamically applying these procedures, or plans, to different situations that they encounter. Agent programs built using the BDI paradigm, however, do not traditionally incorporate learning, which becomes important if a deployed agent is to be able to adapt to changing situations over time. Our vision is to allow programming of agent systems that are capable of adjusting to ongoing changes in the environment’s dynamics in a robust and effective manner. To this end, in this thesis we develop a framework that can be used by programmers to build adaptable BDI agents that can improve plan selection over time by learning from their experiences. These learning agents can dynamically adjust their choice of which plan to select in which situation, based on a growing understanding of what works and a sense of how reliable this understanding is. This reliability is given by a perceived measure of confidence, which tries to capture how well-informed the agent’s most recent decisions were and how well it knows the most recent situations that it encountered. An important focus of this work is to make this approach practical. Our framework allows learning to be integrated into BDI programs of reasonable complexity, including those that use recursion and failure recovery mechanisms. We show the usability of the framework in two complete programs: an implementation of the Towers of Hanoi game where recursive solutions must be learnt, and a modular battery system controller where the environment dynamics changes in ways that may require many learning and relearning phases.
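
    A minimal sketch of the plan-selection idea follows (invented structures and a crude confidence measure, not the thesis's framework): learned success estimates are blended with a confidence term so that rarely-tried plans are not ruled out prematurely.

        # Minimal, self-contained sketch; the confidence measure here is a
        # placeholder for the thesis's more principled notion.
        import random
        from collections import defaultdict

        class LearningSelector:
            def __init__(self):
                self.tries = defaultdict(int)   # (plan, situation) -> attempts
                self.wins = defaultdict(int)    # (plan, situation) -> successes

            def confidence(self, plan, situation):
                n = self.tries[(plan, situation)]
                return n / (n + 3)              # grows with experience of this situation

            def weight(self, plan, situation):
                n = self.tries[(plan, situation)]
                success = self.wins[(plan, situation)] / n if n else 0.5
                c = self.confidence(plan, situation)
                return c * success + (1 - c) * 0.5   # fall back towards 0.5 when unsure

            def choose(self, plans, situation):
                return random.choices(
                    plans, weights=[self.weight(p, situation) for p in plans])[0]

            def record(self, plan, situation, succeeded):
                self.tries[(plan, situation)] += 1
                self.wins[(plan, situation)] += int(succeeded)

        sel = LearningSelector()
        for _ in range(50):
            p = sel.choose(["try_left", "try_right"], "disc_on_peg_A")
            sel.record(p, "disc_on_peg_A", succeeded=(p == "try_right"))
        print(sel.weight("try_right", "disc_on_peg_A"))   # approaches 1.0 with experience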

    CAMP-BDI: an approach for multiagent systems robustness through capability-aware agents maintaining plans

    Rational agent behaviour is frequently achieved through the use of plans, particularly within the widely used BDI (Belief-Desire-Intention) model for intelligent agents. As a consequence, preventing or handling failure of planned activity is a vital component in building robust multiagent systems; this is especially true in realistic environments, where unpredictable exogenous change during plan execution may threaten intended activities. Although reactive approaches can be employed to respond to activity failure through replanning or plan-repair, failure may have debilitative effects that act to stymie recovery and, potentially, hinder subsequent activity. A further factor is that BDI agents typically employ deterministic world and plan models, as probabilistic planning methods are typically intractable in realistically complex environments. However, deterministic operator preconditions may fail to represent world states which increase the risk of activity failure. The primary contribution of this thesis is the algorithmic design of the CAMP-BDI (Capability Aware, Maintaining Plans) approach: a modification of the BDI reasoning cycle which provides agents with beliefs and introspective reasoning to anticipate increased risk of failure and pro-actively modify intended plans in response. We define a capability meta-knowledge model, providing information to identify and address threats to activity success using precondition modelling and quantitative quality estimation. This also facilitates semantic-independent communication of capability information for general advertisement and of dependency information; we define use of the latter, within a structured messaging approach, to extend local agent algorithms towards decentralized, distributed robustness. Finally, we define a policy-based approach for dynamic modification of maintenance behaviour, allowing response to observations made during runtime and with potential to improve re-usability of agents in alternate environments. An implementation of CAMP-BDI is compared against an equivalent reactive system through experimentation in multiple perturbation configurations, using a logistics domain. Our empirical evaluation indicates CAMP-BDI has significant benefit if activity failure carries a strong risk of debilitative consequence.
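
    The following sketch (invented names; CAMP-BDI's actual capability meta-knowledge and policies are richer) illustrates the basic maintenance idea: each activity's capability pairs a precondition model with a quality estimate, and intended activities whose estimated quality falls below a policy threshold are flagged for pro-active repair.

        # Rough sketch only; the threshold and quality functions are placeholders.
        from dataclasses import dataclass
        from typing import Callable, Dict, List

        @dataclass
        class Capability:
            name: str
            precondition: Callable[[Dict], bool]   # deterministic precondition model
            quality: Callable[[Dict], float]       # estimated success quality in [0, 1]

        def maintain(intended: List[str], caps: Dict[str, Capability],
                     beliefs: Dict, threshold: float = 0.6) -> List[str]:
            """Return intended activities that warrant pro-active repair or replanning."""
            return [a for a in intended
                    if not caps[a].precondition(beliefs)
                    or caps[a].quality(beliefs) < threshold]

        caps = {"drive_route": Capability(
            "drive_route",
            precondition=lambda b: b["fuel"] > 0,
            quality=lambda b: 0.3 if b["storm"] else 0.9)}   # exogenous change raises risk

        print(maintain(["drive_route"], caps, {"fuel": 5, "storm": True}))
        # -> ['drive_route']: quality is below threshold, so the plan is maintained
        #    now, before the activity actually fails.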

    A belief-desire-intention architecture with a logic-based planner for agents in stochastic domains

    This dissertation investigates high-level decision making for agents that are both goal and utility driven. We develop a partially observable Markov decision process (POMDP) planner which is an extension of an agent programming language called DTGolog, itself an extension of the Golog language. Golog is based on a logic for reasoning about action, the situation calculus. A POMDP planner on its own cannot cope well with dynamically changing environments and complicated goals. This is exactly a strength of the belief-desire-intention (BDI) model: BDI theory has been developed to design agents that can select goals intelligently, dynamically abandon and adopt new goals, and yet commit to intentions for achieving goals. The contribution of this research is twofold: (1) developing a relational POMDP planner for cognitive robotics, and (2) specifying a preliminary BDI architecture that can deal with stochasticity in action and perception, by employing the planner.
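
    As a toy illustration of the stochastic setting (not the dissertation's DTGolog-based planner), the snippet below shows the POMDP belief-state update that such an architecture must maintain alongside its symbolic beliefs.

        # Bayes filter over a two-state "door" domain; the dissertation's planner
        # works in the situation calculus, so this is only a numeric illustration.
        def belief_update(belief, action, observation, T, O):
            """b'(s') proportional to O(o | s', a) * sum_s T(s' | s, a) * b(s)."""
            new_belief = {}
            for s2 in belief:
                predicted = sum(T[(s, action)].get(s2, 0.0) * belief[s] for s in belief)
                new_belief[s2] = O[(s2, action)].get(observation, 0.0) * predicted
            total = sum(new_belief.values()) or 1.0
            return {s: p / total for s, p in new_belief.items()}

        T = {("open", "push"): {"open": 1.0},                         # transition model
             ("closed", "push"): {"open": 0.8, "closed": 0.2}}
        O = {("open", "push"): {"see_open": 0.9, "see_closed": 0.1},  # observation model
             ("closed", "push"): {"see_open": 0.2, "see_closed": 0.8}}

        b = {"open": 0.5, "closed": 0.5}
        print(belief_update(b, "push", "see_open", T, O))   # belief shifts towards "open"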