    Intrinsic Rewards for Maintenance, Approach, Avoidance and Achievement Goal Types

    In reinforcement learning, reward is used to guide the learning process. The reward is often designed to be task-dependent, and designing a good reward function may require significant domain knowledge. This paper proposes general reward functions for maintenance, approach, avoidance, and achievement goal types. These reward functions exploit the inherent property of each type of goal and are thus task-independent. We also propose metrics to measure an agent's performance at learning each type of goal. We evaluate the intrinsic reward functions in a framework that can autonomously generate goals and learn solutions to those goals using a standard reinforcement learning algorithm. We show empirically how the proposed reward functions lead to learning in a mobile robot application. Finally, using the proposed reward functions as building blocks, we demonstrate how compound reward functions, that is, reward functions that generate sequences of tasks, can be created, allowing the mobile robot to learn more complex behaviors.
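    The mapping from goal type to reward signal can be illustrated concretely. Below is a minimal sketch of task-independent rewards in the spirit of this abstract; the condition predicates, the distance measure, and the reward magnitudes are illustrative assumptions, not the paper's exact definitions.

```python
# Illustrative task-independent reward signals for the four goal types.
# The predicates and the distance measure are supplied by the caller;
# the paper's exact formulations may differ.

def achievement_reward(goal_reached: bool) -> float:
    """One-off positive reward when the target state is reached."""
    return 1.0 if goal_reached else 0.0

def maintenance_reward(condition_holds: bool) -> float:
    """Per-step reward for keeping the maintained condition true."""
    return 1.0 if condition_holds else -1.0

def approach_reward(prev_dist: float, curr_dist: float) -> float:
    """Reward progress towards a target: positive when the distance shrinks."""
    return prev_dist - curr_dist

def avoidance_reward(prev_dist: float, curr_dist: float) -> float:
    """Reward moving away from an undesired state: the sign-flipped approach case."""
    return curr_dist - prev_dist
```

    A compound reward in this style could, for example, switch from the approach signal to the maintenance signal once the robot enters the target region, producing a sequence of tasks from these primitives.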

    CAMP-BDI: A Pre-emptive Approach for Plan Execution Robustness in Multiagent Systems

    Belief-Desire-Intention agents in realistic environments may face unpredictable exogenous changes that threaten intended plans, as well as debilitative failure effects that threaten reactive recovery. In this paper we present the CAMP-BDI (Capability Aware, Maintaining Plans) approach, in which BDI agents use introspective reasoning to modify intended plans so as to avoid anticipated failure. We also describe an extension of this approach to the distributed case, using a decentralized process driven by structured messaging. Our results show significant improvements in goal achievement over a reactive failure-recovery mechanism in a stochastic environment with debilitative failure effects, and suggest that CAMP-BDI offers a valuable complementary approach to agent robustness.
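    The control flow described here can be sketched as follows. This is a hypothetical illustration of pre-emptive plan maintenance, not CAMP-BDI's actual algorithm: the Plan class, the capability/repair/perform callables, and the 0.7 confidence threshold are all invented for the example.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Plan:
    steps: List[str]   # remaining-work view of an intended plan (assumed shape)
    cursor: int = 0

    def remaining(self) -> List[str]:
        return self.steps[self.cursor:]

def execute_with_maintenance(
    plan: Plan,
    capability: Callable[[List[str]], float],  # confidence in the remaining steps
    repair: Callable[[Plan], Plan],            # pre-emptive plan modification
    perform: Callable[[str], None],            # executes a single step
    threshold: float = 0.7,                    # assumed "capable enough" cut-off
) -> None:
    while plan.remaining():
        # Introspective check *before* acting, rather than recovery after failure:
        if capability(plan.remaining()) < threshold:
            plan = repair(plan)   # assumed to restore capability above threshold
            continue              # re-check the repaired plan before acting
        perform(plan.steps[plan.cursor])
        plan.cursor += 1
```

    The key contrast with reactive recovery is where the check sits: capability is assessed before each step is committed, so the plan is modified while it is still intact.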

    Maintenance goals in intelligent agents

    One popular software development strategy is that of intelligent agent systems. Agents are often programmed in terms of goals: a programmer or user defines a set of goals for an agent, and the agent is then left to determine how best to accomplish the goals assigned to it. Popular types of goals are achievement and maintenance goals. An achievement goal describes some particular state the agent would like to bring about, for example, being in a particular location or having a particular bank balance. Given an achievement goal, an agent will perform actions that it believes will lead to the achievement goal being realised. In current agent systems, maintenance goals tell an agent to ensure that some condition is always kept satisfied, for example, that a vehicle stays below a certain speed or has sufficient fuel in its tank. Currently, maintenance goals are reactive, in that they are not considered until after the maintenance condition has been violated; only then does the agent begin to perform actions to restore the maintenance condition. In this thesis, we discuss methods by which maintenance goals can be made proactive. Proactive maintenance goals may cause an agent to perform actions before a maintenance condition is violated, when it can predict that the condition will be violated in the future. This can happen because of changes to the environment or, more interestingly, because the agent itself is performing actions that will cause the violation. Operational semantics that clearly demonstrate the functionality and operation of proactive maintenance goals are developed in this thesis. We show experimentally that agents with proactive maintenance goals consume fewer resources in a variety of error-prone environments, including scenarios where the agent's beliefs understate the true values as well as scenarios where they overstate them.
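    The prediction step at the heart of this proactivity can be sketched as a simple lookahead over the agent's own planned actions. The state representation and effect model below are illustrative assumptions, not the thesis's formal semantics.

```python
from typing import Callable, List

State = dict                        # e.g. {"fuel": 40.0} (assumed representation)
Action = Callable[[State], State]   # simple effect model: action maps state to state

def first_predicted_violation(state: State,
                              planned: List[Action],
                              condition: Callable[[State], bool]) -> int:
    """Project the planned actions forward and return the index of the first
    action whose predicted outcome violates the maintenance condition,
    or -1 if no violation is predicted."""
    projected = dict(state)
    for i, act in enumerate(planned):
        projected = act(projected)
        if not condition(projected):
            return i  # act now, e.g. insert a preventative step before index i
    return -1
```

    For instance, with condition = lambda s: s["fuel"] > 10 and drive actions that each subtract some fuel, the lookahead flags the step at which fuel would drop too low, letting the agent schedule refuelling before executing it.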

    GROVE: A computationally grounded model for rational intention revision in BDI agents

    A fundamental aspect of Belief-Desire-Intention (BDI) agents is intention revision. Agents revise their intentions in order to maintain consistency between their intentions and beliefs, and consistency among their intentions. A rational agent must also account for the optimality of its intentions when revising them. To that end I present GROVE, a model of rational intention revision for BDI agents. The semantics of a GROVE agent is defined in terms of constraints and preferences on possible future executions of the agent's plans. I show that GROVE is weakly rational in the sense of Grant et al. and imposes more constraints on executions than the operational semantics for goal lifecycles proposed by Harland et al. As it may not be computationally feasible to consider all possible future executions, I propose a bounded version of GROVE that samples the set of future executions, and I state conditions under which bounded GROVE commits to a rational execution.
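    The bounded variant lends itself to a compact sketch: sample a budget of candidate executions, discard those violating the hard constraints, and commit to the most preferred survivor. The function names and the scoring interface below are assumptions made for illustration; GROVE's actual constraint and preference machinery is richer than this.

```python
from typing import Callable, List, Optional, TypeVar

E = TypeVar("E")  # an execution: one possible future interleaving of plan steps

def bounded_revise(sample: Callable[[], E],                 # draws one candidate execution
                   constraints: List[Callable[[E], bool]],  # hard requirements
                   preference: Callable[[E], float],        # higher is better
                   budget: int = 100) -> Optional[E]:
    """Sample a bounded set of future executions, keep the feasible ones,
    and commit to the most preferred; return None if none is feasible."""
    feasible = [e for e in (sample() for _ in range(budget))
                if all(check(e) for check in constraints)]
    return max(feasible, key=preference) if feasible else None
```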

    On Proactivity and Maintenance Goals

    Goals are an important concept in intelligent agent systems and can take a variety of forms. One such form is maintenance goals, which, unlike achievement goals, define states that must remain true rather than a state that is to be achieved. Maintenance goals are generally restricted to acting as trigger conditions for goals or plans, and they often take no part in any deliberation process. Such goals are reactive and are acted upon only when the maintenance conditions are no longer true. In this paper, we study maintenance goals that are proactive, in that the agent system must not only react when the maintenance conditions fail, but also anticipate failures of these conditions and act to avoid them. This can be done by performing actions that prevent the condition from failing, or by suspending goals whose pursuit would cause the maintenance conditions to fail. We provide a representation for maintenance goals that captures both their reactive and proactive aspects, algorithms that identify in advance where maintenance conditions may not hold, and mechanisms for enabling preventative actions in such situations. We also provide experimental results from an implementation of these ideas.
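    A representation that captures both aspects side by side might look like the sketch below; the field names and the step method are illustrative stand-ins, not the paper's formalism.

```python
from dataclasses import dataclass
from typing import Callable, Dict

State = Dict[str, float]  # assumed state representation

@dataclass
class MaintenanceGoal:
    condition: Callable[[State], bool]           # must remain true
    recover: Callable[[State], None]             # reactive: restore after violation
    prevent: Callable[[State], None]             # proactive: avert a predicted failure
    predicts_violation: Callable[[State], bool]  # lookahead over pending actions

    def step(self, state: State) -> None:
        if not self.condition(state):
            self.recover(state)              # reactive aspect: condition already broken
        elif self.predicts_violation(state):
            self.prevent(state)              # proactive aspect: act before it breaks
```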