
Preference-Based Goal Refinement in BDI Agents

Computational agents based on the BDI framework typically rely on abstract plans and plan refinement to reach a degree of autonomy in dynamic environments: agents are given the ability to select how to achieve their goals by choosing from a set of options. In this work we focus on a related, yet under-studied, feature: abstract goals. These constructs refer to the ability of agents to adopt goals that are not fully grounded at the moment of invocation, refining them only when and where needed: the ability to select what to (concretely) achieve at run-time. We present a preference-based approach to goal refinement, defining preferences based on extended Ceteris Paribus Networks (CP-Nets) for an AgentSpeak(L)-like agent programming language, and mapping the established CP-Nets logic and algorithms to guide the goal-refinement step. As a technical contribution, we present an implementation of this method that uses only the Prolog-like inference engine of the agent's belief base to reason about preferences, thus minimally affecting the decision-making mechanisms hard-coded in the agent framework.

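The abstract itself includes no code; as a rough sketch of the kind of reasoning it describes, the Python fragment below computes the most-preferred outcome of a small acyclic CP-net with the standard forward-sweep procedure and uses it to ground an abstract goal. The variable names (transport, ticket), the goal template reach/3, and the dictionary-based CP-net encoding are illustrative assumptions, not the paper's AgentSpeak(L)/Prolog-based implementation.

```python
# Minimal sketch (illustrative, not the paper's implementation): pick a concrete
# grounding for an abstract goal by computing the optimal outcome of an acyclic
# CP-net via the standard forward sweep. All names are hypothetical examples.

# Each variable has a list of parent variables and a conditional preference
# table mapping a parent assignment to a most-to-least preferred value ordering.
cp_net = {
    "transport": {
        "parents": [],
        "cpt": {(): ["train", "car"]},              # train > car, unconditionally
    },
    "ticket": {
        "parents": ["transport"],
        "cpt": {
            ("train",): ["flexible", "fixed"],      # if train: flexible > fixed
            ("car",):   ["fixed", "flexible"],      # if car:   fixed > flexible
        },
    },
}

def most_preferred_outcome(cp_net):
    """Sweep the variables in topological order (the net is assumed acyclic),
    assigning each one its best value given the values chosen for its parents."""
    outcome = {}
    remaining = dict(cp_net)
    while remaining:
        for var, spec in list(remaining.items()):
            if all(p in outcome for p in spec["parents"]):
                key = tuple(outcome[p] for p in spec["parents"])
                outcome[var] = spec["cpt"][key][0]   # first entry = most preferred
                del remaining[var]
    return outcome

# Refine the (hypothetical) abstract goal !reach(destination, Transport, Ticket)
# by grounding its open variables with the CP-net's optimal outcome.
grounding = most_preferred_outcome(cp_net)
print(f"!reach(destination, {grounding['transport']}, {grounding['ticket']})")
```

In the paper's setting the equivalent reasoning is reportedly carried out by the Prolog-like inference engine over the agent's belief base, so the agent framework's built-in deliberation cycle is left untouched; the Python version above only illustrates how CP-net preferences induce a choice among candidate groundings.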