
    Do agents dream of abiding by the rules? Learning norms via behavioral exploration and sparse human supervision

    In recent years, several normative systems have been presented in the literature. Relying on formal methods, these systems support the encoding of legal rules into machine-readable formats, enabling one, e.g., to check whether a certain workflow satisfies these rules, or whether agents abide by them. However, not all rules can be easily expressed (see for instance the unclear boundary between tax planning and tax avoidance). The paper introduces a framework for norm identification and norm induction that automates the formalization of norms about non-compliant behavior by exploring the behavioral space via simulation, and by integrating input from humans via active learning. The proposed problem formulation also builds a bridge between AI & law and more general branches of AI concerned with the adaptation of artificial agents to human directives.
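    The explore-and-query loop the abstract describes can be sketched as follows. This is a minimal illustrative toy, not the paper's method: the trace model (a single declared-income fraction), the oracle standing in for sparse human supervision, and the threshold-style induced norm are all assumptions made for the example.

    ```python
    import random

    random.seed(0)

    def simulate_trace():
        # Behavioral exploration: a trace is a toy feature vector,
        # here just the fraction of income an agent declares.
        return {"declared_fraction": random.random()}

    def human_label(trace):
        # Stand-in oracle for sparse human supervision: flags traces
        # declaring under half the income as non-compliant.
        return trace["declared_fraction"] < 0.5

    def informativeness(trace, threshold):
        # Active learning: traces near the current candidate norm
        # boundary are the most worth asking a human about.
        return -abs(trace["declared_fraction"] - threshold)

    def induce_norm(labeled):
        # Induce a threshold norm: midpoint between the highest
        # non-compliant and the lowest compliant fraction seen so far.
        bad = [t["declared_fraction"] for t, y in labeled if y]
        good = [t["declared_fraction"] for t, y in labeled if not y]
        if not bad or not good:
            return 0.5
        return (max(bad) + min(good)) / 2

    labeled, threshold = [], 0.5
    for _ in range(10):                           # query budget (sparse supervision)
        pool = [simulate_trace() for _ in range(50)]
        query = max(pool, key=lambda t: informativeness(t, threshold))
        labeled.append((query, human_label(query)))
        threshold = induce_norm(labeled)
    print(round(threshold, 2))
    ```

    Only ten human labels are spent, yet because each query targets the boundary of the current candidate norm, the induced threshold converges near the oracle's implicit rule.
    
    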

    Declarative Preferences in Reactive BDI Agents

    Current agent architectures implementing the belief-desire-intention (BDI) model consider agents which respond reactively to internal and external events by selecting the first available plan. Priority between plans is hard-coded in the program, so the reasons why a certain plan is preferred remain in the programmer's mind. Recent works that attempt to include explicit preferences in BDI agents treat preferences essentially as a rationale for planning tasks to be performed at run-time, thus disrupting the reactive nature of agents. In this paper we propose a method to include declarative preferences (i.e. concerning states of affairs) in the agent program, and to use them in a manner that preserves reactivity. To achieve this, the plan prioritization step is performed offline, by (a) generating all possible outcomes of situated plan executions, (b) selecting a relevant subset of situation/outcome couplings as a representative summary for each plan, and (c) sorting the plans by evaluating summaries through the agent's preferences. The task of generating outcomes under several conditions is performed by translating the agent's procedural knowledge to an ASP program using discrete-event calculus.
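    The offline pipeline in steps (a)-(c) can be sketched with toy data. The plans, situations, and preference score below are illustrative assumptions; the paper itself derives outcomes via an ASP encoding of the agent's procedural knowledge, not via this direct simulation.

    ```python
    # (a) Generate outcomes of situated plan executions.
    def outcomes(plan, situation):
        # Toy execution model: a plan's effects override the situation's fluents.
        fluents = set(situation) | set(plan["effects"])
        return {f: plan["effects"].get(f, situation.get(f, False)) for f in fluents}

    plans = [
        {"name": "drive", "effects": {"at_work": True, "low_emissions": False}},
        {"name": "cycle", "effects": {"at_work": True, "low_emissions": True}},
    ]
    situations = [{"at_work": False, "raining": r} for r in (False, True)]

    # (b) Build a representative summary per plan: situation/outcome couplings.
    summaries = {p["name"]: [(s, outcomes(p, s)) for s in situations] for p in plans}

    # (c) Sort plans by evaluating summaries through a declarative preference,
    #     here "prefer states of affairs with low emissions".
    def score(summary):
        return sum(out.get("low_emissions", False) for _, out in summary)

    ranked = sorted(plans, key=lambda p: score(summaries[p["name"]]), reverse=True)
    print([p["name"] for p in ranked])
    ```

    The resulting ordering can then be baked into the agent script, so at run-time the agent still reacts by picking the first applicable plan, with no online deliberation.
    
    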

    Integrating CP-Nets in Reactive BDI Agents

    Computational agents based upon the belief-desire-intention (BDI) architecture generally use reactive rules to trigger the execution of plans. For various reasons, certain plans might be preferred over others at design time. Most BDI agent platforms hard-code these preferences in some form of static ordering of the reactive rules, but keeping the preferential structure implicit limits script reuse and generalization. This paper proposes an approach to add qualitative preferences over the adoption/avoidance of procedural goals into an agent script, building upon the well-known notation of conditional ceteris paribus preference networks (CP-nets). For effective execution, the procedural knowledge and the preferential structure of the agent are mapped in an offline fashion into a new reactive agent script. This solution contrasts with recent proposals integrating preferences as a rationale in the decision-making cycle, thereby overriding the reactive nature of BDI agents.
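    A minimal sketch of the compile step follows. The CP-net encoding (one conditional preference table per goal) and the compile function are assumptions made for illustration, not the paper's actual algorithm; they only show how conditional ceteris paribus preferences can be flattened offline into static, context-indexed orderings that a purely reactive script can consult.

    ```python
    # Conditional ceteris-paribus table: given the parent's value, an ordered
    # preference over adopting (True) or avoiding (False) the goal.
    cpnet = {
        "umbrella": {                 # preference on "umbrella" depends on "raining"
            "parent": "raining",
            True:  [True, False],     # raining: prefer adopting "umbrella"
            False: [False, True],     # dry: prefer avoiding it
        },
    }

    def compile_ordering(goal, context):
        """Offline step: read the CP-net entry for this goal and emit a
        static adoption/avoidance priority list for the given context."""
        table = cpnet[goal]
        return list(table[context[table["parent"]]])

    # The compiled reactive script just tries options in precomputed order.
    rainy_order = compile_ordering("umbrella", {"raining": True})
    dry_order = compile_ordering("umbrella", {"raining": False})
    print(rainy_order, dry_order)
    ```

    Because every ordering is computed before deployment, the run-time agent never consults the CP-net itself, preserving the reactive rule-firing cycle the abstract insists on.
    
    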