
    Implementing Argumentation-enabled Empathic Agents

    In a previous publication, we introduced the core concepts of empathic agents as agents that use a combination of utility-based and rule-based approaches to resolve conflicts when interacting with other agents in their environment. In this work, we implement proof-of-concept prototypes of empathic agents with the multi-agent systems development framework Jason and apply argumentation theory to extend the previously introduced concepts to account for inconsistencies between the beliefs of different agents. We then analyze the feasibility of different admissible set-based argumentation semantics for resolving these inconsistencies. As a result of the analysis, we identify the maximal ideal extension as the most feasible argumentation semantics for the problem in focus.

    Comment: Accepted for/presented at the 16th European Conference on Multi-Agent Systems (EUMAS 2018).
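
    The abstract refers to admissible set-based argumentation semantics and singles out the maximal ideal extension. As a rough illustration of those notions only (not the authors' Jason implementation), the following Python sketch brute-forces a small Dung-style argumentation framework: it enumerates the admissible sets, takes the preferred (maximal admissible) extensions, and returns the largest admissible set contained in their intersection, i.e. the maximal ideal extension. The example framework in the main block is an invented toy case.

```python
from itertools import combinations

# Brute-force sketch of Dung-style abstract argumentation semantics:
# admissible sets, preferred extensions, and the maximal ideal extension.
# Only practical for small frameworks.

def conflict_free(S, attacks):
    # No member of S attacks another member of S.
    return not any((a, b) in attacks for a in S for b in S)

def defends(S, a, attacks):
    # Every attacker of `a` is counter-attacked by some member of S.
    return all(any((c, attacker) in attacks for c in S)
               for (attacker, target) in attacks if target == a)

def admissible_sets(arguments, attacks):
    # Conflict-free sets that defend each of their members.
    result = []
    for r in range(len(arguments) + 1):
        for subset in combinations(sorted(arguments), r):
            S = frozenset(subset)
            if conflict_free(S, attacks) and all(defends(S, a, attacks) for a in S):
                result.append(S)
    return result

def preferred_extensions(admissible):
    # Maximal admissible sets with respect to set inclusion.
    return [S for S in admissible if not any(S < T for T in admissible)]

def maximal_ideal_extension(arguments, attacks):
    admissible = admissible_sets(arguments, attacks)
    preferred = preferred_extensions(admissible)
    core = frozenset.intersection(*preferred) if preferred else frozenset()
    # Ideal sets: admissible sets contained in every preferred extension.
    ideal = [S for S in admissible if S <= core]
    # The maximal ideal set is unique, so the largest candidate is it.
    return max(ideal, key=len)

if __name__ == "__main__":
    # Hypothetical framework: a attacks b, b and c attack each other,
    # c attacks d. Expected maximal ideal extension: {a, c}.
    arguments = {"a", "b", "c", "d"}
    attacks = {("a", "b"), ("b", "c"), ("c", "b"), ("c", "d")}
    print(maximal_ideal_extension(arguments, attacks))
```

    For frameworks of realistic size one would use a dedicated argumentation solver rather than this exponential enumeration; the sketch only makes the semantics named in the abstract concrete.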
