4 research outputs found

    Culture-Based Explainable Human-Agent Deconfliction

    Law codes and regulations have helped organise societies for centuries, and as AI systems gain more autonomy, we ask how human-agent systems can operate as peers under the same norms, especially when resources are contested. We posit that agents must be accountable and explainable, able to refer to the rules that justify their decisions. The need for explanations is associated with user acceptance and trust. This paper's contribution is twofold: i) we propose an argumentation-based human-agent architecture that maps human regulations into a culture for artificial agents with explainable behaviour. The architecture builds on argumentative dialogues and generates explanations from the history of such dialogues; and ii) we validate the architecture with a user study in the context of human-agent path deconfliction. Our results show that explanations yield a significantly larger improvement in human performance when systems are more complex. Consequently, we argue that the criteria defining the need for explanations should also account for the complexity of a system. Qualitative findings show that when rules are more complex, explanations significantly reduce the perceived challenge for humans.
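    As a rough illustration of the idea described in this abstract (not the paper's actual architecture), the hypothetical Python sketch below treats a "culture" as a set of prioritised rules and assembles an explanation from the dialogue moves that defended the winning rule; the names Rule and deconflict are invented for this sketch.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    priority: int                      # higher priority wins when rules conflict
    applies: Callable[[dict], bool]    # predicate over the shared context

def deconflict(context, rules):
    """Pick the winning rule and build an explanation from the dialogue history."""
    dialogue, winner = [], None
    for rule in sorted(rules, key=lambda r: -r.priority):
        if rule.applies(context):
            if winner is None:
                winner = rule
                dialogue.append(f"CLAIM: act on '{rule.name}' (priority {rule.priority})")
            else:
                dialogue.append(f"REBUTTAL by '{rule.name}' defeated: outranked by '{winner.name}'")
    return winner, "; ".join(dialogue)

culture = [
    Rule("give way to the emergency vessel", 2, lambda c: c["other_is_emergency"]),
    Rule("keep right in a shared corridor", 1, lambda c: True),
]
decision, explanation = deconflict({"other_is_emergency": True}, culture)
print(decision.name)    # -> give way to the emergency vessel
print(explanation)      # -> CLAIM ...; REBUTTAL ...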

    Instantiating metalevel argumentation frameworks

    We directly instantiate metalevel argumentation frameworks (MAFs) to enable argumentation-based reasoning about information relevant to various applications. The advantage is that information that typically cannot be incorporated by instantiating object-level argumentation frameworks can now be incorporated, in particular information referencing (1) preferences over arguments, (2) the rationale for attacks, and (3) the dialectical effect of critical questions, which shift the burden of proof when posed. We achieve this by using a variant of ASPIC+ and a higher-order typed language that can reference object-level formulae and arguments. We illustrate these representational advantages with a running example from clinical decision support.
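    By way of illustration only (a simplification of the general idea, not the ASPIC+ variant or typed language used in the paper), the sketch below lifts an object-level framework to the metalevel so that attacks and preferences become arguments in their own right and a preference can defeat an attack; grounded_extension is an invented helper, not a library API.

# Object-level arguments and an attack, e.g. a clinical guideline conflict.
object_args = {"give_aspirin", "withhold_aspirin"}
object_attacks = {("withhold_aspirin", "give_aspirin")}
preferences = {("give_aspirin", "withhold_aspirin")}    # give_aspirin is preferred

# Metalevel arguments: one per object-level argument, attack, and preference.
meta_args = (
    {("justified", a) for a in object_args}
    | {("attack", x, y) for (x, y) in object_attacks}
    | {("preferred", x, y) for (x, y) in preferences}
)

# Metalevel attacks: an attack argument attacks the justification of its target,
# and a preference for the target attacks the attack argument itself.
meta_attacks = (
    {(("attack", x, y), ("justified", y)) for (x, y) in object_attacks}
    | {(("preferred", y, x), ("attack", x, y))
       for (x, y) in object_attacks if (y, x) in preferences}
)

def grounded_extension(args, attacks):
    """Iterate the characteristic function from the empty set (grounded semantics)."""
    accepted = set()
    while True:
        defeated = {b for (a, b) in attacks if a in accepted}
        new = {a for a in args
               if all(x in defeated for (x, b) in attacks if b == a)}
        if new == accepted:
            return accepted
        accepted = new

# The preference defeats the attack, so ('justified', 'give_aspirin') is reinstated.
print(grounded_extension(meta_args, meta_attacks))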