
    AI for Explaining Decisions in Multi-Agent Environments

    Explanation is necessary for humans to understand and accept decisions made by an AI system when the system's goal is known. It is even more important when the AI system makes decisions in multi-agent environments, where the human does not know the system's goals, since they may depend on other agents' preferences. In such situations, explanations should aim to increase user satisfaction, taking into account the system's decision, the user's and the other agents' preferences, the environment settings, and properties such as fairness, envy, and privacy. Generating explanations that will increase user satisfaction is very challenging; to this end, we propose a new research direction, xMASE. We then review the state of the art and discuss research directions towards efficient methodologies and algorithms for generating explanations that will increase users' satisfaction with AI systems' decisions in multi-agent environments.

    Not just for humans: Explanation for agent-to-agent communication

    Once precisely defined so as to include just the explanation's act, the notion of explanation should be regarded as a central notion in the engineering of intelligent systems, not just as an add-on to make them understandable to humans. Based on symbolic AI techniques to match intuitive and rational cognition, explanation should be exploited as a fundamental tool for inter-agent communication among heterogeneous agents in open multi-agent systems. More generally, explanation-ready agents should work as the basic components in the engineering of intelligent systems integrating both symbolic and sub-/non-symbolic AI techniques.

    The right kind of explanation: Validity in automated hate speech detection

    To quickly identify hate speech online, communication research offers a useful tool in the form of automatic content analysis. However, the combined methods of standardized manual content analysis and supervised text classification demand different quality criteria. This chapter shows that a more substantial examination of validity is necessary, since models often learn from spurious correlations or biases and researchers run the risk of drawing wrong inferences. To investigate the overlap of theoretical concepts with their technological operationalization, explainability methods are evaluated that explain what a model has learned. These methods prove to be of limited use in testing the validity of a model when the generated explanations aim at sense-making rather than faithfulness to the model. The chapter ends with recommendations for the further interdisciplinary development of automatic content analysis.