AI for Explaining Decisions in Multi-Agent Environments
Explanation is necessary for humans to understand and accept decisions made
by an AI system when the system's goal is known. It is even more important when
the AI system makes decisions in multi-agent environments where the human does
not know the system's goals, since these may depend on other agents' preferences.
In such situations, explanations should aim to increase user satisfaction,
taking into account the system's decision, the user's and the other agents'
preferences, the environment settings and properties such as fairness, envy and
privacy. Generating explanations that will increase user satisfaction is very
challenging; to this end, we propose a new research direction: xMASE. We then
review the state of the art and discuss research directions towards efficient
methodologies and algorithms for generating explanations that will increase
users' satisfaction with the AI system's decisions in multi-agent environments.

Comment: This paper has been submitted to the Blue Sky Track of the AAAI 2020 conference. At the time of submission, it is under review. The tentative notification date is November 10, 2019. Current version: the name of the first author has been added in the metadata.