Representation, Justification and Explanation in a Value Driven Agent: An Argumentation-Based Approach
Ethical and explainable artificial intelligence is an interdisciplinary
research area spanning computer science, philosophy, logic, and the social
sciences. For an ethical autonomous system, the ability to justify and
explain its decision making is a crucial aspect of transparency and
trustworthiness. This paper takes a Value Driven Agent (VDA) as an example,
explicitly representing the implicit knowledge of a machine-learning-based
autonomous agent and using this representation to justify and explain the
agent's decisions. For this purpose, we introduce a novel formalism to describe the
intrinsic knowledge and solutions of a VDA in each situation. Based on this
formalism, we formulate an approach to justify and explain the decision-making
process of a VDA, in terms of a typical argumentation formalism,
Assumption-based Argumentation (ABA). As a result, a VDA in a given situation
is mapped onto an argumentation framework in which arguments are defined by the
notion of deduction. Actions justified under argumentation semantics
correspond to the solutions of the VDA. The acceptance (rejection) of
arguments and their premises in the framework provides an explanation for why
an action was selected (or not). Furthermore, we go beyond the existing version
of the VDA, considering not only practical reasoning but also epistemic reasoning,
so that inconsistencies in the VDA's knowledge can be identified, handled,
and explained.

Comment: 24 pages, 6 figures, submitted to JASS