On the Semantic Interpretability of Artificial Intelligence Models
Artificial Intelligence models are becoming increasingly powerful and
accurate, supporting or even replacing human decision making. But with
increased power and accuracy comes higher complexity, making it hard for
users to understand how a model works and what the reasons behind its
predictions are. Humans must explain and justify their decisions, and so must
the AI models that support them in this process, making semantic
interpretability an emerging field of study. In this work, we look at
interpretability from a broader point of view, going beyond the scope of
machine learning and covering different AI fields such as distributional
semantics and fuzzy logic, among others. We examine and classify the models
according to their nature and to how they introduce interpretability
features, analyzing how each approach affects end users and pointing to gaps
that still need to be addressed to provide more human-centered
interpretability solutions.

Comment: 17 pages, 4 figures. Submitted to AI Magazine in August, 201