The need for transparency of predictive systems based on Machine Learning
algorithms arises as a consequence of their ever-increasing proliferation in
industry. Whenever black-box algorithmic predictions influence human
affairs, the inner workings of these algorithms should be scrutinised and their
decisions explained to the relevant stakeholders, including the system
engineers, the system's operators and the individuals whose cases are being
decided. While a variety of interpretability and explainability methods are
available, none of them is a panacea that can satisfy all of the diverse
expectations and competing objectives of the parties involved. We
address this challenge in this paper by discussing the promises of Interactive
Machine Learning for improved transparency of black-box systems using the
example of contrastive explanations -- a state-of-the-art approach to
Interpretable Machine Learning.
Specifically, we show how to personalise counterfactual explanations by
interactively adjusting their conditional statements and how to extract
additional explanations by asking follow-up "What if?" questions.
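As a rough illustration of this interaction pattern, the sketch below shows a toy counterfactual explainer for a hypothetical loan-approval model. The black_box_predict function, the feature names, the numeric step size and the greedy search are all illustrative assumptions rather than the system described here or any particular library; they only demonstrate how an explainee might restrict the explanation's conditional statement to chosen features and pose a follow-up "What if?" query.

```python
from dataclasses import dataclass


def black_box_predict(instance):
    """Hypothetical black-box model: approves a loan when the applicant's
    income is at least half of the requested amount (illustrative only)."""
    return "approved" if instance["income"] >= 0.5 * instance["loan"] else "rejected"


@dataclass
class CounterfactualExplainer:
    """Toy interactive counterfactual (contrastive) explainer -- a sketch."""
    model: callable
    step: float = 1_000.0  # granularity of the search over numeric features

    def explain(self, instance, foil, allowed_features):
        """Search for a change, restricted to the features the explainee
        allows to vary, that flips the prediction to the desired (foil) class."""
        for feature in allowed_features:
            candidate = dict(instance)
            for _ in range(100):  # bounded greedy search, purely illustrative
                candidate[feature] += self.step
                if self.model(candidate) == foil:
                    return (f"Had your {feature} been {candidate[feature]:,.0f} "
                            f"instead of {instance[feature]:,.0f}, the decision "
                            f"would have been '{foil}'.")
        return "No counterfactual found under the chosen conditions."

    def what_if(self, instance, **changes):
        """Answer a follow-up 'What if?' question by re-querying the model."""
        return self.model({**instance, **changes})


explainer = CounterfactualExplainer(model=black_box_predict)
applicant = {"income": 20_000, "loan": 60_000}

# The explainee personalises the conditional statement: keep the loan amount
# fixed and only allow income to vary.
print(explainer.explain(applicant, foil="approved", allowed_features=["income"]))

# Follow-up interaction: "What if I asked for a smaller loan instead?"
print(explainer.what_if(applicant, loan=35_000))
```

In a real deployment the greedy probing would be replaced by a proper counterfactual search procedure, and the interaction would take place through a dialogue or user interface rather than direct function calls.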
Our experience in building, deploying and presenting this type of system
allowed us to list desired properties as well as potential limitations, which
can be used to guide
the development of interactive explainers. While customising the medium of
interaction, i.e., the user interface comprising various communication
channels, may give an impression of personalisation, we argue that adjusting
the explanation itself and its content is more important. To this end,
properties such as breadth, scope, context, purpose and target of the
explanation have to be considered, in addition to explicitly informing the
explainee about its limitations and caveats.

Comment: Published in the Künstliche Intelligenz journal, special issue on
Challenges in Interactive Machine Learning.