Continual Learning trains models on a stream of data, with the aim of
learning new information without forgetting previous knowledge. Given the
dynamic nature of such environments, explaining the predictions of these models
can be challenging. We study the behavior of SHAP value explanations in
Continual Learning and propose an evaluation protocol to robustly assess the
change of explanations in Class-Incremental scenarios. We observe that, while
Replay strategies enforce the stability of SHAP values in
feedforward/convolutional models, they are not able to do the same with
fully-trained recurrent models. We show that alternative recurrent approaches,
like randomized recurrent models, are more effective in keeping the
explanations stable over time.
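
The evaluation protocol tracks how explanations evolve as new classes arrive. A minimal sketch of that idea is given below; it is not the paper's exact protocol, and the synthetic data, the MLPClassifier with a naive replay buffer, the KernelExplainer, and the L2 distance between successive SHAP values on a fixed reference set are all illustrative assumptions.

```python
# Sketch: measure how SHAP explanations drift across experiences in a
# class-incremental stream trained with a simple replay buffer.
# All modelling choices here are illustrative, not the authors' setup.
import numpy as np
import shap
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def make_experience(classes, n=200, d=10):
    """Synthetic experience introducing the given classes (class-incremental)."""
    X = rng.normal(size=(n, d))
    y = rng.choice(classes, size=n)
    X[np.arange(n), y % d] += 2.0  # shift one feature per class to make them separable
    return X, y

experiences = [make_experience([0, 1]), make_experience([2, 3])]
all_classes = [0, 1, 2, 3]

model = MLPClassifier(hidden_layer_sizes=(32,))
X_ref = rng.normal(size=(20, 10))        # fixed reference set to explain after each experience
background = rng.normal(size=(30, 10))   # background data for KernelExplainer
replay_X, replay_y = None, None

prev_shap, drifts = None, []
for X, y in experiences:
    # Mix current data with the replay buffer (naive replay strategy).
    if replay_X is not None:
        X_train = np.vstack([X, replay_X])
        y_train = np.concatenate([y, replay_y])
    else:
        X_train, y_train = X, y
    for _ in range(50):                  # a few incremental passes over this experience
        model.partial_fit(X_train, y_train, classes=all_classes)

    # Store a small sample of this experience in the replay buffer.
    idx = rng.choice(len(X), size=50, replace=False)
    replay_X = X[idx] if replay_X is None else np.vstack([replay_X, X[idx]])
    replay_y = y[idx] if replay_y is None else np.concatenate([replay_y, y[idx]])

    # Explain the same reference set after each experience and compare
    # successive explanations (assumed drift measure: normalized L2 distance).
    explainer = shap.KernelExplainer(model.predict_proba, background)
    sv = np.asarray(explainer.shap_values(X_ref))
    if prev_shap is not None:
        drifts.append(np.linalg.norm(sv - prev_shap) / sv.size)
    prev_shap = sv

print("Mean SHAP change between consecutive experiences:", drifts)
```

Lower drift values indicate more stable explanations over the stream; the same loop could be repeated with a recurrent or randomized recurrent model in place of the feedforward classifier to compare strategies.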