Growing legislative concern over the use of Artificial Intelligence (AI) has recently led to a series of regulations striving for more transparent, trustworthy, and accountable AI. Alongside these proposals, the field of Explainable AI (XAI) has grown rapidly, but the application of its techniques has at times produced unexpected results. Robustness is, in fact, a key property that is often overlooked: the stability of an explanation under both random and adversarial perturbations must be evaluated to ensure that its results can be trusted. To this end, we propose a test to evaluate robustness to non-adversarial perturbations and an ensemble approach to analyse in greater depth the robustness of XAI methods applied to neural networks and tabular datasets. We show how leveraging the manifold hypothesis and ensemble approaches can benefit an in-depth analysis of robustness.