Nowadays, deep neural networks are widely used in a variety of fields that
have a direct impact on society. Although these models typically achieve
outstanding performance, they have long been used as black boxes. To address
this, Explainable Artificial Intelligence (XAI) has emerged as a field that
aims to improve the transparency of such models and increase their
trustworthiness. We propose a retraining pipeline that consistently improves
model predictions, starting from XAI and leveraging state-of-the-art
techniques. Specifically, we use the XAI results, namely SHapley Additive
exPlanations (SHAP) values, to assign specific training weights to the data
samples. This leads to improved model training and, consequently,
better performance. In order to benchmark our method, we evaluate it on both
real-life and public datasets. First, we apply the method to a radar-based
people-counting scenario. Afterward, we test it on CIFAR-10, a public
Computer Vision dataset. Experiments using the SHAP-based retraining approach
achieve 4% higher accuracy than standard equal-weight retraining on the
people-counting task. Moreover, on CIFAR-10, our SHAP-based weighting strategy
yields 3% higher accuracy than the training procedure with equally weighted
samples.

Comment: accepted at ICMLA 202
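The core idea, mapping per-sample SHAP values to training weights, can be sketched as follows. The abstract does not specify the weighting rule, so the softmax mapping, the `shap_to_weights` helper, and the example scores below are illustrative assumptions rather than the paper's actual method:

```python
import math

def shap_to_weights(shap_scores, temperature=1.0):
    """Map per-sample SHAP-derived scores to normalized training weights
    via a softmax. The exact mapping used in the paper is not given in
    the abstract; this is only an illustrative assumption."""
    exps = [math.exp(s / temperature) for s in shap_scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical per-sample scores, e.g. the mean |SHAP| attribution mass
# the model assigns away from the true class (higher = harder sample).
scores = [0.1, 0.5, 0.9, 0.3]
weights = shap_to_weights(scores)
```

In a real pipeline, the scores would come from an explainer such as `shap.Explainer` applied to the trained model, and the resulting weights would be passed to the retraining loop (e.g. as per-sample loss weights).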