Machine learning (ML) models are often valued for the accuracy of their
predictions. However, in some areas of science, the inner workings of models
are as relevant as their accuracy. Interpretability algorithms are the preferred
option for understanding how ML models work internally. Unfortunately, despite
the diversity of available algorithms, they often disagree when explaining the
same model, producing contradictory explanations. To cope with this
issue, consensus functions can be applied once the models have been explained.
Nevertheless, the problem is not completely solved because the final result
will depend on the selected consensus function and other factors. In this
paper, six consensus functions have been evaluated for the explanation of five
ML models. The models were previously trained on four synthetic datasets whose
internal rules were known in advance. The models were then explained with
model-agnostic local and global interpretability algorithms. Finally, consensus
was calculated with six different functions, including one developed by the
authors. The results demonstrated that the proposed function is fairer than the
others and provides more consistent and accurate explanations.