Explainable Matrix -- Visualization for Global and Local Interpretability of Random Forest Classification Ensembles
Over the past decades, classification models have proven to be essential
machine learning tools given their potential and applicability in various
domains. For years, the primary goal of most researchers was to improve
quantitative metrics, notwithstanding how little information such metrics
convey about models' decisions. This paradigm has recently shifted, and
strategies beyond tables and numbers to assist in interpreting models'
decisions are gaining importance. As part of this trend, visualization
techniques have been extensively used to support classification models'
interpretability, with a significant focus on rule-based models. Despite the
advances, the existing approaches present limitations in terms of visual
scalability, and the visualization of large and complex models, such as the
ones produced by the Random Forest (RF) technique, remains a challenge. In this
paper, we propose Explainable Matrix (ExMatrix), a novel visualization method
for RF interpretability that can handle models with massive quantities of
rules. It employs a simple yet powerful matrix-like visual metaphor, where rows
are rules, columns are features, and cells are rule predicates, enabling the
analysis of entire models and the auditing of classification results. ExMatrix's
applicability is demonstrated through several examples, showing how it can be used
in practice to promote the interpretability of RF models.

Comment: IEEE VIS VAST 202
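The matrix metaphor described above (rows as rules, columns as features, cells as predicates) can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the paper's implementation: it uses scikit-learn's `RandomForestClassifier`, extracts one rule per leaf by walking each tree's structure, and arranges the resulting predicates as a rules-by-features matrix of feature intervals.

```python
# Hedged sketch: build an ExMatrix-style rules-by-features predicate matrix
# from a scikit-learn RandomForestClassifier. The rule-extraction walk and
# the interval representation are illustrative assumptions, not the paper's
# exact method.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

def extract_rules(tree, n_features):
    """Walk one decision tree; return (intervals, predicted_class) per leaf.
    intervals[f] = [lo, hi] bounds the root-to-leaf path imposes on feature f."""
    t = tree.tree_
    rules = []

    def walk(node, intervals):
        if t.children_left[node] == -1:  # leaf node (sentinel is -1)
            cls = int(np.argmax(t.value[node]))
            rules.append(([iv.copy() for iv in intervals], cls))
            return
        f, thr = t.feature[node], t.threshold[node]
        left = [iv.copy() for iv in intervals]
        left[f][1] = min(left[f][1], thr)        # left branch: x[f] <= thr
        walk(t.children_left[node], left)
        right = [iv.copy() for iv in intervals]
        right[f][0] = max(right[f][0], thr)      # right branch: x[f] > thr
        walk(t.children_right[node], right)

    walk(0, [[-np.inf, np.inf] for _ in range(n_features)])
    return rules

X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(n_estimators=3, max_depth=3, random_state=0).fit(X, y)

# Matrix rows = rules pooled from every tree, columns = features,
# cells = the interval predicate each rule imposes on that feature.
matrix = [row for est in rf.estimators_
          for row in extract_rules(est, X.shape[1])]
print(f"{len(matrix)} rules x {X.shape[1]} features")
```

Each row of `matrix` corresponds to one cell-row of the ExMatrix display; a `[-inf, inf]` cell means the rule places no constraint on that feature, which the visualization would render as an empty cell.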