Transformer-based models have gained popularity in natural language processing
(NLP) and are now widely used in computer vision tasks and in multi-modal
models such as GPT-4. This paper presents a novel method to
enhance the explainability of Transformer-based image classification models.
Our method aims to improve trust in classification results and empower users to
gain a deeper understanding of the model for downstream tasks by providing
visualizations of class-specific maps. We introduce two modules: the
``Relationship Weighted Out" and the ``Cut" modules. The ``Relationship
Weighted Out" module focuses on extracting class-specific information from
intermediate layers, enabling us to highlight relevant features. Additionally,
the ``Cut" module performs fine-grained feature decomposition, taking into
account factors such as position, texture, and color. By integrating these
modules, we generate dense class-specific visual explainability maps. We
validate our method with extensive qualitative and quantitative experiments on
the ImageNet dataset. Furthermore, we conduct extensive experiments on the LRN
dataset, which is specifically designed for autonomous-driving danger alerts, to
evaluate the explainability of our method against complex backgrounds. The results
demonstrate a significant improvement over previous methods. Moreover, ablation
experiments confirm the contribution of each module, further supporting the
overall effectiveness of our proposed approach.
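
For intuition only, the sketch below illustrates one generic way class-specific
relevance can be read out of a ViT-style classifier's intermediate attention
layers by weighting them with gradients of the target class score. It is not
the paper's ``Relationship Weighted Out'' or ``Cut'' computation; the function
name, the hook-collected `attentions` list, and the simple gradient-weighted
averaging are illustrative assumptions.

```python
import torch

def class_specific_map(attentions, logits, class_idx):
    """Illustrative sketch: gradient-weighted attention readout.

    attentions: list of per-layer attention tensors, each of shape
                (heads, tokens, tokens), collected (e.g., via forward hooks)
                and still part of the autograd graph that produced `logits`.
    logits:     tensor of shape (num_classes,), the classifier output.
    class_idx:  index of the class to explain.
    """
    score = logits[class_idx]
    # Gradients of the target class score w.r.t. each layer's attention map.
    grads = torch.autograd.grad(score, attentions, retain_graph=True)

    num_patches = attentions[0].shape[-1] - 1  # token 0 assumed to be [CLS]
    relevance = torch.zeros(num_patches)
    for attn, grad in zip(attentions, grads):
        # Gradient-weighted attention, averaged over heads, keeping only
        # positive contributions toward the target class.
        weighted = (grad * attn).clamp(min=0).mean(dim=0)  # (tokens, tokens)
        relevance += weighted[0, 1:].detach()              # [CLS] -> patches

    side = int(num_patches ** 0.5)  # assumes a square patch grid, e.g. 14x14
    return relevance.reshape(side, side)
```

Upsampling the returned grid to the input resolution yields a coarse
class-specific heatmap; the paper's modules refine this idea with
class-specific weighting of intermediate layers and fine-grained feature
decomposition.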