Large pre-trained language models have become popular for many applications
and form an important backbone of many downstream tasks in natural language
processing (NLP). Applying 'explainable artificial intelligence' (XAI)
techniques to enrich such models' outputs is considered crucial for assuring
their quality and shedding light on their inner workings. However, large
language models are trained on vast amounts of data containing a variety of
biases, such as gender bias, which affect model weights and, potentially,
behavior. Currently, it is unclear to what extent such biases also impact model
explanations in possibly unfavorable ways. We create a gender-controlled text
dataset, GECO, in which otherwise identical sentences appear in male and female
forms. This gives rise to ground-truth 'world explanations' for gender
classification tasks, enabling the objective evaluation of the correctness of
XAI methods. We also provide GECOBench, a rigorous quantitative evaluation
framework for benchmarking popular XAI methods, which we apply to pre-trained
language models fine-tuned to different degrees. This allows us to investigate
how pre-training induces undesirable bias in model explanations and to what
extent fine-tuning can mitigate such explanation bias. We show a clear
dependency between explanation performance and the number of fine-tuned layers,
where XAI methods are observed to particularly benefit from fine-tuning or
complete retraining of embedding layers. Remarkably, this relationship holds
for models achieving similar classification performance on the same task. With
that, we highlight the utility of the proposed gender-controlled dataset and
benchmarking approach for the research and development of novel XAI methods.
All code, including dataset generation, model training, evaluation, and
visualization, is available at: https://github.com/braindatalab/gecobench
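To illustrate the kind of evaluation a ground-truth dataset such as GECO enables, the sketch below scores a single explanation against a binary token mask marking the gender-defining words. This is a minimal illustration only, assuming per-token attribution scores and using AUC as one possible correctness metric; the function and variable names are hypothetical and do not reflect the GECOBench API.

```python
import numpy as np
from sklearn.metrics import roc_auc_score


def explanation_auc(attribution, ground_truth_mask):
    """Score one explanation: AUC of per-token attribution magnitudes
    against a binary ground-truth mask of gender-defining tokens.
    (Illustrative metric, not the GECOBench implementation.)"""
    scores = np.abs(np.asarray(attribution, dtype=float))
    mask = np.asarray(ground_truth_mask, dtype=int)
    return roc_auc_score(mask, scores)


# Illustrative sentence pair: identical except for the gendered word.
male_tokens = ["He", "is", "a", "talented", "engineer"]
female_tokens = ["She", "is", "a", "talented", "engineer"]
gt_mask = [1, 0, 0, 0, 0]  # only the pronoun determines the gender label

# Hypothetical per-token attributions from some XAI method (male sentence).
attr = [0.7, 0.05, 0.02, 0.4, 0.1]
print(f"explanation AUC: {explanation_auc(attr, gt_mask):.2f}")
```

In this toy example, a faithful explanation should concentrate attribution mass on the pronoun, since it is the only token that differs between the paired sentences and hence the only token that determines the class label.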