We present a novel feature attribution method for explaining text
classifiers, and analyze it in the context of hate speech detection. Although
feature attribution methods usually provide a single importance score for each
token, we instead provide two complementary and theoretically grounded scores
-- necessity and sufficiency -- resulting in more informative explanations. We
propose a transparent method that calculates these values by generating
explicit perturbations of the input text, allowing the importance scores
themselves to be explainable. We employ our method to explain the predictions
of different hate speech detection models on the same set of curated examples
from a test suite, and show that different values of necessity and sufficiency
for identity terms correspond to different kinds of false positive errors,
exposing sources of classifier bias against marginalized groups.
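
As a rough illustration of the perturbation idea only (not the authors' exact
procedure), the Python sketch below estimates necessity as the rate at which
replacing a target token flips a classifier's prediction, and sufficiency as
the rate at which the prediction survives when every other token is replaced.
The function name, the toy lexicon classifier, the vocabulary, and the uniform
random replacement scheme are all hypothetical stand-ins for the paper's more
careful perturbation generation.

    import random

    def estimate_necessity_sufficiency(predict, tokens, idx, vocab,
                                       n_samples=100, seed=0):
        """Monte Carlo estimates for the token at position idx.

        necessity   ~ P(prediction flips | the target token is perturbed)
        sufficiency ~ P(prediction holds | all *other* tokens are perturbed)
        """
        rng = random.Random(seed)
        original = predict(tokens)

        # Necessity: replace only the target token, hold the rest fixed.
        flips = 0
        for _ in range(n_samples):
            perturbed = list(tokens)
            perturbed[idx] = rng.choice(vocab)
            if predict(perturbed) != original:
                flips += 1
        necessity = flips / n_samples

        # Sufficiency: keep the target token, replace every other token.
        preserved = 0
        for _ in range(n_samples):
            perturbed = [tok if j == idx else rng.choice(vocab)
                         for j, tok in enumerate(tokens)]
            if predict(perturbed) == original:
                preserved += 1
        sufficiency = preserved / n_samples

        return necessity, sufficiency

    # Toy stand-in for a hate speech detector: flags any text that
    # contains a term from a fixed lexicon.
    LEXICON = {"stupid"}

    def toy_predict(tokens):
        return int(any(tok in LEXICON for tok in tokens))

    vocab = ["the", "a", "person", "idea", "stupid", "kind", "day", "very"]
    tokens = "that is a stupid idea".split()
    nec, suf = estimate_necessity_sufficiency(toy_predict, tokens,
                                              idx=3, vocab=vocab)
    print(f"necessity={nec:.2f} sufficiency={suf:.2f}")

Under this toy setup the flagged term scores high on both measures: removing
it almost always flips the prediction (high necessity), and it triggers the
classifier regardless of its context (high sufficiency), the pattern the
paper associates with identity-term bias when it appears for innocuous terms.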