One of the main drivers of the recent advances in authorship verification is
the PAN large-scale authorship dataset. Although this dataset has driven significant
progress in the field, inconsistent performance differences between its closed and
open test sets have been reported. To address this, we improve the experimental setup by
proposing five new public splits over the PAN dataset, specifically designed to
isolate and identify biases related to the text topic and to the author's
writing style. We evaluate several BERT-like baselines on these splits, showing
that such models are competitive with state-of-the-art authorship verification
methods. Furthermore, using explainable AI, we find that these baselines are
biased towards named entities. We show that models trained without named
entities obtain better results and generalize better when tested on DarkReddit,
our new dataset for authorship verification.
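
The abstract does not detail how named entities are removed before training; a minimal sketch of one plausible preprocessing pass, assuming spaCy's off-the-shelf en_core_web_sm NER model and a generic [ENT] mask token (both our assumptions, not the paper's stated pipeline), might look like:

```python
import spacy

# Off-the-shelf English NER model (assumption; the paper's exact NER
# component and tagset are not specified in the abstract).
nlp = spacy.load("en_core_web_sm")

def mask_named_entities(text: str, mask: str = "[ENT]") -> str:
    """Replace every detected named-entity span with a generic mask token."""
    doc = nlp(text)
    out, last = [], 0
    # doc.ents are non-overlapping and sorted by character offset.
    for ent in doc.ents:
        out.append(text[last:ent.start_char])
        out.append(mask)
        last = ent.end_char
    out.append(text[last:])
    return "".join(out)

print(mask_named_entities("Alice moved from Paris to Berlin in 2019."))
# -> "[ENT] moved from [ENT] to [ENT] in [ENT]."
```

Masking entities with a placeholder, rather than deleting them outright, keeps sentence structure intact so that a verification model still sees well-formed input while topic- and identity-revealing tokens are hidden.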