An Empirical Study of Pre-trained Transformers for Arabic Information Extraction
Multilingual pre-trained Transformers, such as mBERT (Devlin et al., 2019)
and XLM-RoBERTa (Conneau et al., 2020a), have been shown to enable
effective cross-lingual zero-shot transfer. However, their performance on
Arabic information extraction (IE) tasks has not been well studied. In this
paper, we pre-train a customized bilingual BERT, dubbed GigaBERT, that is
designed specifically for Arabic NLP and English-to-Arabic zero-shot transfer
learning. We study GigaBERT's effectiveness on zero-shot transfer across four
IE tasks: named entity recognition, part-of-speech tagging, argument role
labeling, and relation extraction. Our best model significantly outperforms
mBERT, XLM-RoBERTa, and AraBERT (Antoun et al., 2020) in both the supervised
and zero-shot transfer settings. We have made our pre-trained models publicly
available at https://github.com/lanwuwei/GigaBERT.

Comment: 8 pages, EMNLP 2020
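Since the pre-trained models are released publicly, a minimal sketch of loading one with the Hugging Face transformers library is shown below. The model identifier is an assumption for illustration; the exact checkpoint names are listed in the GigaBERT repository README.

```python
from transformers import AutoTokenizer, AutoModel

# Hypothetical checkpoint name -- consult the GigaBERT repo for the released identifiers.
model_id = "lanwuwei/GigaBERT-v4-Arabic-and-English"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# Encode a sentence and obtain contextual token representations,
# which can then be fed to a task-specific head (e.g., NER tagging).
inputs = tokenizer("GigaBERT covers both English and Arabic text.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```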