Despite their successes in NLP, Transformer-based language models still
require extensive computing resources and underperform in low-resource or low-compute
settings. In this paper, we present AxomiyaBERTa, a novel BERT model for
Assamese, a morphologically-rich low-resource language (LRL) of Eastern India.
AxomiyaBERTa is trained only on the masked language modeling (MLM) task,
without the next sentence prediction (NSP) objective typically used alongside it, and
our results show that in resource-scarce settings for very low-resource
languages like Assamese, MLM alone can be successfully leveraged for a range of
tasks. AxomiyaBERTa achieves SOTA on token-level tasks like Named Entity
Recognition and also performs well on "longer-context" tasks like Cloze-style
QA and Wiki Title Prediction, with the assistance of a novel embedding
disperser and phonological signals, respectively. Moreover, we show that
AxomiyaBERTa can leverage phonological signals for even more challenging tasks,
such as a novel cross-document coreference task on a translated version of the
ECB+ corpus, where we present a new SOTA result for an LRL. Our source code and
evaluation scripts may be found at https://github.com/csu-signal/axomiyaberta.
Comments: 16 pages, 6 figures, 8 tables; appearing in Findings of the ACL: ACL 2023. This version was compiled using a pdfLaTeX-compatible Assamese script font, so Assamese text may appear differently here than in the official ACL 2023 proceedings.