Transformers are among the most prominent architectures for a wide range of
Natural Language Processing tasks. These models are pre-trained on large
text corpora and provide state-of-the-art results on downstream tasks like
text classification. In this work, we conduct a comparative study between
monolingual and multilingual BERT models. We focus on the Marathi language and
evaluate the models on the datasets for hate speech detection, sentiment
analysis and simple text classification in Marathi. We use standard
multilingual models such as mBERT, IndicBERT and XLM-RoBERTa and compare them
with MahaBERT, MahaALBERT and MahaRoBERTa, the monolingual models for Marathi. We
further show that Marathi monolingual models outperform the multilingual BERT
variants in five different downstream fine-tuning experiments. We also evaluate
sentence embeddings from these models by freezing the BERT encoder layers. We
show that monolingual MahaBERT-based models provide richer representations
than sentence embeddings from their multilingual counterparts. However, we
observe that these embeddings are not generic enough and do not perform well
on out-of-domain social media datasets. We consider two Marathi hate speech
datasets, L3Cube-MahaHate and HASOC-2021, a Marathi sentiment classification
dataset, L3Cube-MahaSent, and the Marathi Headline and Articles classification
datasets.
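To make the frozen-encoder evaluation concrete, the following is a minimal sketch (not the authors' code) of extracting sentence embeddings from a BERT encoder whose layers are frozen, using mean pooling over the final hidden states. It assumes the HuggingFace Transformers API and that MahaBERT is available under the hub id "l3cube-pune/marathi-bert"; both the model id and the pooling choice are illustrative assumptions.

import torch
from transformers import AutoTokenizer, AutoModel

model_name = "l3cube-pune/marathi-bert"  # assumed hub id for MahaBERT
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Freeze every encoder parameter, so only a downstream classifier would train.
for param in model.parameters():
    param.requires_grad = False
model.eval()

def sentence_embedding(text: str) -> torch.Tensor:
    """Mean-pool the final hidden states over non-padding tokens."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    hidden = outputs.last_hidden_state             # (1, seq_len, hidden_dim)
    mask = inputs["attention_mask"].unsqueeze(-1)  # (1, seq_len, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

emb = sentence_embedding("ही एक चाचणी आहे.")  # a Marathi test sentence
print(emb.shape)  # torch.Size([1, 768]) for a BERT-base encoder

The resulting fixed-size vectors can then be fed to a lightweight classifier (e.g. logistic regression) to compare the frozen representations of the monolingual and multilingual models.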