This paper describes our submission to the AfriSenti-SemEval Shared Task 12 of SemEval-2023. The task aims to perform
monolingual sentiment classification (sub-task A) for 12 African languages,
multilingual sentiment classification (sub-task B), and zero-shot sentiment
classification (sub-task C). For sub-task A, we conducted experiments using
classical machine learning classifiers, Afro-centric language models, and
language-specific models. For sub-task B, we fine-tuned multilingual pre-trained
language models that support many of the languages in the task. For sub-task C,
we used a parameter-efficient adapter approach that leverages monolingual texts
in the target language for effective zero-shot transfer. Our
findings suggest that using pre-trained Afro-centric language models improves
performance for low-resource African languages. We also ran experiments using
adapters for the zero-shot sub-task, and the results suggest that adapters can
yield promising performance even with a limited amount of resources.
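As a rough illustration of the adapter-based zero-shot setup, the sketch below follows a MAD-X-style recipe: a language adapter is trained with masked language modelling on monolingual target-language text, a task adapter is trained on labelled source-language sentiment data, and the two are stacked at inference time. It assumes the AdapterHub `adapters` library on top of Hugging Face Transformers; the backbone model, adapter names, and paths are illustrative assumptions, not the exact configuration of our submission.

    # Minimal sketch of adapter-based zero-shot transfer (MAD-X-style),
    # using the AdapterHub `adapters` library with Hugging Face Transformers.
    # Backbone, adapter names, and paths are illustrative assumptions.
    import adapters
    from adapters.composition import Stack
    from transformers import AutoModelForMaskedLM, AutoModelForSequenceClassification

    BACKBONE = "Davlan/afro-xlmr-base"  # assumed multilingual backbone

    # 1) Language adapter: trained with masked language modelling on monolingual
    #    text in the (unlabelled) target language; only adapter weights are updated.
    mlm = AutoModelForMaskedLM.from_pretrained(BACKBONE)
    adapters.init(mlm)
    mlm.add_adapter("lang_tgt", config="seq_bn_inv")  # bottleneck + invertible adapter
    mlm.train_adapter("lang_tgt")
    # ... standard MLM training loop over monolingual target-language text ...
    mlm.save_adapter("saved/lang_tgt", "lang_tgt")

    # 2) Task adapter: trained on labelled sentiment data in the source language(s),
    #    again keeping the backbone frozen.
    clf = AutoModelForSequenceClassification.from_pretrained(BACKBONE, num_labels=3)
    adapters.init(clf)
    clf.add_adapter("sentiment", config="seq_bn")
    clf.train_adapter("sentiment")
    # ... fine-tune the task adapter on source-language sentiment data ...
    clf.save_adapter("saved/sentiment", "sentiment")

    # 3) Zero-shot inference: stack the target-language adapter under the task
    #    adapter; no labelled target-language data is required.
    clf.load_adapter("saved/lang_tgt", load_as="lang_tgt")
    clf.active_adapters = Stack("lang_tgt", "sentiment")

Because only the small adapter modules are updated while the backbone stays frozen, this setup keeps the trainable parameter count low, which is what makes it practical when compute and target-language resources are limited.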