A Survey of Cross-Lingual Sentiment Analysis Based on Pre-Trained Models
With the development of natural language processing technology, researchers have widely studied monolingual Sentiment Analysis (SA) using Machine Learning (ML) and Deep Learning (DL). However, there is not much work on Cross-Lingual SA (CLSA), although it is beneficial when dealing with low-resource languages (e.g., Tamil, Malayalam, Hindi, and Arabic). This paper surveys the main challenges and issues of CLSA based on pre-trained language models and reviews the leading methods used to cope with CLSA. In particular, we compare and analyze their pros and cons. Moreover, we summarize valuable cross-lingual resources and point out the main problems researchers need to solve in the future.
HausaNLP at SemEval-2023 Task 12: Leveraging African Low Resource Tweet Data for Sentiment Analysis
We present the findings of SemEval-2023 Task 12, a shared task on sentiment analysis for low-resource African languages using Twitter datasets. The task featured three subtasks: subtask A is monolingual sentiment classification with 12 tracks, one per language; subtask B is multilingual sentiment classification using the tracks in subtask A; and subtask C is zero-shot sentiment classification. We present the results and findings of subtasks A, B, and C, and we also release the code on GitHub. Our goal is to leverage low-resource tweet data using the pre-trained Afro-xlmr-large, AfriBERTa-Large, Bert-base-arabic-camelbert-da-sentiment (Arabic-camelbert), Multilingual-BERT (mBERT), and BERT models for sentiment analysis of 14 African languages. The datasets for these subtasks consist of gold-standard multi-class labeled Twitter data from these languages. Our results demonstrate that the Afro-xlmr-large model performed better than the other models on most of the language datasets. Similarly, the Nigerian languages (Hausa, Igbo, and Yoruba) achieved better performance than the other languages, which can be attributed to the higher volume of data available for these languages.
Computational Sociolinguistics: A Survey
Language is a social phenomenon and variation is inherent to its social
nature. Recently, there has been a surge of interest within the computational
linguistics (CL) community in the social dimension of language. In this article
we present a survey of the emerging field of "Computational Sociolinguistics"
that reflects this increased interest. We aim to provide a comprehensive
overview of CL research on sociolinguistic themes, featuring topics such as the
relation between language and social identity, language use in social
interaction and multilingual communication. Moreover, we demonstrate the
potential for synergy between the research communities involved, by showing how
the large-scale data-driven methods that are widely used in CL can complement
existing sociolinguistic studies, and how sociolinguistics can inform and
challenge the methods and assumptions employed in CL studies. We hope to convey
the possible benefits of a closer collaboration between the two communities and
conclude with a discussion of open challenges.
Comment: To appear in Computational Linguistics. Accepted for publication: 18th February, 201