Multilingual Language Models are not Multicultural: A Case Study in Emotion
Emotions are experienced and expressed differently across the world. In order
to use Large Language Models (LMs) for multilingual tasks that require
emotional sensitivity, LMs must reflect this cultural variation in emotion. In
this study, we investigate whether the widely used multilingual LMs of 2023
reflect differences in emotional expressions across cultures and languages. We
find that embeddings obtained from LMs (e.g., XLM-RoBERTa) are Anglocentric,
and generative LMs (e.g., ChatGPT) reflect Western norms, even when responding
to prompts in other languages. Our results show that multilingual LMs do not
successfully learn the culturally appropriate nuances of emotion, and we
highlight possible research directions towards correcting this.

Comment: Accepted to WASSA at ACL 2023
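The abstract's comparison rests on extracting sentence embeddings from a multilingual encoder such as XLM-RoBERTa. As a minimal sketch of that step (not the paper's actual pipeline; the checkpoint, mean-pooling strategy, and example sentences here are assumptions), one might use Hugging Face Transformers:

```python
# Sketch: obtain sentence embeddings from XLM-RoBERTa and compare an
# English emotion expression with a translation. Pooling choice and
# example sentences are illustrative assumptions, not the paper's setup.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base")

def embed(text: str) -> torch.Tensor:
    """Mean-pool the final hidden states into one sentence vector."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)            # (768,)

e_en = embed("I feel ashamed.")
e_ja = embed("恥ずかしいです。")  # "I feel embarrassed/ashamed." in Japanese
cos = torch.nn.functional.cosine_similarity(e_en, e_ja, dim=0)
print(f"cosine similarity: {cos.item():.3f}")
```

Embeddings produced this way can then be probed for whether emotion terms from different languages cluster around English-language norms, which is the kind of Anglocentricity the study reports.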