Predicting music emotion with social media discourse

Abstract

Predicting the average affect of a piece of music is a task of recent interest in music information retrieval. We investigate the use of sentiment analysis on online social media conversations to predict a song's valence and arousal. Using four music emotion datasets (DEAM, AMG1608, Deezer, and PMEmo), we build a corpus of social media commentary surrounding the songs in these datasets by extracting comments from YouTube, Twitter, and Reddit. We compare two learning approaches: a bag-of-words model that uses dictionaries of affective terms to extract emotive features, and a DistilBERT transformer model fine-tuned on our social media discourse to perform comment-level valence and arousal prediction directly. We find that transformer models are better suited to predicting music emotion directly from social media conversations.
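The bag-of-words approach described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy lexicon values and the `score_comment` helper are invented for the example, whereas the paper uses full dictionaries of affective terms rated for valence and arousal.

```python
import re

# Toy affective lexicon: word -> (valence, arousal), on a hypothetical 1-9 scale.
# The actual values and vocabulary here are illustrative only.
LEXICON = {
    "love": (8.0, 6.4),
    "sad": (2.1, 3.8),
    "calm": (6.9, 2.0),
    "energy": (6.5, 7.2),
}

def score_comment(text):
    """Average the valence/arousal ratings of lexicon words found in a comment.

    Returns (valence, arousal), or None when no lexicon word appears.
    """
    tokens = re.findall(r"[a-z']+", text.lower())
    hits = [LEXICON[t] for t in tokens if t in LEXICON]
    if not hits:
        return None
    valence = sum(v for v, _ in hits) / len(hits)
    arousal = sum(a for _, a in hits) / len(hits)
    return valence, arousal

print(score_comment("I love this song, so much energy!"))  # -> (7.25, 6.8)
```

Comment-level scores like these would then be aggregated per song and fed to a regressor as emotive features; the transformer alternative instead predicts valence and arousal directly from the raw comment text.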