Multimodal Sentiment Analysis of Songs Using Ensemble Classifiers
We consider the problem of performing sentiment analysis on songs by combining
audio and lyrics in a large and varied dataset, using the Million Song
Dataset for audio features and the MusicXMatch dataset for lyric information.
The algorithms presented in this thesis use ensemble classifiers as a
method of fusing data vectors from different feature spaces. We find that
multimodal classification outperforms using only audio or only lyrics. This
thesis argues that combining signals from different feature spaces can account for
inter-class inconsistencies and exploit the class-specific strengths of each modality. The experimental
results show that multimodal classification not only improves overall
classification, but is also more consistent across different classes.
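The fusion idea described above can be sketched as a late-fusion ensemble: one classifier per feature space, with their per-class scores averaged so a confident modality can compensate for an ambiguous one. The sketch below is illustrative only; the data, dimensions, and the nearest-centroid classifiers are assumptions, not the thesis's actual features or methods.

```python
import numpy as np

# Synthetic stand-ins for the two modalities (illustrative, not MSD features).
rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, size=n)                 # sentiment labels (0 / 1)
audio = rng.normal(size=(n, 8)) + y[:, None]   # audio feature vectors
lyrics = rng.normal(size=(n, 5)) + y[:, None]  # lyric feature vectors

def centroid_scores(X, y):
    """Per-class negative distance to each class centroid (higher = closer)."""
    cents = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])
    return -np.linalg.norm(X[:, None, :] - cents[None, :, :], axis=2)

# Each modality scores every example in its own feature space; averaging the
# score vectors fuses the two spaces without concatenating their features.
scores = (centroid_scores(audio, y) + centroid_scores(lyrics, y)) / 2
preds = scores.argmax(axis=1)
accuracy = (preds == y).mean()   # evaluated on training data, sketch only
```

Averaging class scores (rather than hard votes) lets a modality that separates one class well dominate for that class, which mirrors the abstract's point about class-specific performance.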