    Affective Music Information Retrieval

    Much of the appeal of music lies in its power to convey emotions and moods and to evoke them in listeners. Consequently, the past decade has witnessed growing interest in modeling emotion from musical signals within the music information retrieval (MIR) community. In this article, we present a novel generative approach to music emotion modeling, with a specific focus on the valence-arousal (VA) dimensional model of emotion. The presented generative model, called acoustic emotion Gaussians (AEG), better accounts for the subjectivity of emotion perception through the use of probability distributions. Specifically, it learns from the emotion annotations of multiple subjects a Gaussian mixture model in the VA space, with prior constraints derived from the corresponding acoustic features of the training music pieces. Such a computational framework is technically sound, capable of learning in an online fashion, and thus applicable to a variety of applications, including user-independent (general) and user-dependent (personalized) emotion recognition as well as emotion-based music retrieval. We report evaluations of these applications of AEG on a large-scale emotion-annotated corpus, AMG1608, to demonstrate the effectiveness of AEG and to showcase how evaluations are conducted in research on emotion-based MIR. Directions for future work are also discussed.
    Comment: 40 pages, 18 figures, 5 tables, author version
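
    The abstract sketches AEG at a high level: a Gaussian mixture over the VA plane learned from multi-subject annotations, with acoustic features tied to the mixture. Below is a minimal Python sketch of that idea; it is not the paper's EM formulation, and the data shapes, variable names, and the logistic-regression stand-in for the acoustic prior are all illustrative assumptions.

```python
# AEG-style pipeline sketch (NOT the paper's exact derivation):
# a shared Gaussian mixture over valence-arousal (VA) annotations,
# with mixture responsibilities predicted from acoustic features.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: 100 clips, 5 annotators each, 20 acoustic features per clip.
n_clips, n_annotators, n_feats, n_components = 100, 5, 20, 8
va = rng.normal(size=(n_clips * n_annotators, 2))        # per-annotation (valence, arousal)
clip_ids = np.repeat(np.arange(n_clips), n_annotators)
acoustic = rng.normal(size=(n_clips, n_feats))           # per-clip acoustic features

# 1) Learn a global GMM over all annotations: each component acts as a
#    latent affective state in VA space.
gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                      random_state=0).fit(va)

# 2) Represent each clip by the mean posterior over components of its
#    annotations -- a soft, subjectivity-aware emotion label.
resp = gmm.predict_proba(va)                             # (N_annotations, K)
clip_resp = np.stack([resp[clip_ids == c].mean(axis=0) for c in range(n_clips)])

# 3) Tie acoustics to emotion: predict the dominant component from the
#    acoustic features (a crude stand-in for AEG's acoustic prior).
clf = LogisticRegression(max_iter=1000).fit(acoustic, clip_resp.argmax(axis=1))

# Inference: a new clip's predicted component weights define a mixture of
# Gaussians in VA space, i.e., a distribution over perceived emotion.
new_clip = rng.normal(size=(1, n_feats))
weights = clf.predict_proba(new_clip)[0]
va_mean = weights @ gmm.means_[clf.classes_]             # expected (valence, arousal)
print("Predicted VA mean:", va_mean)
```

    Because the output is a full mixture rather than a point estimate, both the user-independent and emotion-based retrieval uses mentioned in the abstract reduce to comparing distributions in VA space.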

    A cross-cultural study of mood in K-POP Songs

    Prior research suggests that music mood is one of the most important criteria when people look for music, but the perception of mood may be subjective and can be influenced by many factors, including the listener's cultural background. In recent years, the number of studies of music mood perception across cultural groups, and of automated mood classification of music from different cultures, has been increasing. However, there is not yet a well-established testbed for evaluating cross-cultural tasks in Music Information Retrieval (MIR). Moreover, most existing datasets in MIR consist mainly of Western music, and the cultural backgrounds of the annotators were either not taken into consideration or limited to a single cultural group. In this study, we built a collection of 1,892 K-pop (Korean pop) songs with mood annotations collected from both Korean and American listeners, based on three different mood models. We analyze the differences and similarities between the mood judgments of the two listener groups and propose potential MIR tasks that can be evaluated on this dataset. © Xiao Hu, Jin Ha Lee, Kahyun Choi, J. Stephen Downie.
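
    One concrete form the cross-group comparison in this abstract could take is a chance-corrected agreement analysis between the two listener groups' categorical mood labels. The sketch below assumes a flat per-(song, group) majority-label layout; the file name, column names, and group values are assumptions, not the authors' released format.

```python
# Agreement between Korean and American listeners' mood labels.
# CSV layout is an ASSUMED example: columns song_id, group, mood,
# one row per (song, listener group) with the group's majority label.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

df = pd.read_csv("kpop_mood_annotations.csv")  # hypothetical file
pivot = df.pivot(index="song_id", columns="group", values="mood").dropna()

# Chance-corrected agreement between the two listener groups.
kappa = cohen_kappa_score(pivot["korean"], pivot["american"])
print(f"Cohen's kappa (Korean vs. American mood labels): {kappa:.3f}")

# Per-mood confusion table: where do the two groups diverge?
print(pd.crosstab(pivot["korean"], pivot["american"]))
```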

    A multi-genre model for music emotion recognition using linear regressors

    Making the link between human emotion and music is challenging. Our aim was to produce an efficient system that emotionally rates songs from multiple genres. To achieve this, we employed a series of online self-report studies, utilising Russell's circumplex model. The first study (n = 44) identified audio features that map to arousal and valence for 20 songs. From these, we constructed a set of linear regressors. The second study (n = 158) measured the efficacy of our system, utilising 40 new songs to create a ground truth. Results show our approach may be effective at emotionally rating music, particularly in the prediction of valence.
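
    The method as described reduces to one linear regressor per affect dimension, fit on per-song audio features against mean self-report ratings. A minimal sketch follows; the features and targets are synthetic stand-ins, since the study's exact feature set and data are not reproduced here.

```python
# Per-dimension linear regression for valence/arousal prediction.
# All data below is SYNTHETIC; shapes and feature meanings are assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n_songs, n_feats = 60, 10
X = rng.normal(size=(n_songs, n_feats))               # e.g., tempo, mode, RMS energy, ...
true_w = rng.normal(size=(n_feats, 2))
y = X @ true_w + 0.3 * rng.normal(size=(n_songs, 2))  # columns: [valence, arousal]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=1)

# LinearRegression handles multi-output targets directly:
# one coefficient vector per affect dimension, fit jointly.
model = LinearRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R^2 valence:", r2_score(y_te[:, 0], pred[:, 0]))
print("R^2 arousal:", r2_score(y_te[:, 1], pred[:, 1]))
```

    Held-out evaluation mirrors the paper's two-study design: fit on the first set of songs, score on songs the regressors never saw.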

    Predicting the emotions expressed in music

    • …