
    Towards a Critical Understanding of Music, Emotion and Self-Identity

    The article begins by outlining a dominant conception of these relations in sociologically informed analyses of music, which sees music primarily as a positive resource for active self-making. My argument is that this conception rests on a problematic notion of the self, and on an overly optimistic understanding of music that implicitly treats music as highly independent of negative social and historical processes. I then attempt to construct (a) a more adequately critical conception of personal identity in modern societies; and (b) a more balanced appraisal of music-society relations. I suggest two ways in which relations between self, music and society may not always be as positive or as healthy as the dominant conception suggests: 1) music is now bound up with the incorporation of authenticity and creativity into capitalism, and with intensified consumption habits; 2) emotional self-realisation through music is now linked to status competition. Interviews are analysed.

    Stacked Convolutional and Recurrent Neural Networks for Music Emotion Recognition

    This paper studies emotion recognition from musical tracks in the two-dimensional valence-arousal (V-A) emotional space. We propose a method based on convolutional (CNN) and recurrent neural networks (RNN), with significantly fewer parameters than the state-of-the-art method for the same task. We utilize one CNN layer followed by two branches of RNNs trained separately for arousal and valence. The method was evaluated using the 'MediaEval2015 emotion in music' dataset. We achieved an RMSE of 0.202 for arousal and 0.268 for valence, which is the best result reported on this dataset. Comment: Accepted at Sound and Music Computing (SMC 2017).
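
    The architecture described above lends itself to a compact sketch. The following PyTorch code is a minimal, hypothetical rendering of "one CNN layer followed by two branches of RNNs" producing per-frame arousal and valence estimates; the GRU cells, layer sizes and input shape are illustrative assumptions rather than the authors' exact configuration (and the paper trains the two branches separately, whereas they share one module here for brevity).

        import torch
        import torch.nn as nn

        class CnnRnnVA(nn.Module):
            def __init__(self, n_mels=40, conv_ch=32, hidden=64):
                super().__init__()
                # Single convolutional layer over (batch, 1, time, mel) spectrograms.
                self.conv = nn.Sequential(
                    nn.Conv2d(1, conv_ch, kernel_size=3, padding=1),
                    nn.ReLU(),
                    nn.MaxPool2d((1, 2)),  # pool along the mel axis only
                )
                rnn_in = conv_ch * (n_mels // 2)
                # Two independent recurrent branches, one per emotion dimension.
                self.rnn_arousal = nn.GRU(rnn_in, hidden, batch_first=True)
                self.rnn_valence = nn.GRU(rnn_in, hidden, batch_first=True)
                self.head_arousal = nn.Linear(hidden, 1)
                self.head_valence = nn.Linear(hidden, 1)

            def forward(self, spec):                  # spec: (batch, 1, time, mel)
                h = self.conv(spec)                   # (batch, ch, time, mel/2)
                h = h.permute(0, 2, 1, 3).flatten(2)  # (batch, time, ch*mel/2)
                a, _ = self.rnn_arousal(h)
                v, _ = self.rnn_valence(h)
                # Per-frame regression outputs for the two emotion dimensions.
                return self.head_arousal(a).squeeze(-1), self.head_valence(v).squeeze(-1)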

    Seeing sound: “How to generate visual artworks by analysing a music track and representing it in terms of emotion analysis and musical features?”

    Music and visual artwork are a valuable part of our daily life. Since both media induce human emotion, this thesis demonstrates how to convert music into visual artwork such as generative art. In particular, the project shows a method for connecting music emotion to a colour theme. The thesis describes the human emotional model based on arousal and valence and explains how colour affects our emotions. To connect music emotion to a colour theme, it presents a method for retrieving music information, including the arousal and valence of the music. To generate visual artwork from the music, it demonstrates the implementation of working software that integrates music emotion with musical characteristics such as frequency analysis. The thesis also presents how generative artwork can be applied to daily-life products, discusses learning outcomes from the project based on a practice-based research methodology, and introduces a further plan related to AI.
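
    As a concrete illustration of connecting a valence-arousal estimate to a colour theme, the sketch below maps valence to hue and arousal to saturation and brightness. The mapping itself is a hypothetical example of the idea, not the scheme used in the thesis.

        import colorsys

        def va_to_rgb(valence, arousal):
            """Map valence and arousal in [-1, 1] to an RGB tuple in [0, 1]."""
            # Sweep hue from blue (negative valence) toward red (positive valence).
            hue = 0.66 * (1.0 - (valence + 1.0) / 2.0)
            saturation = 0.4 + 0.6 * (arousal + 1.0) / 2.0   # calmer music -> paler colour
            brightness = 0.5 + 0.5 * (arousal + 1.0) / 2.0   # energetic music -> brighter colour
            return colorsys.hsv_to_rgb(hue, saturation, brightness)

        print(va_to_rgb(0.8, 0.6))    # happy/energetic -> warm, vivid colour
        print(va_to_rgb(-0.7, -0.5))  # sad/calm -> cool, muted colour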

    Affective Music Information Retrieval

    Much of the appeal of music lies in its power to convey emotions/moods and to evoke them in listeners. Consequently, the past decade witnessed a growing interest in modeling emotions from musical signals in the music information retrieval (MIR) community. In this article, we present a novel generative approach to music emotion modeling, with a specific focus on the valence-arousal (VA) dimensional model of emotion. The presented generative model, called acoustic emotion Gaussians (AEG), better accounts for the subjectivity of emotion perception through the use of probability distributions. Specifically, it learns from the emotion annotations of multiple subjects a Gaussian mixture model in the VA space with prior constraints on the corresponding acoustic features of the training music pieces. Such a computational framework is technically sound, capable of learning in an online fashion, and thus applicable to a variety of applications, including user-independent (general) and user-dependent (personalized) emotion recognition and emotion-based music retrieval. We report evaluations of the aforementioned applications of AEG on a large-scale emotion-annotated corpus, AMG1608, to demonstrate the effectiveness of AEG and to showcase how evaluations are conducted for research on emotion-based MIR. Directions of future work are also discussed. Comment: 40 pages, 18 figures, 5 tables, author version.
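
    The core idea of modeling annotation subjectivity with probability distributions can be illustrated with a plain Gaussian mixture over VA annotations. The scikit-learn sketch below omits AEG's acoustic prior constraints and online learning, so it is a simplified illustration of the distributional view, not the authors' implementation; the annotation values are hypothetical.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # Hypothetical (valence, arousal) annotations from several subjects
        # for one music clip, each value in [-1, 1].
        annotations = np.array([
            [0.6, 0.4], [0.5, 0.5], [0.7, 0.3],
            [0.2, 0.6], [0.3, 0.7], [0.6, 0.5],
        ])

        # Fit a two-component GMM: perceived emotion becomes a distribution
        # in VA space rather than a single point estimate.
        gmm = GaussianMixture(n_components=2, covariance_type="full").fit(annotations)

        # A higher log-likelihood marks a VA point more consistent with how
        # listeners perceived the clip.
        print(gmm.score_samples(np.array([[0.55, 0.45]])))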

    EMIR: A novel emotion-based music retrieval system

    Music inherently expresses emotional meaning and affects people's moods. In this paper, we present a novel EMIR (Emotional Music Information Retrieval) system that uses latent emotion elements in both music and non-descriptive queries (NDQs) to detect implicit emotional associations between users and music, so as to enhance music information retrieval (MIR). We seek to understand the latent emotional intent of queries via machine learning for emotion classification, and we compare the performance of emotion detection approaches on different feature sets. For this purpose, we extract music emotion features from lyrics and social tags crawled from the Internet, label a subset for training, model them in a high-dimensional emotion space, and recognize users' latent emotions through query emotion analysis. The similarity between queries and music is computed with a BM25 model.
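
    For reference, BM25, the ranking function named above, can be sketched in a few lines. The corpus here is a hypothetical stand-in in which each "document" is a track's emotion terms drawn from lyrics and tags; k1 and b are the customary defaults, and the system's exact variant of BM25 may differ.

        import math
        from collections import Counter

        def bm25_score(query_terms, doc_terms, corpus, k1=1.5, b=0.75):
            """Score one document against a query with the standard BM25 formula."""
            N = len(corpus)
            avgdl = sum(len(d) for d in corpus) / N
            tf = Counter(doc_terms)
            score = 0.0
            for term in query_terms:
                df = sum(1 for d in corpus if term in d)       # document frequency
                idf = math.log((N - df + 0.5) / (df + 0.5) + 1.0)
                f = tf[term]                                   # term frequency in this doc
                score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc_terms) / avgdl))
            return score

        corpus = [["sad", "melancholy", "piano"], ["happy", "dance", "upbeat"]]
        print(bm25_score(["sad", "piano"], corpus[0], corpus))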

    Emotion resonance and divergence: a semiotic analysis of music and sound in 'The Lost Thing', an animated short film, and 'Elizabeth', a film trailer

    The interpersonal meanings that music and sound contribute to film narratives may differ from or resemble the meanings made by language and image, and dynamic interactions between several modalities may generate new story messages. Such interpretive potentials of music and voice sound in motion pictures are rarely considered in social semiotic investigations of intermodality. This paper therefore presents two semiotic studies of distinct and combined music, English speech and image systems in an animated short film and a promotional film trailer. The paper considers the impact of music and voice sound on interpretations of film narrative meanings. A music system relevant to the analysis of filmic emotion is proposed. Examples show how music and intonation contribute meaning to lexical, visual and gestural elements of the cinematic spaces. Also described are relations of divergence and resonance between emotion types in various couplings of music, intonation, words and images across story phases. The research is relevant to educational knowledge about sound and to semiotic studies of multimodality.