
    Musemo: Express Musical Emotion Based on Neural Network

    Department of Urban and Environmental Engineering (Convergence of Science and Arts)
    Music elicits emotional responses, which enable people to empathize with the emotional states induced by music, experience changes in their current feelings, receive comfort, and relieve stress (Juslin & Laukka, 2004). Music emotion recognition (MER) is a field of research that extracts emotions from music through various systems and methods. Interest in this field is increasing as researchers try to use it for psychiatric purposes. In order to extract emotions from music, MER requires music and an emotion label for each piece. Many MER studies use emotion labels that were not designed specifically for music, such as Russell's circumplex model of affect (Russell, 1980) and Ekman's six basic emotions (Ekman, 1999). However, Zentner, Grandjean, and Scherer suggest that the emotions commonly evoked by music are concentrated in specific areas rather than spread across the entire spectrum of emotions (Zentner, Grandjean, & Scherer, 2008). Thus, existing MER studies struggle with emotion labels that are not widely agreed upon by musicians and listeners. This study proposes a musical emotion recognition model, "Musemo", based on a convolutional neural network that follows the Geneva Emotional Music Scale proposed by music psychologists. We evaluate the accuracy of the model by varying the length of the music samples used as input to Musemo and achieved a root mean squared error (RMSE) of up to 14.91%. We also examine the correlations among emotion labels by reducing Musemo's emotion output vector to two dimensions through principal component analysis. The results are similar to those of Vuoskoski and Eerola's analysis of the Geneva Emotional Music Scale (Vuoskoski & Eerola, 2011). We hope that this study can be expanded to inform treatments that comfort those in need of psychological empathy in modern society.
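    As a rough illustration of the dimensionality-reduction step described in this abstract, the sketch below projects a model's per-song emotion outputs onto two principal components. It is not the authors' code; the nine-label output shape, the random data, and all variable names are assumptions made for the example.

        # Minimal sketch of the PCA step: project per-song emotion outputs
        # onto two principal components to inspect how the emotion labels
        # relate to one another. Data and shapes are illustrative only.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        emotion_outputs = rng.random((200, 9))         # (n_songs, n_emotion_labels)

        pca = PCA(n_components=2)
        songs_2d = pca.fit_transform(emotion_outputs)  # each song as a 2-D point

        # The loadings place each emotion label in the 2-D space, which is
        # what reveals correlations among the labels.
        label_coords = pca.components_.T               # (n_emotion_labels, 2)
        print(pca.explained_variance_ratio_, label_coords.shape)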

    Crowdsourcing Emotions in Music Domain

    An important source of intelligence for music emotion recognition today comes from user-provided community tags about songs or artists. Recent crowdsourcing approaches, such as harvesting social tags, designing collaborative games and web services, or using Mechanical Turk, are becoming popular in the literature. They provide a cheap, quick, and efficient method, in contrast to professional labeling of songs, which is expensive and does not scale to large datasets. In this paper we discuss the viability of various crowdsourcing instruments, giving examples from research works. We also share our own experience, illustrating the steps we followed using tags collected from Last.fm to create two music mood datasets, which have been made public. While processing affect tags from Last.fm, we observed that they tend to be biased towards positive emotions; the resulting datasets thus contain more positive songs than negative ones.
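    As a sketch of the tag-harvesting step this abstract describes, the snippet below queries Last.fm's public API for a track's community tags. The API key is a placeholder, the exact response layout should be checked against the Last.fm documentation, and the mood-filtering note at the end is an assumption rather than the paper's procedure.

        # Hedged sketch: fetch community tags for one track from Last.fm.
        import requests

        API_URL = "https://ws.audioscrobbler.com/2.0/"
        API_KEY = "YOUR_LASTFM_API_KEY"  # placeholder, not a real key

        def top_tags(artist: str, track: str) -> list[str]:
            """Return the community tags Last.fm reports for a single track."""
            params = {
                "method": "track.gettoptags",
                "artist": artist,
                "track": track,
                "api_key": API_KEY,
                "format": "json",
            }
            resp = requests.get(API_URL, params=params, timeout=10)
            resp.raise_for_status()
            tags = resp.json().get("toptags", {}).get("tag", [])
            return [t["name"].lower() for t in tags]

        # A small mood lexicon (assumed here) can then keep only affect tags,
        # e.g. "happy", "sad", "calm", "angry", before building a dataset.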

    Emotion and Sentiment in Social and Expressive Media: Introduction to the special issue

    This is the author's version of a work that was accepted for publication in Information Processing and Management. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Information Processing and Management 52 (2016) 1-4, DOI 10.1016/j.ipm.2015.11.002.
    Social and expressive media represent a challenge and a push forward for research on emotion and sentiment analysis. The advent of social media has brought about new paradigms of interaction that foster first-person engagement and crowdsourced contents: the subjective dimension moves to the foreground, opening the way to the emergence of an affective component within a dynamic corpus of digitized contents created and enriched by the users. Expressive media, which play a key role in fields related to creativity, such as figurative arts, music or drama, gather multimedia contents into online social environments, joining the social dimension with the aims of artistic creation and self-expression. Artistic creation and performance seem to be a very interesting testbed for cross-validating and possibly integrating approaches, models and tools for automatically analyzing emotion and sentiment. In fact, in such contexts the social and affective dimensions (emotions and feelings) naturally emerge (Silvia, 2005); think, for instance, of visitors' feedback to a real or virtual art exhibition, or of the audience–performance interaction (...) In light of these considerations, this special issue focuses on the presentation and discussion of a set of novel computational approaches to the analysis of emotion and sentiment in social and expressive media.
    Paolo Rosso has been partially funded by the WIQ–EI IRSES project (Grant no. 269180) within the EC FP7 Marie Curie People Framework and by the DIANA-APPLICATIONS – Finding Hidden Knowledge in Texts: Applications project (TIN2012-38603-C02-01). The last phase of the work of Viviana Patti was carried out at the Universitat Politècnica de València in the framework of a three-month fellowship of the University of Turin within the World Wide Style (WWS) Program, Second Edition, co-funded by Fondazione CRT.
    Rosso, P.; Bosco, C.; Damiano, R.; Patti, V.; Cambria, E. (2016). Emotion and Sentiment in Social and Expressive Media: Introduction to the special issue. Information Processing and Management, 52(1), 1-4. https://doi.org/10.1016/j.ipm.2015.11.002

    Musical preference but not familiarity influences subjective ratings and psychophysiological correlates of music-induced emotions

    Listening to music prompts strong emotional reactions in listeners, but relatively little research has focused on individual differences. This study addresses the role of musical preference and familiarity in emotions induced through music. A sample of 50 healthy participants (25 women) listened to 42 excerpts from the FMMS for 8 s each while their autonomic and facial EMG responses were continuously recorded. Affective dimensions (hedonic valence, tension arousal, and energy arousal) and musical preference were then rated on a 9-point scale, and familiarity on a 3-point scale. It was hypothesized that preferred and familiar music would be evaluated as more pleasant, more energetic, and less tense, and would prompt an increase in autonomic and zygomatic responses and a decrease in corrugator activity. Results partially confirmed our hypothesis, showing a strong effect of musical preference, but not familiarity, on emotion correlates. Specifically, musical preference predicted valence ratings, as well as HR acceleration and facial EMG activity. Overall, the current findings suggest a strong influence of musical preference on music-induced emotions, particularly in modulating hedonic valence correlates. Our findings add evidence about the role of individual differences in emotional processing through music and suggest the importance of considering these variables in future studies.
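    To make the kind of relationship tested here concrete, the toy regression below relates preference ratings to valence ratings for simulated excerpts. It is only a hedged illustration with made-up numbers, not the authors' statistical analysis.

        # Illustration only: does preference predict valence? Data are simulated.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        preference = rng.integers(1, 10, size=42).astype(float)     # 9-point scale
        valence = 0.6 * preference + rng.normal(0.0, 1.0, size=42)  # simulated ratings

        result = stats.linregress(preference, valence)
        print(f"slope={result.slope:.2f}, r={result.rvalue:.2f}, p={result.pvalue:.3f}")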

    Music emotion recognition: a multimodal machine learning approach

    Music emotion recognition (MER) is an emerging domain of the Music Information Retrieval (MIR) scientific community, and searching for music by emotion is one of the selection methods most preferred by web users. As the world goes digital, the musical content in online databases such as Last.fm has expanded exponentially, which requires substantial manual effort to manage and keep it updated. Therefore, the demand for innovative and adaptable search mechanisms, which can be personalized according to users' emotional state, has gained increasing consideration in recent years. This thesis addresses the music emotion recognition problem by presenting several classification models fed by textual features as well as audio attributes extracted from the music. In this study, we build both supervised and semi-supervised classification designs across four research experiments that address the emotional role of audio features, such as tempo, acousticness, and energy, and the impact of textual features extracted by two different approaches, TF-IDF and Word2Vec. Furthermore, we propose a multi-modal approach that uses a combined feature set consisting of features from the audio content as well as from context-aware data. For this purpose, we generated a ground-truth dataset containing over 1,500 labeled song lyrics, together with an unlabeled corpus of more than 2.5 million Turkish documents, in order to build an accurate automatic emotion classification system. The analytical models were built by applying several algorithms to cross-validated data using Python. In conclusion, the best performance attained was 44.2% accuracy when employing only audio features, whereas with textual features better performances were observed, with accuracy scores of 46.3% and 51.3% for the supervised and semi-supervised learning paradigms, respectively. Finally, even though we created a comprehensive feature set combining audio and textual features, this approach did not yield any significant improvement in classification performance.
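    As a hedged sketch of the supervised text branch described above (TF-IDF features from lyrics fed to a cross-validated classifier), the snippet below uses a tiny placeholder lyric set and emotion labels; it is not the thesis code, and the choice of logistic regression is an assumption.

        # Minimal sketch: TF-IDF lyric features + a cross-validated classifier.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        lyrics = [
            "sunny day dancing all night", "smile and sing with friends",
            "tears fall alone in the rain", "empty room and a broken heart",
            "hold me close my heart is calm", "slow waves under a quiet moon",
            "screaming rage burning inside", "fists against the wall again",
        ]
        labels = ["happy", "happy", "sad", "sad",
                  "relaxed", "relaxed", "angry", "angry"]  # placeholder classes

        model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
        scores = cross_val_score(model, lyrics, labels, cv=2)  # accuracy per fold
        print(scores.mean())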

    The Effects of Music-Induced Emotion on Memory

    Emotion can play a highly influential role when it comes to enhancing memory. Research has shown that emotional valence and emotional arousal are two key aspects of emotion responsible for facilitating this (APA, 2013). However, various studies have found contradictory results as to which type of valence (positive or negative) and which level of arousal (high or low) have the greatest memory-enhancing effects. Moreover, the majority of previous research has specifically investigated this emotion-memory relationship in terms of memory for emotional content. The present study aims to address this gap by separating emotion from the to-be-learned stimuli, instead investigating how one's emotional state while encoding neutral information impacts memory for that information later on. After inducing specific emotional states via exposure to affectively rated music, subjects were exposed to a video reel composed of various neutral clips of random scenes. Memory was then measured based on performance in a subsequently presented "yes"/"no" recognition task. Characterizing conditions based on the four arousal-valence quadrants of Russell's circumplex model of emotion (1980): high arousal-positive valence, high arousal-negative valence, low arousal-positive valence, and low arousal-negative valence, I predicted that, compared to the other groups, subjects in the high arousal-negative valence condition would perform best on the memory task. Results did not support this hypothesis, yielding no significant differences in memory performance between the four conditions. The limitations of this study design are considered and suggestions are made for future research.
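    Two concrete pieces of the design described here are the quadrant assignment and the scoring of the yes/no recognition task. The sketch below shows one plausible way to implement both, with placeholder scales and simulated responses rather than the study's actual materials or analysis.

        # Hedged illustration: quadrant assignment and recognition scoring.
        def quadrant(valence: float, arousal: float, midpoint: float = 5.0) -> str:
            """Map ratings (assumed 9-point scales) to a circumplex quadrant."""
            v = "positive" if valence >= midpoint else "negative"
            a = "high" if arousal >= midpoint else "low"
            return f"{a} arousal-{v} valence"

        def recognition_scores(responses, old_items):
            """Hit and false-alarm rates for 'yes' answers to old vs. new clips."""
            hits = sum(1 for item, r in responses if r == "yes" and item in old_items)
            fas = sum(1 for item, r in responses if r == "yes" and item not in old_items)
            n_old = sum(1 for item, _ in responses if item in old_items)
            n_new = len(responses) - n_old
            return hits / n_old, fas / n_new

        print(quadrant(valence=7, arousal=8))  # -> "high arousal-positive valence"
        print(recognition_scores([("a", "yes"), ("b", "no"), ("c", "yes")], {"a", "b"}))
        # -> (0.5, 1.0): one hit out of two old clips, one false alarm on the new clip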