
    Affective Music Information Retrieval

    Much of the appeal of music lies in its power to convey emotions/moods and to evoke them in listeners. Consequently, the past decade has witnessed a growing interest in modeling emotions from musical signals in the music information retrieval (MIR) community. In this article, we present a novel generative approach to music emotion modeling, with a specific focus on the valence-arousal (VA) dimensional model of emotion. The presented generative model, called acoustic emotion Gaussians (AEG), better accounts for the subjectivity of emotion perception through the use of probability distributions. Specifically, it learns from the emotion annotations of multiple subjects a Gaussian mixture model in the VA space, with prior constraints on the corresponding acoustic features of the training music pieces. Such a computational framework is technically sound and capable of learning in an online fashion, and is thus applicable to a variety of applications, including user-independent (general) and user-dependent (personalized) emotion recognition and emotion-based music retrieval. We report evaluations of these applications of AEG on a large-scale emotion-annotated corpus, AMG1608, to demonstrate the effectiveness of AEG and to showcase how evaluations are conducted for research on emotion-based MIR. Directions for future work are also discussed. (Comment: 40 pages, 18 figures, 5 tables, author version.)
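    At the heart of AEG is a Gaussian mixture model fit to multiple subjects' valence-arousal annotations. Below is a minimal sketch of that VA-space idea only, using scikit-learn and synthetic ratings; the full AEG model additionally conditions the mixture on the clip's acoustic features, which is omitted here.

```python
# Sketch: fit a Gaussian mixture to valence-arousal (VA) annotations from
# multiple annotators, approximating the perceived-emotion distribution of
# one music clip. Data are synthetic and illustrative only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical annotations: 30 subjects rate one clip in VA space ([-1, 1]^2).
va_annotations = rng.normal(loc=[0.4, 0.6], scale=0.15, size=(30, 2))

# Model the perceived emotion of the clip as a 2-component GMM over the VA plane.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(va_annotations)

# Density at a query (valence, arousal) point -- one way such a model supports
# emotion-based retrieval: rank clips by the likelihood of a target VA point.
query = np.array([[0.5, 0.5]])
log_density = gmm.score_samples(query)
print("log p(valence=0.5, arousal=0.5) =", log_density[0])
```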

    Tune in to your emotions: a robust personalized affective music player

    The emotional power of music is exploited in a personalized affective music player (AMP) that selects music for mood enhancement. A biosignal approach is used to measure listeners’ personal emotional reactions to their own music as input for affective user models. Regression and kernel density estimation are applied to model the physiological changes the music elicits. Using these models, personalized music selections based on an affective goal state can be made. The AMP was validated in real-world trials over the course of several weeks. Results show that our models can cope with noisy situations and handle large inter-individual differences in the music domain. The AMP augments music listening by enabling automated affect guidance. Our approach provides valuable insights for affective computing and user modeling, for which the AMP is a suitable carrier application.
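    A rough sketch of the two modeling ingredients named above, regression and kernel density estimation, is given below under assumed data: the song features, the single scalar "response," and the selection rule are illustrative stand-ins, not the authors' exact setup.

```python
# Sketch: a personalized regression from song features to a listener's
# physiological response, plus a KDE over observed responses.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Hypothetical training data: per-song audio features and the physiological
# change (e.g. skin conductance) each song elicited in this listener.
song_features = rng.normal(size=(50, 4))  # e.g. tempo, energy, ...
responses = song_features @ np.array([0.5, -0.2, 0.1, 0.0]) \
    + rng.normal(scale=0.1, size=50)

# Personalized regression model: features -> expected physiological change.
reg = LinearRegression().fit(song_features, responses)

# KDE captures this listener's distribution of responses, which helps judge
# how (a)typical a predicted reaction is for this person.
kde = gaussian_kde(responses)

# Affective goal state: pick the candidate song whose predicted response is
# closest to the desired change (here: mild calming, -0.3).
candidates = rng.normal(size=(10, 4))
predicted = reg.predict(candidates)
goal = -0.3
best = int(np.argmin(np.abs(predicted - goal)))
print(f"selected song {best}, predicted response {predicted[best]:.2f}, "
      f"density at prediction {kde(predicted[best])[0]:.2f}")
```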

    Current Challenges and Visions in Music Recommender Systems Research

    Music recommender systems (MRS) have experienced a boom in recent years, thanks to the emergence and success of online streaming services, which nowadays make almost all of the world's music available at the user's fingertips. While today's MRS considerably help users find interesting music in these huge catalogs, MRS research still faces substantial challenges. In particular, when it comes to building, incorporating, and evaluating recommendation strategies that integrate information beyond simple user-item interactions or content-based descriptors and dig deep into the very essence of listener needs, preferences, and intentions, MRS research becomes a major endeavor and related publications remain sparse. The purpose of this trends and survey article is twofold. First, we identify and shed light on what we believe are the most pressing challenges MRS research is facing, from both academic and industry perspectives. We review the state of the art towards solving these challenges and discuss its limitations. Second, we detail possible future directions and visions we contemplate for the further evolution of the field. The article should therefore serve two purposes: giving the interested reader an overview of current challenges in MRS research, and providing guidance for young researchers by identifying interesting, yet under-researched, directions in the field.
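    One concrete reading of "beyond simple user-item interactions" is to blend a collaborative-filtering score with content similarity and a listener-intent weight. The sketch below is a hypothetical illustration of such blending; the factor matrices, descriptors, and weights are all assumed, not taken from the article.

```python
# Sketch: hybrid scoring that mixes collaborative-filtering affinity with
# content-based similarity to a seed track. All data are synthetic.
import numpy as np

rng = np.random.default_rng(2)

n_users, n_items, k = 5, 8, 3
user_factors = rng.normal(size=(n_users, k))  # e.g. from matrix factorization
item_factors = rng.normal(size=(n_items, k))
item_content = rng.normal(size=(n_items, 6))  # audio/content descriptors

def recommend(user, seed_item, intent_weight=0.5, top_n=3):
    """Score items by CF affinity plus cosine similarity to a seed track."""
    cf_scores = item_factors @ user_factors[user]
    content = item_content @ item_content[seed_item]
    content /= np.linalg.norm(item_content, axis=1) \
        * np.linalg.norm(item_content[seed_item])
    scores = (1 - intent_weight) * cf_scores + intent_weight * content
    scores[seed_item] = -np.inf  # never recommend the seed itself
    return np.argsort(scores)[::-1][:top_n]

print(recommend(user=0, seed_item=2))
```

    The `intent_weight` knob is one simple way to let an inferred listener intention (e.g. "more like this track" vs. "surprise me") shift the balance between content similarity and collaborative signals.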

    How are you doing?: emotions and personality in Facebook

    User-generated content on social media sites is a rich source of information about latent variables of their users. Proper mining of this content provides a shortcut to emotion and personality detection of users without filling out questionnaires. This in turn increases the application potential of personalized services that rely on the knowledge of such latent variables. In this paper we contribute to this emerging domain by studying the relation between emotions expressed in approximately 1 million Facebook (FB) status updates and the users' age, gender, and personality. Additionally, we investigate the relations between emotion expression and the time when the status updates were posted. In particular, we find that female users are more emotional in their status posts than male users. In addition, we find a relation between age and sharing of emotions: older FB users share their feelings more often than younger users. In terms of seasons, people post about emotions less frequently in summer. On the other hand, December is a time when people are more likely to share their positive feelings with their friends. We also examine the relation between users' personality and their posts. We find that users with an open personality express their emotions more frequently, while neurotic users are more reserved about sharing their feelings.
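    Detecting emotion expression in status updates is often done with lexicon matching. The toy sketch below shows that general technique only; the word lists and posts are invented, and the study's actual lexicon and pipeline are not specified here.

```python
# Sketch: lexicon-based counting of emotion words in status updates, the kind
# of per-post signal that can then be correlated with age, gender, or
# personality. Word lists are tiny, illustrative stand-ins.
import re

POSITIVE = {"happy", "love", "great", "excited", "wonderful"}
NEGATIVE = {"sad", "angry", "tired", "lonely", "worried"}

def emotion_counts(status):
    """Count positive and negative emotion words in one status update."""
    tokens = re.findall(r"[a-z']+", status.lower())
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return pos, neg

posts = [
    "So happy to see everyone, what a wonderful day!",
    "Tired and a bit lonely this week.",
]
for p in posts:
    print(emotion_counts(p), "-", p)
```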

    Affective Image Content Analysis: Two Decades Review and New Perspectives

    Images can convey rich semantics and induce various emotions in viewers. Recently, with the rapid advancement of emotional intelligence and the explosive growth of visual data, extensive research efforts have been dedicated to affective image content analysis (AICA). In this survey, we comprehensively review the development of AICA over the recent two decades, focusing especially on state-of-the-art methods with respect to three main challenges: the affective gap, perception subjectivity, and label noise and absence. We begin with an introduction to the key emotion representation models that have been widely employed in AICA and a description of available datasets for evaluation, with a quantitative comparison of label noise and dataset bias. We then summarize and compare the representative approaches to (1) emotion feature extraction, including both handcrafted and deep features; (2) learning methods for dominant emotion recognition, personalized emotion prediction, emotion distribution learning, and learning from noisy data or few labels; and (3) AICA-based applications. Finally, we discuss open challenges and promising research directions, such as image content and context understanding, group emotion clustering, and viewer-image interaction. (Comment: Accepted by IEEE TPAMI.)
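    Of the learning settings the survey lists, emotion distribution learning is perhaps the least familiar: each image carries a distribution over emotion categories (e.g. annotator vote shares) rather than one label, and a model is trained to match it. A minimal sketch follows, assuming a linear model on precomputed features and synthetic vote data; it illustrates the setting, not any specific surveyed method.

```python
# Sketch: emotion distribution learning with a linear-softmax model trained
# to minimize KL(target || predicted) over per-image emotion distributions.
import numpy as np

rng = np.random.default_rng(3)
n_images, n_feats, n_emotions = 100, 16, 8

features = rng.normal(size=(n_images, n_feats))
# Annotator votes per image -> target emotion distributions.
votes = rng.multinomial(10, np.ones(n_emotions) / n_emotions, size=n_images)
targets = votes / votes.sum(axis=1, keepdims=True)

W = np.zeros((n_feats, n_emotions))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.1
for _ in range(200):
    probs = softmax(features @ W)
    # Gradient of KL(target || probs) w.r.t. the logits is (probs - target).
    grad = features.T @ (probs - targets) / n_images
    W -= lr * grad

probs = softmax(features @ W)
# Clamp inside the log so zero-vote categories contribute exactly 0.
kl = (targets * np.log(np.maximum(targets, 1e-12) / probs)).sum(axis=1).mean()
print(f"mean KL after training: {kl:.3f}")
```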

    Emotional Brain-Computer Interfaces

    Research in brain-computer interfaces (BCI) has increased significantly during the last few years. In addition to their initial role as assistive devices for the physically challenged, BCIs are now proposed for a wider range of applications. As in any HCI application, BCIs can also benefit from adapting their operation to the emotional state of the user. BCIs have the advantage of access to brain activity, which can provide significant insight into the user's emotional state. This information can be utilized in two manners. (1) Knowledge of the influence of the emotional state on brain activity patterns can allow the BCI to adapt its recognition algorithms, so that the intention of the user is still correctly interpreted in spite of signal deviations induced by the subject's emotional state. (2) The ability to recognize emotions can be used in BCIs to provide the user with more natural ways of controlling the BCI through affective modulation. Thus, controlling a BCI by recollecting a pleasant memory becomes possible and can potentially lead to higher information transfer rates. These two approaches to emotion utilization in BCI are elaborated in detail in this paper within the framework of noninvasive EEG-based BCIs.
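    For a feel of what EEG-based emotion recognition computes, the sketch below derives frontal alpha asymmetry, a feature commonly used as a rough valence index in this literature. It uses synthetic signals in place of real F3/F4 recordings and is not the paper's specific pipeline.

```python
# Sketch: frontal alpha (8-13 Hz) asymmetry from two synthetic EEG channels.
import numpy as np

fs = 256                      # sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)   # 4-second epoch
rng = np.random.default_rng(4)

# Hypothetical left (F3) and right (F4) frontal channels with different
# amounts of 10 Hz alpha activity plus noise.
f3 = 1.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(scale=0.5, size=t.size)
f4 = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(scale=0.5, size=t.size)

def alpha_power(x):
    """Power in the 8-13 Hz band from the periodogram."""
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    band = (freqs >= 8) & (freqs <= 13)
    return psd[band].sum()

# Asymmetry index: log(right) - log(left); sign conventions vary by study.
asymmetry = np.log(alpha_power(f4)) - np.log(alpha_power(f3))
print(f"frontal alpha asymmetry: {asymmetry:.2f}")
```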