Affective Music Information Retrieval
Much of the appeal of music lies in its power to convey emotions/moods and to
evoke them in listeners. In consequence, the past decade witnessed a growing
interest in modeling emotions from musical signals in the music information
retrieval (MIR) community. In this article, we present a novel generative
approach to music emotion modeling, with a specific focus on the
valence-arousal (VA) dimension model of emotion. The presented generative
model, called \emph{acoustic emotion Gaussians} (AEG), better accounts for the
subjectivity of emotion perception by the use of probability distributions.
Specifically, it learns from the emotion annotations of multiple subjects a
Gaussian mixture model in the VA space with prior constraints on the
corresponding acoustic features of the training music pieces. Such a
computational framework is technically sound, capable of learning in an online
fashion, and thus applicable to a variety of applications, including
user-independent (general) and user-dependent (personalized) emotion
recognition and emotion-based music retrieval. We report evaluations of the
aforementioned applications of AEG on a large-scale emotion-annotated corpus,
AMG1608, to demonstrate the effectiveness of AEG and to showcase how
evaluations are conducted for research on emotion-based MIR. Directions of
future work are also discussed.
Comment: 40 pages, 18 figures, 5 tables, author version
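The distribution-based view of emotion described above can be illustrated with a much-reduced sketch. AEG itself learns a full Gaussian mixture in VA space conditioned on acoustic features; the fragment below only fits a single 2D Gaussian to the VA ratings that several subjects might give one clip (the annotation values are invented for illustration), which already captures the idea of modelling perceived emotion as a distribution rather than a point.

```python
import math

def fit_gaussian_2d(points):
    """Fit a single 2D Gaussian (mean, covariance) to annotation points."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    return (mx, my), ((sxx, sxy), (sxy, syy))

def density(point, mean, cov):
    """Evaluate the 2D Gaussian density at a (valence, arousal) point."""
    (a, b), (_, d) = cov
    det = a * d - b * b
    inv = ((d / det, -b / det), (-b / det, a / det))
    dx, dy = point[0] - mean[0], point[1] - mean[1]
    q = dx * (inv[0][0] * dx + inv[0][1] * dy) \
        + dy * (inv[1][0] * dx + inv[1][1] * dy)
    return math.exp(-0.5 * q) / (2 * math.pi * math.sqrt(det))

# Hypothetical VA ratings of one clip from five subjects
# (valence, arousal, each in [-1, 1]); not real AMG1608 data.
annotations = [(0.6, 0.2), (0.5, 0.4), (0.7, 0.3), (0.4, 0.1), (0.55, 0.25)]
mean, cov = fit_gaussian_2d(annotations)
```

The spread of the fitted covariance is what encodes the subjectivity of the ratings; AEG generalizes this by tying a mixture of such Gaussians to the acoustic features of the music.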
Using fuzzy logic to handle the users' semantic descriptions in a music retrieval system
This paper provides an investigation of the potential application of fuzzy logic to semantic music recommendation. We show that a set of affective/emotive, structural and kinaesthetic descriptors can be used to formulate a query which allows the retrieval of intended music. A semantic music recommendation system was built, based on an elaborate study of potential users of music information retrieval systems. In this study, the descriptors that best characterize the user's understanding of music were analysed. Significant relationships between expressive and structural descriptions of music were found. A straightforward fuzzy logic methodology was then applied to handle the quality ratings associated with the descriptions. Rigorous real-world testing of the semantic music recommendation system revealed high user satisfaction.
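As a rough illustration of how fuzzy logic can handle such quality ratings, the sketch below defines triangular membership functions over a hypothetical 0-10 rating scale for a descriptor such as "calm". The fuzzy sets, their breakpoints and the descriptor name are invented for the example, not taken from the paper.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: rises from a to peak b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical fuzzy sets over a 0-10 quality-rating scale
# for the descriptor "calm".
def somewhat_calm(r):
    return triangular(r, 2.0, 5.0, 8.0)

def very_calm(r):
    # Right breakpoint lies past the end of the scale, giving a shoulder.
    return triangular(r, 6.0, 10.0, 14.0)

rating = 7.0
print(somewhat_calm(rating), very_calm(rating))
```

A query such as "very calm" can then rank pieces by their membership degree rather than by a hard threshold on the rating.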
Audio Features Affected by Music Expressiveness
Within a Music Information Retrieval perspective, the goal of the study
presented here is to investigate the impact on sound features of the musician's
affective intention, namely when trying to intentionally convey emotional
contents via expressiveness. A preliminary experiment has been performed
involving tuba players. The recordings have been analysed by extracting a
variety of features, which have been subsequently evaluated by combining both
classic and machine learning statistical techniques. Results are reported and
discussed.
Comment: Submitted to ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2016), Pisa, Italy, July 17-21, 2016
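The abstract does not name the extracted features; as a minimal sketch of the kind of low-level descriptors commonly used in such studies, the fragment below computes RMS energy and zero-crossing rate on a synthetic tone standing in for a tuba recording. The feature choice and signal are assumptions for illustration only.

```python
import math

def rms(frame):
    """Root-mean-square energy of one audio frame."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose sign differs."""
    crossings = sum(1 for i in range(1, len(frame))
                    if (frame[i - 1] >= 0) != (frame[i] >= 0))
    return crossings / (len(frame) - 1)

# Synthetic 440 Hz tone at an 8 kHz sample rate, 100 ms long,
# standing in for a real recording.
sr = 8000
frame = [0.5 * math.sin(2 * math.pi * 440 * n / sr) for n in range(sr // 10)]
print(rms(frame), zero_crossing_rate(frame))
```

In an expressiveness study, features like these would be extracted per performance and compared across intended emotional contents.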
Using fuzzy logic to handle the semantic descriptions of music in a content-based retrieval system
This paper explores the potential use of fuzzy logic for semantic music recommendation. We show that a set of affective/emotive, structural and kinaesthetic descriptors can be used to formulate a query which allows the retrieval of intended music. A semantic music recommendation system was built, based on an elaborate study of potential users and an analysis of the semantic descriptors that best characterize the user’s understanding of music. Significant relationships between expressive and structural semantic descriptions of music were found. Fuzzy logic was then applied to handle the
quality ratings associated with the semantic descriptions. A working semantic music recommendation system was tested and evaluated. Real-world testing revealed high user satisfaction.
Creative professional users' musical relevance criteria
Although known-item searching for music can be dealt with by searching metadata using existing text search techniques, human subjectivity and variability within the music itself make it very difficult to search for unknown items. This paper examines these problems within the context of text retrieval and music information retrieval. The focus is on ascertaining a relationship between music relevance criteria and those relating to relevance judgements in text retrieval. A data-rich collection of relevance judgements by creative professionals searching for unknown musical items to accompany moving images using real-world queries is analysed. The participants in our observations are found to take a socio-cognitive approach and to use a range of content- and context-based criteria. These criteria correlate strongly with those arising from previous text retrieval studies, despite the many differences between music and text in their actual content.
A study of the information needs of the users of a folk music library and the implications for the design of a digital library system
A qualitative study of user information needs is reported, based on a purposive sample of users and potential users of the Vaughan Williams Memorial Library, a small specialist folk music library in North London. The study set out to establish what the users' (both existing and potential) information needs are, so that the library's online service may take them into account in its design. The information needs framework proposed by Nicholas (2000) is used as an analytical tool to this end. The demographics of the users were examined in order to establish four user groups: Performer, Academic, Professional and Enthusiast. Important information needs were found to be based on social interaction, and key resources of the library were its staff, the concentration of the collection and the library's social nature. A set of broad design requirements is proposed based on the analysis, and the study also provides some insights into the issue of musical relevance, which are discussed.
One's own soundtrack: Affective music synthesis
Computer music usually sounds mechanical; hence, if the musicality and musical expression of virtual actors could be enhanced according to the user's mood, the quality of experience would be amplified. We present a solution based on improvisation using cognitive models, case-based reasoning (CBR) and fuzzy values acting on close-to-affect-target musical notes retrieved from CBR per context. It modifies music pieces according to the interpretation of the user's emotive state as computed by the emotive input acquisition component of the CALLAS framework. The CALLAS framework incorporates the Pleasure-Arousal-Dominance (PAD) model, which reflects the emotive state of the user and represents the criteria for the music affectivisation process. Using combinations of positive and negative states for affective dynamics, the octants of temperament space as specified by this model are stored as base reference emotive states in the case repository, each case including a configurable mapping of affectivisation parameters. Suitable previous cases are selected and retrieved by the CBR subsystem to compute solutions for new cases, whose affect values control the music synthesis process, allowing for a level of interactivity that makes for an interesting environment in which to experiment and learn about expression in music.
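As an illustrative sketch of the octant indexing described above, the fragment below maps a PAD triple to one of the eight octants of temperament space, using Mehrabian's commonly cited temperament labels. The function name and its use as a case-retrieval key are assumptions for the example, not details of the CALLAS implementation.

```python
def pad_octant(p, a, d):
    """Map a PAD triple (each value in [-1, 1]) to one of the eight
    octants of temperament space, usable as a case-retrieval key."""
    names = {
        (True, True, True): "exuberant",      # +P +A +D
        (True, True, False): "dependent",     # +P +A -D
        (True, False, True): "relaxed",       # +P -A +D
        (True, False, False): "docile",       # +P -A -D
        (False, True, True): "hostile",       # -P +A +D
        (False, True, False): "anxious",      # -P +A -D
        (False, False, True): "disdainful",   # -P -A +D
        (False, False, False): "bored",       # -P -A -D
    }
    return names[(p >= 0, a >= 0, d >= 0)]

print(pad_octant(0.4, 0.7, 0.2))  # exuberant
```

A CBR case base keyed on these octants could then store one base reference emotive state, with its affectivisation parameters, per octant.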