Challenges in cross-cultural/multilingual music information seeking
Understanding and meeting the needs of a broad range of music users across different cultures and languages is central to designing a global music digital library. This exploratory study examines cross-cultural/multilingual music information-seeking behaviors and reveals
some important characteristics of these behaviors by analyzing 107 authentic music information queries from the Korean knowledge-search portal Naver Knowledge iN and 150 queries from the Google Answers website. We conclude that new sets of access points must be developed to accommodate music queries that cross cultural or language boundaries
People-Powered Music: Using User-Generated Tags and Structure in Recommendations
Music recommenders often rely on experts to classify song facets like genre and mood, but user-generated folksonomies hold some advantages over expert classifications: folksonomies can reflect the same real-world vocabularies and categorizations that end users employ. We present an approach for using crowd-sourced common sense knowledge to structure user-generated music tags into a folksonomy, and describe how to use this approach to make music recommendations. We then empirically evaluate our "people-powered" structured content recommender against a more traditional recommender. Our results show that participants slightly preferred the unstructured recommender, rating more of its recommendations as "perfect" than they did for our approach. An exploration of the reasons behind participants' ratings revealed that users behaved differently when tagging songs than when evaluating recommendations, and we discuss the implications of our results for future tagging and recommendation approaches
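The abstract's core idea of structuring flat tags into a hierarchy and scoring recommendations through it can be sketched roughly as follows. The tag hierarchy, tag names, and songs below are invented for illustration and are not the paper's dataset or method; exact matches are weighted above matches that are merely related through the hierarchy.

```python
# Sketch: a tiny folksonomy built from hypothetical "is-a" links, used to
# score song recommendations by shared or related tags. All data is made up.
PARENT = {
    "delta blues": "blues",
    "chicago blues": "blues",
    "blues": "music",
    "shoegaze": "rock",
    "rock": "music",
}

def ancestors(tag):
    """Walk up the folksonomy from a tag to the root."""
    chain = []
    while tag in PARENT:
        tag = PARENT[tag]
        chain.append(tag)
    return chain

def score(query_tags, song_tags):
    """Exact tag matches count fully; hierarchy-related tags count half."""
    s = 0.0
    for q in query_tags:
        for t in song_tags:
            if t == q:
                s += 1.0  # exact tag match
            elif (set(ancestors(q)) & set(ancestors(t))) - {"music"}:
                s += 0.5  # siblings under a non-generic common ancestor
    return s

songs = {
    "Song A": ["delta blues"],
    "Song B": ["shoegaze"],
    "Song C": ["chicago blues"],
}
query = ["delta blues"]
ranked = sorted(songs, key=lambda name: -score(query, songs[name]))
print(ranked[0])  # the exact-match delta-blues track ranks first
```

A real system would mine the parent links from a common-sense knowledge base rather than hard-coding them, but the ranking logic stays the same shape.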
Affective Music Information Retrieval
Much of the appeal of music lies in its power to convey emotions/moods and to
evoke them in listeners. In consequence, the past decade witnessed a growing
interest in modeling emotions from musical signals in the music information
retrieval (MIR) community. In this article, we present a novel generative
approach to music emotion modeling, with a specific focus on the
valence-arousal (VA) dimension model of emotion. The presented generative
model, called \emph{acoustic emotion Gaussians} (AEG), better accounts for the
subjectivity of emotion perception by the use of probability distributions.
Specifically, it learns from the emotion annotations of multiple subjects a
Gaussian mixture model in the VA space with prior constraints on the
corresponding acoustic features of the training music pieces. Such a
computational framework is technically sound, capable of learning in an online
fashion, and thus applicable to a variety of applications, including
user-independent (general) and user-dependent (personalized) emotion
recognition and emotion-based music retrieval. We report evaluations of the
aforementioned applications of AEG on a large-scale emotion-annotated corpus,
AMG1608, to demonstrate the effectiveness of AEG and to showcase how
evaluations are conducted for research on emotion-based MIR. Directions of
future work are also discussed.
Comment: 40 pages, 18 figures, 5 tables, author version
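The building block of the probabilistic view described above, modeling several listeners' valence-arousal (VA) ratings of a clip as a 2-D Gaussian rather than a single point, can be sketched as below. The rating values are invented for illustration; AEG itself mixes many such Gaussians with acoustic priors, which this sketch does not attempt.

```python
# Sketch: fit a single 2-D Gaussian to multiple annotators' (valence, arousal)
# ratings of one clip. The ratings below are invented toy data in [-1, 1].
import math

ratings = [(0.6, 0.5), (0.7, 0.4), (0.5, 0.3), (0.8, 0.6)]

n = len(ratings)
mu_v = sum(v for v, _ in ratings) / n
mu_a = sum(a for _, a in ratings) / n

# Maximum-likelihood covariance entries (divide by n):
c_vv = sum((v - mu_v) ** 2 for v, _ in ratings) / n
c_aa = sum((a - mu_a) ** 2 for _, a in ratings) / n
c_va = sum((v - mu_v) * (a - mu_a) for v, a in ratings) / n

def density(v, a):
    """Evaluate the fitted 2-D Gaussian at (v, a) via the 2x2 inverse."""
    det = c_vv * c_aa - c_va ** 2
    dv, da = v - mu_v, a - mu_a
    q = (c_aa * dv * dv - 2 * c_va * dv * da + c_vv * da * da) / det
    return math.exp(-0.5 * q) / (2 * math.pi * math.sqrt(det))

# The density peaks at the annotators' consensus, so a VA point near the mean
# rating scores far higher than a distant one:
print(density(mu_v, mu_a) > density(-0.5, -0.5))  # True
```

Retrieval then reduces to ranking clips by the density their emotion model assigns to a query point in VA space, which is how a distribution captures rating subjectivity that a single mean point cannot.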
Clustering of Musical Pieces through Complex Networks: an Assessment over Guitar Solos
Musical pieces can be modeled as complex networks. This fosters innovative
ways to categorize music, paving the way towards novel applications in
multimedia domains, such as music didactics, multimedia entertainment and
digital music generation. Clustering these networks through their main metrics
allows grouping similar musical tracks. To show the viability of the approach,
we provide results on a dataset of guitar solos.
Comment: to appear in IEEE MultiMedia magazine
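The pipeline sketched in this abstract, turning each piece into a note-transition network, summarizing it with network metrics, then clustering tracks by those metrics, might look roughly like this. The toy note sequences and the single mean-out-degree metric are illustrative stand-ins; the paper's actual metrics and dataset differ.

```python
# Sketch: melodies as directed note-transition networks, grouped by a simple
# network metric. Note sequences are invented toy data.

def transition_graph(notes):
    """Directed graph as {note: set of successor notes}."""
    g = {}
    for a, b in zip(notes, notes[1:]):
        g.setdefault(a, set()).add(b)
    return g

def mean_out_degree(g):
    """Average number of distinct successors per note."""
    return sum(len(s) for s in g.values()) / len(g)

solos = {
    "solo1": ["E", "G", "A", "E", "G", "B", "E", "G", "A"],  # repetitive riff
    "solo2": ["E", "G", "A", "E", "G", "B", "A", "G", "E"],
    "solo3": ["C", "D", "E", "F", "G", "A", "B", "C", "D"],  # scalar run
}

metrics = {name: mean_out_degree(transition_graph(ns))
           for name, ns in solos.items()}

# Naive 1-D clustering: adjacent tracks whose metric differs by < 0.5
# fall into the same group.
names = sorted(metrics, key=metrics.get)
groups, cur = [], [names[0]]
for prev, name in zip(names, names[1:]):
    if metrics[name] - metrics[prev] < 0.5:
        cur.append(name)
    else:
        groups.append(cur)
        cur = [name]
groups.append(cur)
print(groups)  # the two riff-like solos group together, the scalar run apart
```

A fuller treatment would use metrics such as clustering coefficient and average path length, and a proper clustering algorithm over the resulting feature vectors.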
Audio Content-Based Music Retrieval
The rapidly growing corpus of digital audio material requires novel
retrieval strategies for exploring large music collections. Traditional retrieval strategies rely on metadata that describe the actual audio content in words. When such textual descriptions are unavailable, one requires content-based retrieval strategies that utilize only the raw audio material. In this contribution, we discuss content-based retrieval strategies that
follow the query-by-example paradigm: given an audio query, the task is to retrieve all documents that are somehow similar or related to the query from a music collection. Such strategies can be loosely classified according to their "specificity", which refers to the degree of similarity between the query and the database documents. Here, high specificity refers to a strict notion of similarity, whereas low specificity refers to a rather vague one. Furthermore, we introduce a second classification principle based on "granularity", where one distinguishes between fragment-level and document-level retrieval. Using a classification scheme based on specificity and granularity, we identify various classes of retrieval scenarios, which comprise "audio identification", "audio matching", and "version
identification". For these three important classes, we give an overview of representative state-of-the-art approaches, which also illustrate the sometimes subtle but crucial differences between the retrieval scenarios. Finally, we give an outlook on a user-oriented retrieval system, which combines the various retrieval strategies in a unified framework
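The fragment-level query-by-example idea from this abstract can be sketched as a sliding-window match: a short query feature sequence is compared against every offset of each document's sequence, and the best alignment scores the document. The 1-D feature values below are invented stand-ins for real audio descriptors such as chroma frames.

```python
# Sketch: fragment-level query-by-example retrieval over toy feature
# sequences. Frame features and distance are deliberately simplistic.

def frame_dist(x, y):
    """Distance between two frame features (toy 1-D case)."""
    return abs(x - y)

def match_cost(query, doc):
    """Minimum average frame distance over all offsets of query in doc."""
    best = float("inf")
    for off in range(len(doc) - len(query) + 1):
        cost = sum(frame_dist(q, doc[off + i])
                   for i, q in enumerate(query)) / len(query)
        best = min(best, cost)
    return best

docs = {
    "track1": [0.1, 0.9, 0.8, 0.2, 0.1, 0.0],
    "track2": [0.5, 0.5, 0.4, 0.6, 0.5, 0.5],
}
query = [0.9, 0.8, 0.2]  # a fragment occurring verbatim inside track1

ranked = sorted(docs, key=lambda d: match_cost(query, docs[d]))
print(ranked[0])  # "track1": it contains the query exactly, cost 0
```

In this framing, a near-zero cost threshold corresponds to high-specificity audio identification, while a looser threshold and a coarser frame distance move toward lower-specificity audio matching.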
- …