3 research outputs found

    Quantify music artist similarity based on style and mood

    Music artist similarity has long been an active research topic in music information retrieval because it is especially useful for music recommendation and organization. It is, however, a difficult problem: the similarity varies significantly with the artistic aspects considered and, most importantly, it is hard to quantify. In this paper, we propose a new framework for quantifying artist similarity. The framework focuses on the style and mood aspects of artists, with descriptions extracted from the authoritative information available on the All Music Guide website. We then generate joint style-mood taxonomies using a hierarchical co-clustering algorithm and quantify the semantic similarities between style/mood terms based on the taxonomy structure and the positions of the terms within it. Finally, we calculate artist similarities from all the style/mood terms used to describe the artists. Experiments demonstrate the effectiveness of our framework.
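
    The method described above lends itself to a short illustration: compute term-to-term similarity from positions in a taxonomy, then aggregate over the artists' term sets. The toy taxonomy, the Wu-Palmer-style formula, and the best-match averaging in the sketch below are assumptions for illustration, not the paper's exact measures.

```python
# Illustrative sketch: term similarity from positions in a taxonomy,
# then artist similarity by aggregating over style/mood term sets.
# The toy taxonomy and the Wu-Palmer-style formula are assumptions,
# not the exact measures used in the paper.

# child -> parent edges of a tiny style-mood joint taxonomy (hypothetical)
PARENT = {
    "hard rock": "rock", "soft rock": "rock",
    "rock": "energetic", "ballad": "mellow",
    "energetic": "root", "mellow": "root",
}

def path_to_root(term):
    """Return the list of nodes from a term up to the taxonomy root."""
    path = [term]
    while term in PARENT:
        term = PARENT[term]
        path.append(term)
    return path

def term_similarity(a, b):
    """Wu-Palmer-style similarity: 2*depth(LCA) / (depth(a) + depth(b))."""
    pa, pb = path_to_root(a), path_to_root(b)
    ancestors_a = set(pa)
    lca = next(t for t in pb if t in ancestors_a)  # lowest common ancestor
    depth = lambda t: len(path_to_root(t)) - 1     # root has depth 0
    return 2.0 * depth(lca) / (depth(a) + depth(b) or 1)

def artist_similarity(terms_x, terms_y):
    """Average best-match term similarity over the two artists' term sets."""
    best = [max(term_similarity(t, u) for u in terms_y) for t in terms_x]
    return sum(best) / len(best)

print(artist_similarity({"hard rock", "energetic"}, {"soft rock"}))
```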

    Interactive Latent Space for Mood-Based Music Recommendation

    The way we listen to music has changed fundamentally over the past two decades with the increasing availability of digital recordings and the portability of music players. Research in music recommendation has attracted millions of users to online music streaming services containing tens of millions of tracks (e.g., Spotify, Pandora). The main focus of recommender systems research to date has been algorithmic accuracy and the optimization of ranking metrics. However, recent work has highlighted the importance of other aspects of the recommendation process, including explanation, transparency, control, and user experience in general. Building on these aspects, this dissertation explores user interaction, control, and visual explanation of music-related mood metadata during the recommendation process. It introduces a hybrid recommender system that suggests music artists by combining mood-based and audio content filtering in a novel interactive interface. The main vehicle for exploration and discovery in the music collection is a novel visualization that maps moods and artists in the same latent space, built upon reduced dimensions of high-dimensional artist-mood associations. Because it is not known what the reduced dimensions represent, this work uses a hierarchical mood model to explain the constructed space. Results of two user studies, each with over 200 participants, show that visualization and interaction in a latent space improve acceptance and understanding of both metadata and item recommendations. However, too much of either can result in cognitive overload and a negative impact on user experience. The proposed visual mood space and interactive features, along with these findings, aim to inform the design of future interactive recommendation systems.
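
    The shared latent space described above can be illustrated with a small sketch: factorize an artist-by-mood association matrix and plot artists and moods in the same reduced coordinates. The toy matrix and the choice of a truncated SVD are assumptions; the abstract does not specify the dissertation's exact construction.

```python
# Minimal sketch: co-embed artists and moods in one 2-D latent space via
# truncated SVD of an artist-by-mood association matrix. The toy data and
# the choice of SVD are assumptions, not the dissertation's exact method.
import numpy as np

artists = ["Artist A", "Artist B", "Artist C"]
moods = ["happy", "sad", "aggressive", "calm"]

# rows: artists, columns: moods, values: association strength (hypothetical)
M = np.array([
    [0.9, 0.1, 0.7, 0.0],
    [0.1, 0.8, 0.0, 0.9],
    [0.5, 0.4, 0.2, 0.3],
])

U, s, Vt = np.linalg.svd(M, full_matrices=False)
k = 2  # keep two latent dimensions so the space can be drawn

# Project both item types into the same space, splitting the singular
# values symmetrically between the two factors.
artist_coords = U[:, :k] * np.sqrt(s[:k])
mood_coords = Vt[:k, :].T * np.sqrt(s[:k])

for name, xy in zip(artists, artist_coords):
    print(f"{name}: {xy.round(2)}")
for name, xy in zip(moods, mood_coords):
    print(f"{name}: {xy.round(2)}")
```

    The symmetric sqrt-of-singular-values scaling is one common convention for placing rows and columns of a factorized matrix in a comparable space; nearby artist and mood points then indicate strong associations, which is what the visualization exploits.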

    User-centric Music Information Retrieval

    The rapid growth of the Internet and advances in Web technologies have given users access to large amounts of online music data, including music acoustic signals, lyrics, style/mood labels, and user-assigned tags. This progress has made music listening more fun, but it has raised the issue of how to organize this data and, more generally, how computer programs can assist users in their music experience. An important subject in computer-aided music listening is music retrieval, i.e., efficiently helping users locate the music they are looking for. Traditionally, songs were organized in a hierarchical structure such as genre > artist > album > track to facilitate navigation. However, the intentions of users are often hard to capture in such a simply organized structure: users may want to listen to music of a particular mood, style, or topic, and/or songs similar to some given music samples. This motivated us to work on a user-centric music retrieval system that improves users' satisfaction. Traditional music information retrieval research was mainly concerned with classification, clustering, identification, and similarity search over acoustic music data, using feature extraction algorithms and machine learning techniques. More recently, music information retrieval research has focused on utilizing other types of data, such as lyrics, user access patterns, and user-defined tags, and on targeting non-genre categories for classification, such as mood labels and styles. This dissertation focused on investigating and developing effective data mining techniques for (1) organizing and annotating music data with styles, moods, and user-assigned tags; (2) performing effective analysis of music data with features from diverse information sources; and (3) recommending songs to users utilizing both content features and user access patterns, as sketched below.
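
    Point (3) above, hybrid recommendation from content features and access patterns, can be sketched as a weighted blend of two item-item similarities. The toy feature vectors, listening logs, and blending weight alpha below are illustrative assumptions, not the dissertation's actual model.

```python
# Illustrative sketch of hybrid song scoring: blend content-feature
# similarity with co-access similarity mined from user listening logs.
# The toy vectors, logs, and weight alpha are assumptions, not the
# dissertation's exact model.
import numpy as np

songs = ["s1", "s2", "s3"]
features = np.array([  # e.g. audio/lyric features per song (hypothetical)
    [0.9, 0.1],
    [0.8, 0.3],
    [0.1, 0.9],
])

# user access patterns: which songs each user listened to (hypothetical)
user_logs = [{"s1", "s2"}, {"s1", "s2"}, {"s2", "s3"}]

def cosine(a, b):
    """Content-based similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def co_access(i, j):
    """Jaccard similarity of the user sets that accessed songs i and j."""
    ui = {u for u, log in enumerate(user_logs) if songs[i] in log}
    uj = {u for u, log in enumerate(user_logs) if songs[j] in log}
    return len(ui & uj) / len(ui | uj) if ui | uj else 0.0

def hybrid_score(i, j, alpha=0.5):
    """Weighted blend of content and access-pattern similarity."""
    return alpha * cosine(features[i], features[j]) + (1 - alpha) * co_access(i, j)

# Rank candidate songs against a seed song the user just played.
seed = 0
ranked = sorted((j for j in range(len(songs)) if j != seed),
                key=lambda j: hybrid_score(seed, j), reverse=True)
print([songs[j] for j in ranked])
```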