
    The Topology of Music Recommendation Networks

    Full text link
    We study the topology of several music recommendation networks, which arise from relationships between artists, co-occurrence of songs in playlists, or experts' recommendations. The analysis uncovers the emergence of complex network phenomena in this kind of recommendation network, built considering artists as nodes and their resemblance as links. We observe structural properties that provide some hints on navigation and possible optimizations in the design of music recommendation systems. Finally, the analysis derived from existing music knowledge sources provides a deeper understanding of human music similarity perception. Comment: 15 pages, 3 figures
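
    A minimal sketch (not the paper's code) of how such an artist network could be built and probed for the structural properties the abstract mentions (degree distribution, clustering, path lengths). The edge list below is a hypothetical placeholder, and the choice of networkx is an assumption.

```python
import networkx as nx

# Hypothetical artist-similarity pairs: an edge links two artists that a
# recommendation source (playlists, experts, co-occurrence) deems similar.
similar_pairs = [
    ("Artist A", "Artist B"),
    ("Artist B", "Artist C"),
    ("Artist A", "Artist C"),
    ("Artist C", "Artist D"),
]

G = nx.Graph()
G.add_edges_from(similar_pairs)

# Indicators commonly used to detect small-world / complex-network structure.
print("average clustering:", nx.average_clustering(G))
print("average shortest path:", nx.average_shortest_path_length(G))
print("degree sequence:", sorted(d for _, d in G.degree()))
```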

    The Social Network of Contemporary Popular Musicians

    Full text link
    In this paper we analyze two social network datasets of contemporary musicians constructed from allmusic.com (AMG), a music and artists' information database: one is the collaboration network, in which two musicians are connected if they have performed in or produced an album together, and the other is the similarity network, in which they are connected if they were musically similar according to music experts. We find that, while both networks exhibit typical features of social networks such as high transitivity, several key network features, such as the degree and betweenness distributions, suggest fundamental differences in how music collaboration and music similarity networks are created. Comment: 7 pages, 2 figures
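
    A rough sketch, not the authors' analysis, of the kind of comparison described above: computing the transitivity and the degree and betweenness distributions of two musician networks. The edge lists are placeholders.

```python
import networkx as nx

# Hypothetical musician networks; nodes are musicians.
collaboration_edges = [("M1", "M2"), ("M2", "M3"), ("M1", "M3"), ("M3", "M4")]
similarity_edges    = [("M1", "M4"), ("M2", "M4"), ("M3", "M4")]

for name, edges in [("collaboration", collaboration_edges),
                    ("similarity", similarity_edges)]:
    G = nx.Graph(edges)
    degrees = sorted((d for _, d in G.degree()), reverse=True)
    betweenness = nx.betweenness_centrality(G)
    print(name,
          "| transitivity:", round(nx.transitivity(G), 3),
          "| degrees:", degrees,
          "| max betweenness:", round(max(betweenness.values()), 3))
```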

    An Industrial-strength content-based music recommendation system

    No full text
    We present a metadata-free system for the interaction with massive collections of music, the MusicSurfer. MusicSurfer automatically extracts descriptions related to instrumentation, rhythm and harmony from music audio signals. Together with efficient similarity metrics, these descriptions allow navigation of multimillion-track music collections in a flexible and efficient way, without the need for metadata or human ratings. This work is partially funded by the European Union through the SIMAC IST-FP6-507142 project (http://www.semanticaudio.org).
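
    A toy sketch of metadata-free, content-based retrieval in the spirit described above: each track is summarized by a descriptor vector (here a hypothetical mix of instrumentation, rhythm and harmony features) and neighbours are ranked by a similarity metric. These are not the system's actual features or metric; cosine similarity is an assumption.

```python
import numpy as np

# Hypothetical descriptor vectors extracted from audio (one per track).
descriptors = {
    "track_1": np.array([0.8, 0.1, 0.3]),
    "track_2": np.array([0.7, 0.2, 0.4]),
    "track_3": np.array([0.1, 0.9, 0.8]),
}

def most_similar(query_id, k=2):
    """Rank the other tracks by cosine similarity to the query track."""
    q = descriptors[query_id]
    scores = {
        tid: float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
        for tid, v in descriptors.items() if tid != query_id
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

print(most_similar("track_1"))
```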

    Content-based music audio recommendation

    No full text
    Additional authors: José Pedro Garcia, Thomas Aussenac, Ricard Marxer, Jaume Masip, Òscar Celma, David García, Emilia Gómez, Fabien Gouyon, Enric Guaus, Perfecto Herrera, Jordi Masseguer, Beesuan Ong, Miguel Ramirez, Sebastian Streich and Xavier Serra. We present the MusicSurfer, a metadata-free system for the interaction with massive collections of music. MusicSurfer automatically extracts descriptions related to instrumentation, rhythm and harmony from music audio signals. Together with efficient similarity metrics, these descriptions allow navigation of multimillion-track music collections in a flexible and efficient way, without the need for metadata or human ratings.

    Topology of music recommendation networks

    No full text
    We introduce the concept of the decision cost of a spatial graph, which measures the disorder of a given network taking into account not only the connections between nodes but also their positions on a two-dimensional map. The influence of the network size is evaluated, and we show that normalizing the decision cost allows us to compare the degree of disorder of networks of different sizes. Under this framework, we measure the disorder of the connections between the airports of two different countries and draw some conclusions about which of them is more disordered. The introduced concepts (decision cost and disorder of spatial networks) can easily be extended to Euclidean networks of higher dimensions, and also to networks whose nodes have a certain fitness property (i.e., one-dimensional). When analyzing human-made structures, complex network theory has been a useful instrument for studying their complexity. In many works in this field, the projected network is obtained by detecting connections between its fundamental units (i.e., nodes), disregarding any information about their spatial distribution. In this paper, we study the importance of the Euclidean position of nodes for a key property of the network: the decision cost, that is, the difficulty for a blind agent to make its way from a starting node to a target node. We demonstrate how this parameter is a good indicator of the disorder of a spatial network, and we apply this measure to two different airport networks. Financial support was provided by MCyT-FEDER Spain, Project Nos. BFM2002-04369 and BFM2003-07850, the Generalitat de Catalunya, and the SIMAC IST-FP6-507142 European project.
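
    The abstract does not give the exact formula for the decision cost, so the following is only one plausible formalization (an assumption, not the paper's definition): a blind agent follows a shortest path from source to target on a spatial graph, and each time it must choose among k outgoing options it pays log2(k) bits of "decision cost". The network below is hypothetical.

```python
import math
import networkx as nx

# Hypothetical spatial network: nodes carry 2-D positions, edges are routes.
G = nx.Graph()
G.add_nodes_from([
    ("a", {"pos": (0.0, 0.0)}),
    ("b", {"pos": (1.0, 0.0)}),
    ("c", {"pos": (1.0, 1.0)}),
    ("d", {"pos": (2.0, 1.0)}),
])
G.add_edges_from([("a", "b"), ("b", "c"), ("b", "d"), ("c", "d")])

def decision_cost(G, source, target):
    """Bits of choice accumulated along one shortest path (illustrative only)."""
    path = nx.shortest_path(G, source, target)
    cost = 0.0
    for node in path[:-1]:
        options = G.degree(node)
        cost += math.log2(options) if options > 1 else 0.0
    return cost

print(decision_cost(G, "a", "d"))
```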

    The complex network of musical tastes

    No full text
    We present an empirical study of the evolution of a social network constructed under the influence of musical tastes. The network is obtained thanks to the selfless effort of a broad community of users who share playlists of their favourite songs with other users. When two songs co-occur in a playlist, a link is created between them, leading to a complex network where songs are the fundamental nodes. In this representation, songs in the same playlist could belong to different musical genres, but they are prone to be linked by a certain musical taste (e.g. if songs A and B co-occur in several playlists, a user who likes A will probably also like B). Indeed, playlist collections such as the one under study are the basic material that feeds some commercial music recommendation engines. Since playlists have an input date, we are able to evaluate the topology of this particular complex network from scratch, observing how its characteristic parameters evolve in time. We compare our results with those obtained from an artificial network defined by means of a null model. This comparison yields some insight into the evolution and structure of such a network, which could be used as ground data for the development of proper models. Finally, we gather information that can be useful for the development of music recommendation engines and give some hints about how top hits appear. JMB acknowledges VDP Servedio for his help with the Net software [22] and also financial support from MCyT-FEDER (Spain, project BFM2003-07850) and from the Generalitat de Catalunya. SB acknowledges the Yeshaya Horowitz Association through the Center for Complexity Science.
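
    A small sketch, with made-up playlists, of the construction described above: songs are nodes and two songs are linked whenever they co-occur in a playlist; because playlists are dated, the network can be rebuilt cumulatively to watch its parameters evolve in time.

```python
from itertools import combinations
import networkx as nx

# Hypothetical dated playlists: (date, list of song ids).
playlists = [
    ("2004-01", ["s1", "s2", "s3"]),
    ("2004-02", ["s2", "s4"]),
    ("2004-03", ["s1", "s4", "s5"]),
]

G = nx.Graph()
for date, songs in sorted(playlists):
    # Link every pair of songs that share this playlist.
    G.add_edges_from(combinations(songs, 2))
    print(date,
          "| nodes:", G.number_of_nodes(),
          "| edges:", G.number_of_edges(),
          "| clustering:", round(nx.average_clustering(G), 3))
```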

    Nearest-neighbor automatic sound classification with a WordNet taxonomy

    No full text
    Sound engineers need to access vast collections of sound effects for their film and video productions. Sound effects providers rely on text-retrieval techniques to offer their collections. Currently, annotation of audio content is done manually, which is an arduous task. Automatic annotation methods, normally fine-tuned to reduced domains such as musical instruments or reduced sound effects taxonomies, are not mature enough for labeling any possible sound in great detail. A general sound recognition tool would require, first, a taxonomy that represents the world and, second, thousands of classifiers, each specialized in distinguishing little details. We report experimental results on a general sound annotator. To tackle the taxonomy definition problem we use WordNet, a semantic network that organizes real-world knowledge. In order to overcome the need for a huge number of classifiers to distinguish many different sound classes, we use a nearest-neighbor classifier with a database of isolated sounds unambiguously linked to WordNet concepts. A concept prediction rate of 30% is achieved on a database of over 50,000 sounds and over 1,600 concepts.
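
    A bare-bones sketch of the classification scheme described above: each reference sound is a feature vector labelled with a WordNet-style concept, and an unseen sound inherits the concept of its nearest neighbour. The features, distance measure and labels below are placeholders, not the paper's data.

```python
import numpy as np

# Hypothetical reference database: feature vector -> WordNet-style concept id.
reference = [
    (np.array([0.9, 0.1]), "dog.n.01"),
    (np.array([0.2, 0.8]), "rain.n.01"),
    (np.array([0.85, 0.2]), "bark.n.02"),
]

def classify(features):
    """Return the concept of the closest reference sound (1-NN, Euclidean)."""
    distances = [(float(np.linalg.norm(features - vec)), concept)
                 for vec, concept in reference]
    return min(distances)[1]

print(classify(np.array([0.88, 0.15])))
```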

    MUCOSA: a music content semantic annotator

    No full text
    Paper presented at: ISMIR 2005, 6th International Conference on Music Information Retrieval, held from 11 to 15 September 2005 in London, United Kingdom. MUCOSA (Music Content Semantic Annotator) is an environment for the annotation and generation of music metadata at different levels of abstraction. It is composed of three tiers: an annotation client that deals with micro-annotations (i.e. within-file annotations), a collection tagger, which deals with macro-annotations (i.e. across-files annotations), and a collaborative annotation subsystem, which manages large-scale annotation tasks that can be shared among different research centres. The annotation client is an enhanced version of WaveSurfer, a speech annotation tool. The collection tagger includes tools for automatic generation of unary descriptors, invention of new descriptors, and propagation of descriptors across sub-collections or playlists. Finally, the collaborative annotation subsystem, based on Plone, makes it possible to share the annotation chores and results between several research institutions. A collection of annotated songs is available as a “starter pack” to all the individuals or institutions that are eager to join this initiative. The research and development reported here was partially funded by the EU-FP6-IST-507142 SIMAC (Semantic Interaction with Music Audio Contents) project. The authors would like to thank Edgar Barroso, and the Audioclas and CLAM teams for their support to the project.
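
    A minimal sketch, under assumed data structures, of the "propagation of descriptors across sub-collections or playlists" that the collection tagger is said to support: a descriptor attached at the playlist level is pushed down to every file it contains, without overwriting existing per-file annotations. The dictionaries and function names are hypothetical, not MUCOSA's API.

```python
# Hypothetical per-file annotation store and playlist definition.
annotations = {"song1.wav": {"genre": "jazz"}, "song2.wav": {}}
playlist = ["song1.wav", "song2.wav"]

def propagate(playlist, descriptor, value):
    """Attach a descriptor to every file in a playlist unless already set."""
    for filename in playlist:
        annotations.setdefault(filename, {}).setdefault(descriptor, value)

propagate(playlist, "mood", "relaxed")
print(annotations)
```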