2 research outputs found

    Categorización de letras de canciones de un portal web usando agrupación

    Classification and clustering algorithms have been applied widely in Music Information Retrieval (MIR) to organize music repositories into categories or clusters, such as genre, mood, or topic, using audio alone or audio combined with lyrics. However, research on clustering that uses lyrics information only is scarce. The main goal of this work is to define an unsupervised text mining model for grouping lyrics compiled on a web portal, using lyrics features only, in order to offer better search options to the portal's users. The proposed model first performs language identification on the lyrics using Naive Bayes and n-grams (for this work, 30,000 lyrics in Spanish and 30,000 in English were identified). Next, the lyrics are represented in a Bag of Words (BOW) vector space model, using Part of Speech (POS) features and transforming the data into TF-IDF format. Then the appropriate number of clusters (K) is estimated, and partitional and hierarchical methods are used to perform the clustering. To evaluate the clustering results, measures such as the Davies-Bouldin Index (DBI) and intra-cluster and inter-cluster similarity are used. Finally, the resulting clusters are tagged using top words and association rules per group. Experiments show that music can be organized into related groups such as genre, mood, sentiment, and topic, and tagged with unsupervised techniques using lyrics information only.
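    The first step described above, language identification with Naive Bayes over character n-grams, can be sketched as follows. This is a minimal, illustrative implementation with two tiny toy training sets, not the abstract's actual 60,000-lyric corpus or its exact feature setup:

    ```python
    # Minimal sketch: character-trigram language identification with a
    # multinomial Naive Bayes classifier (add-one smoothing). Toy training
    # data below is illustrative, not the portal's lyrics corpus.
    from collections import Counter
    import math

    def ngrams(text, n=3):
        text = f" {text.lower()} "  # pad so word boundaries become features
        return [text[i:i + n] for i in range(len(text) - n + 1)]

    class NaiveBayesLangID:
        def __init__(self, n=3):
            self.n = n
            self.counts = {}   # language -> Counter of n-gram frequencies
            self.totals = {}   # language -> total n-gram count
            self.vocab = set()

        def train(self, language, texts):
            c = self.counts.setdefault(language, Counter())
            for t in texts:
                grams = ngrams(t, self.n)
                c.update(grams)
                self.vocab.update(grams)
            self.totals[language] = sum(c.values())

        def predict(self, text):
            best, best_lp = None, float("-inf")
            for lang, c in self.counts.items():
                # Log-likelihood under the language model, Laplace-smoothed.
                denom = self.totals[lang] + len(self.vocab)
                lp = sum(math.log((c[g] + 1) / denom)
                         for g in ngrams(text, self.n))
                if lp > best_lp:
                    best, best_lp = lang, lp
            return best

    clf = NaiveBayesLangID()
    clf.train("es", ["el corazón que te quiero", "la noche y la luna"])
    clf.train("en", ["the heart that I love you", "the night and the moon"])
    print(clf.predict("quiero la luna"))
    ```

    With realistic training volumes, character trigrams are usually enough to separate Spanish from English reliably, which is why they are a common choice for this preprocessing step.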

    Multi-modal music information retrieval - visualisation and evaluation of clusterings by both audio and lyrics

    Navigation in and access to the contents of digital audio archives have become increasingly important topics in Information Retrieval. Both private and commercial music collections are growing in terms of both size and acceptance in the user community. Content-based approaches relying on signal-processing techniques have been used in Music Information Retrieval for some time to represent the acoustic characteristics of pieces of music, which may be used for collection organisation or retrieval tasks. However, music is not defined by acoustic characteristics only, but also, sometimes even to a large degree, by its contents in terms of lyrics. A song's lyrics provide additional information to search for and may be more representative of specific musical genres than the acoustic content, e.g. 'love songs' or 'Christmas carols'. We therefore suggest an improved indexing of audio files by two modalities: combinations of audio features and song lyrics can be used to organise audio collections and to display them via map-based interfaces. Specifically, we use Self-Organising Maps as the visualisation and interface metaphor. Separate maps are created and linked to provide a multi-modal view of an audio collection. Moreover, we introduce quality measures for quantitative validation of cluster spreads across the resulting multiple topographic mappings provided by the Self-Organising Maps.
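    The Self-Organising Map at the heart of this approach projects high-dimensional feature vectors (audio or lyrics features) onto a low-dimensional grid so that similar items land in nearby cells. The following is a minimal pure-Python sketch of the standard SOM training loop, with toy 2-D data standing in for real feature vectors; it is illustrative only and not the paper's implementation:

    ```python
    # Minimal Self-Organising Map sketch: a grid of weight vectors is
    # trained so that similar inputs map to nearby grid cells.
    import math
    import random

    class SOM:
        def __init__(self, rows, cols, dim, seed=0):
            rnd = random.Random(seed)
            self.rows, self.cols, self.dim = rows, cols, dim
            # One weight vector per grid cell, randomly initialised in [0, 1).
            self.w = [[[rnd.random() for _ in range(dim)]
                       for _ in range(cols)] for _ in range(rows)]

        def bmu(self, x):
            # Best-matching unit: grid cell whose weights are closest to x.
            best, best_d = (0, 0), float("inf")
            for r in range(self.rows):
                for c in range(self.cols):
                    d = sum((wi - xi) ** 2
                            for wi, xi in zip(self.w[r][c], x))
                    if d < best_d:
                        best, best_d = (r, c), d
            return best

        def train(self, data, epochs=50, lr0=0.5, radius0=1.5):
            for t in range(epochs):
                frac = t / epochs
                lr = lr0 * (1 - frac)                 # decaying learning rate
                radius = radius0 * (1 - frac) + 0.01  # shrinking neighbourhood
                for x in data:
                    br, bc = self.bmu(x)
                    for r in range(self.rows):
                        for c in range(self.cols):
                            # Gaussian neighbourhood around the BMU on the grid.
                            dist2 = (r - br) ** 2 + (c - bc) ** 2
                            h = math.exp(-dist2 / (2 * radius ** 2))
                            cell = self.w[r][c]
                            for i in range(self.dim):
                                cell[i] += lr * h * (x[i] - cell[i])

    som = SOM(3, 3, dim=2)
    # Two toy clusters; after training they should occupy different grid regions.
    data = [[0.1, 0.1], [0.15, 0.05], [0.9, 0.9], [0.85, 0.95]]
    som.train(data)
    ```

    The multi-modal scheme in the abstract would train one such map per modality (audio features, lyrics features) and link cells across maps, so users can inspect how the two views of the collection agree or diverge.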