
    Features for the classification and clustering of music in symbolic format

    Master's thesis, Informatics Engineering, Universidade de Lisboa, Faculdade de Ciências, 2008.
    This document describes the work carried out in the Informatics Engineering Project course of the Master in Informatics Engineering at the Faculdade de Ciências, Universidade de Lisboa. Music Information Retrieval is nowadays a highly active branch of research and development in computer science, covering several topics, including music genre classification. The work presented here focuses on track and genre classification of music stored in MIDI format. To address the problem of MIDI track classification, we extract a set of descriptors, based on the pitches and durations that describe each track, and use them to train a classifier implemented as a Neural Network. Tracks are classified into four classes: Melody, Harmony, Bass and Drums. To characterize the musical content of each track, a vector of numeric descriptors, commonly known as a shallow structure description, is extracted; these vectors are then used as inputs for the classifier, which was implemented in the Matlab environment. In the genre classification task, two approaches are used: Language Modeling, in which a transition-probability matrix is created for each type of track (Melody, Harmony, Bass and Drums) and for each genre; and an approach based on Neural Networks, where a vector of numeric descriptors is extracted from each track and fed to a Neural Network classifier. Six MIDI music corpora, from six different genres (Blues, Country, Jazz, Metal, Punk and Rock), were assembled for the experiments. These genres were selected because they largely share the same base instruments, such as bass, drums, piano or guitar. The chosen genres also share some characteristics with one another, so that the classification is not trivial and the robustness of the classifiers is tested. Track classification experiments were run first with all descriptors and then with only the best descriptors, showing that using all descriptors is the wrong approach, since some descriptors confuse the classifier; carefully selected descriptors proved to be the best way to classify these MIDI tracks. Genre classification experiments showed that the single-instrument classifiers achieved the best results, with four genres (Jazz, Country, Metal and Punk) reaching success rates above 80%. Future work includes: genetic algorithms for descriptor selection; structuring tracks and songs; and merging all presented classifiers into one full Automatic Genre Classification System.
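    To make the language-modeling approach concrete, here is a minimal sketch, not the thesis's actual (Matlab) implementation: a first-order transition-probability matrix is estimated per genre, and a new track is assigned to the genre whose matrix gives it the highest log-likelihood. The choice of pitch classes as states and the add-one smoothing are illustrative assumptions.

```python
# Hedged sketch of genre classification via language modeling: one
# transition-probability matrix per genre, scored by log-likelihood.
import numpy as np

N_STATES = 12  # pitch classes 0..11 (an assumed state space)

def transition_matrix(tracks, smoothing=1.0):
    """Estimate P(next pitch class | current pitch class) from note sequences."""
    counts = np.full((N_STATES, N_STATES), smoothing)
    for notes in tracks:                       # each track: list of MIDI pitches
        pcs = [n % 12 for n in notes]
        for a, b in zip(pcs, pcs[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def classify(notes, genre_models):
    """Return the genre whose transition model best explains the track."""
    pcs = [n % 12 for n in notes]
    scores = {genre: sum(np.log(P[a, b]) for a, b in zip(pcs, pcs[1:]))
              for genre, P in genre_models.items()}
    return max(scores, key=scores.get)

# Toy usage (the real corpora cover Blues, Country, Jazz, Metal, Punk, Rock):
models = {
    "Jazz": transition_matrix([[60, 63, 67, 70, 72]]),
    "Punk": transition_matrix([[64, 64, 64, 69, 69]]),
}
print(classify([60, 63, 67, 70], models))
```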

    Logic-based Modelling of Musical Harmony for Automatic Characterisation and Classification

    Music, like other online media, is undergoing an information explosion. Massive online music stores such as the iTunes Store or Amazon MP3, and their counterparts, the streaming platforms such as Spotify, Rdio and Deezer, offer more than 30 million pieces of music to their customers, that is to say, anybody with a smartphone. Indeed, these ubiquitous devices offer vast storage capacities and cloud-based apps that can cater to any music request. As Paul Lamere puts it: “we can now have a virtually endless supply of music in our pocket. The ‘bottomless iPod’ will have as big an effect on how we listen to music as the original iPod had back in 2001. But with millions of songs to choose from, we will need help finding music that we want to hear [...]. We will need new tools that help us manage our listening experience.” Retrieval, organisation, recommendation, annotation and characterisation of musical data is precisely what the Music Information Retrieval (MIR) community has been working on for at least 15 years (Byrd and Crawford, 2002). It is clear from its historical roots in practical fields such as Information Retrieval, Information Systems, Digital Resources and Digital Libraries, but also from the publications presented at the first International Symposium on Music Information Retrieval in 2000, that MIR has been aiming to build tools to help people navigate, explore and make sense of music collections (Downie et al., 2009). That also includes analytical tools to support

    Musical Genre Identification and Differentiation of Rock, R&B/Hip-Hop, and Christian Songs Through Harmonic Analysis

    This thesis attempts to identify and distinguish musical genres through harmonic analysis. The genres of Rock, R&B/Hip-Hop, and Christian have been selected for this study. The top ten songs from each genre (as listed by Billboard’s Year-End Charts) are analyzed and contrasted with those of the other genres in an attempt to prove that harmonic analysis alone is sufficient to identify the genre of an unknown song. Heavy in analysis, this thesis finds structure in music and uses that structure to more deeply appreciate not only the study of genre but also music itself.
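    As a hedged illustration of how harmonic analysis alone could separate genres, the sketch below builds a chord-frequency profile per genre from its songs and assigns an unknown song to the most similar profile. The Roman-numeral chord vocabulary and cosine similarity are assumptions for illustration, not the author's exact procedure.

```python
# Illustrative sketch: genre identification by chord-frequency profiles.
from collections import Counter
import math

def chord_profile(songs):
    """Normalized chord-frequency vector over all songs of one genre."""
    counts = Counter(chord for song in songs for chord in song)
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def cosine(p, q):
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(p.get(k, 0.0) * q.get(k, 0.0) for k in set(p) | set(q))
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm(p) * norm(q))

def identify(song, profiles):
    """Assign a song (chord sequence) to the genre with the closest profile."""
    target = chord_profile([song])
    return max(profiles, key=lambda g: cosine(target, profiles[g]))

# Toy usage: songs as Roman-numeral chord sequences.
profiles = {
    "Rock":      chord_profile([["I", "IV", "V", "IV"], ["I", "V", "vi", "IV"]]),
    "Christian": chord_profile([["I", "V", "vi", "IV"], ["IV", "I", "V", "vi"]]),
}
print(identify(["I", "IV", "V", "I"], profiles))
```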

    The Harmonic Walk: an interactive physical environment to learn tonal melody accompaniment

    The Harmonic Walk is an interactive physical environment designed for learning and practicing the accompaniment of a tonal melody. Employing a highly innovative multimedia system, the application offers the user the possibility of getting in touch with some fundamental features of tonal music in a very simple and readily available way. Although tonal music is very common in our lives, unskilled people, as well as music students and even professionals, are scarcely conscious of what these features actually are. The Harmonic Walk, through body movement in space, can provide all these users with a live experience of tonal melody structure, chord progressions, melody accompaniment, and improvisation. Enactive knowledge and embodied cognition allow the user to build an inner map of these musical features, which can be acted upon by moving on the active surface with a simple step. Thorough assessment tests with musician and non-musician high school students demonstrated the high communicative power and efficiency of the Harmonic Walk application, both in improving musical knowledge and in accomplishing complex musical tasks.
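    A minimal sketch of the kind of mapping such a system needs, assuming the active floor is divided into zones tied to the diatonic chords of the melody's key; the zone layout and triad voicing are illustrative assumptions, and the actual Harmonic Walk mapping may differ.

```python
# Hedged sketch: map a step position on the active surface to a diatonic chord.
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the major scale

def diatonic_triad(key_root, degree):
    """MIDI pitches of the triad built on the given scale degree (0-6)."""
    return [key_root + MAJOR_SCALE[(degree + i) % 7] + 12 * ((degree + i) // 7)
            for i in (0, 2, 4)]

def zone_to_chord(x, floor_width, key_root=60):
    """Map a horizontal step position to one of seven diatonic chord zones."""
    degree = min(int(7 * x / floor_width), 6)
    return diatonic_triad(key_root, degree)

# A step two-thirds across a 4 m wide floor triggers the V chord in C major:
print(zone_to_chord(2.7, 4.0))  # -> [67, 71, 74] (G major triad)
```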

    The rhythmic, harmonic and phrasing language of Lennie Tristano: Analysis and strategies for incorporation in modern jazz improvisation

    Pianist Lennie Tristano is regarded as one of the most original voices in jazz history. His uncompromising adherence to artistic ideals led to both great innovation and obscurity, limiting his impact on future generations of musicians. Until recently, his music has been largely overlooked by musicians and academics alike. Coinciding with a revival of interest in Tristano and his music among modern jazz musicians, this dissertation seeks to investigate the transferability of several aspects of Tristano’s rhythmic, harmonic and phrasing vocabulary to modern jazz improvisation. Building upon existing literature concerning his pedagogy and musical analysis, this research emphasises Tristano’s relevance to modern jazz musicians. Through transcription analysis of two of Tristano’s compositions, Lennie’s Pennies and 317 East 32nd Street, as well as Line Up, an improvisation from the 1955 self-titled album, Tristano’s approach to rhythm, harmony and phrasing is discussed, and several idiosyncratic devices are drawn out for use in the following section of the thesis. These devices include diminution/augmentation, polyrhythm, asymmetrical rhythmic grouping, manipulation of harmonic rhythm, chromaticism, reharmonisation, asymmetrical phrasing, and extended phrase length within an improvised line. Following on from the analysis, the final chapter investigates the transferability of these devices to modern jazz improvisation. A systematic approach to practicing selected devices, with the intention of applying them in modern jazz improvisation, is developed. These strategies are designed to allow modern jazz musicians to see the relevance of Tristano’s style and to facilitate the incorporation of his rhythmic, harmonic and phrasing devices in new settings. Approaching Tristano’s music in this way sustains a model of recontextualising the music of the past to bring innovation to the music of the present.
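    For readers who want to experiment with the rhythmic devices named above, the sketch below illustrates two of them, diminution/augmentation and asymmetrical rhythmic grouping, on a toy note representation; it is an illustrative practice aid, not part of the dissertation.

```python
# Illustrative sketch of two rhythmic devices on a toy duration list.
def augment(durations, factor=2.0):
    """Augmentation (factor > 1) or diminution (factor < 1) of note durations."""
    return [d * factor for d in durations]

def regroup(n_notes, pattern=(3, 3, 2)):
    """Accent positions for an asymmetrical grouping cycled over n notes."""
    accents, pos, i = [], 0, 0
    while pos < n_notes:
        accents.append(pos)
        pos += pattern[i % len(pattern)]
        i += 1
    return accents

# Eight eighth notes grouped 3+3+2 accent positions 0, 3 and 6:
print(regroup(8))                   # -> [0, 3, 6]
print(augment([0.5, 0.5, 1.0], 2))  # -> [1.0, 1.0, 2.0] (augmentation)
```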

    Video2Music: Suitable Music Generation from Videos using an Affective Multimodal Transformer model

    Numerous studies in the field of music generation have demonstrated impressive performance, yet virtually no models are able to directly generate music to match accompanying videos. In this work, we develop Video2Music, a generative music AI framework that can generate music to match a provided video. We first curated a unique collection of music videos, then analysed them to obtain semantic, scene-offset, motion, and emotion features, which are employed as guiding input to our music generation model. We also transcribe the audio files into MIDI and chords, and extract features such as note density and loudness. The result is a rich multimodal dataset, called MuVi-Sync, on which we train a novel Affective Multimodal Transformer (AMT) model to generate music given a video. The model includes a novel mechanism to enforce affective similarity between video and music. Finally, post-processing is performed using a biGRU-based regression model that estimates note density and loudness from the video features, ensuring a dynamic rendering of the generated chords with varying rhythm and volume. In a thorough experiment, we show that our proposed framework can generate music that matches the video content in terms of emotion, and a user study confirms the musical quality as well as the quality of the music-video matching. The proposed AMT model, along with the new MuVi-Sync dataset, represents a promising step for the new task of music generation for videos.
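    As a rough sketch of the post-processing step described above, the following biGRU-based regressor maps per-frame video features to note density and loudness; the feature dimension, hidden size, and temporal pooling are assumptions, not the paper's exact architecture.

```python
# Hedged sketch of a biGRU regressor for note density and loudness (PyTorch).
import torch
import torch.nn as nn

class DensityLoudnessRegressor(nn.Module):
    def __init__(self, feat_dim=512, hidden=128):
        super().__init__()
        # Bidirectional GRU over the sequence of per-frame video features.
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)  # outputs: [note_density, loudness]

    def forward(self, video_feats):            # (batch, frames, feat_dim)
        out, _ = self.gru(video_feats)         # (batch, frames, 2*hidden)
        return self.head(out.mean(dim=1))      # pool over time, regress targets

# Usage: 16 frames of 512-d features for one video clip.
model = DensityLoudnessRegressor()
pred = model(torch.randn(1, 16, 512))
print(pred.shape)  # torch.Size([1, 2])
```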