
    A Survey of Evaluation in Music Genre Recognition


    Logic-based Modelling of Musical Harmony for Automatic Characterisation and Classification

    Music, like other online media, is undergoing an information explosion. Massive online music stores such as the iTunes Store or Amazon MP3, and their streaming counterparts, such as Spotify, Rdio and Deezer, offer more than 30 million pieces of music to their customers, that is to say anybody with a smartphone. Indeed, these ubiquitous devices offer vast storage capacities and cloud-based apps that can cater to any music request. As Paul Lamere puts it: “we can now have a virtually endless supply of music in our pocket. The ‘bottomless iPod’ will have as big an effect on how we listen to music as the original iPod had back in 2001. But with millions of songs to choose from, we will need help finding music that we want to hear [...]. We will need new tools that help us manage our listening experience.” Retrieval, organisation, recommendation, annotation and characterisation of musical data are precisely what the Music Information Retrieval (MIR) community has been working on for at least 15 years (Byrd and Crawford, 2002). It is clear from its historical roots in practical fields such as Information Retrieval, Information Systems, Digital Resources and Digital Libraries, but also from the publications presented at the first International Symposium on Music Information Retrieval in 2000, that MIR has been aiming to build tools that help people navigate, explore and make sense of music collections (Downie et al., 2009). That also includes analytical tools to support [...]

    Convolutional Methods for Music Analysis


    Exploring the Features to Classify the Musical Period of Western Classical Music

    Music Information Retrieval (MIR) focuses on extracting meaningful information from music content. MIR is a growing field of research with many applications, such as music recommendation systems, audio fingerprinting, query-by-humming and music genre classification. This study aims to classify the styles of Western classical music, as this has not been explored to a great extent by MIR. In particular, this research evaluates the impact of different music characteristics on identifying the musical periods of Baroque, Classical, Romantic and Modern. In order to easily extract features related to music theory, symbolic representations (music scores) were used instead of audio. A collection of 870 Western classical piano scores was downloaded from different sources, such as the KernScores library (Humdrum format) and the MuseScore community (MusicXML format). Several global features were constructed by parsing the files and accessing the symbolic information, including notes and durations. These features include melodic intervals, chord types, and pitch and rhythm histograms, and were based on previous studies and music theory research. Using a radial kernel support vector machine algorithm, different classification models were created to analyse the contribution of the main musical properties: rhythm, pitch, harmony and melody. The study findings revealed that the harmony features were significant predictors of the music styles. The research also confirmed that the musical styles evolved gradually and that the changes in the tonal system through the years appeared to be the most significant cue for identifying the styles. This is consistent with the findings of other researchers. The model using all the available features achieved an overall accuracy of 84.3%. Of the four periods studied, music from the Modern period proved the most difficult to classify.
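    As a rough illustration of the pipeline this abstract describes, the sketch below builds a normalised pitch-class histogram per score and feeds it to a radial-kernel SVM. The choice of music21 as the parser and the single histogram feature are assumptions for illustration; the study's actual feature set (melodic intervals, chord types, rhythm histograms) is richer than shown here.

    import numpy as np
    from music21 import converter
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def pitch_class_histogram(path):
        # Parse one symbolic score (MusicXML, **kern, MIDI) and count pitch classes.
        score = converter.parse(path)
        hist = np.zeros(12)
        for n in score.flatten().notes:
            for p in n.pitches:          # a chord contributes all of its pitches
                hist[p.pitchClass] += 1
        total = hist.sum()
        return hist / total if total else hist

    # Usage sketch (paths and period labels stand in for a real corpus):
    # X = np.vstack([pitch_class_histogram(p) for p in paths])
    # clf = SVC(kernel="rbf")           # radial kernel, as in the study
    # print(cross_val_score(clf, X, labels, cv=5).mean())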

    ShredGP: Guitarist Style-Conditioned Tablature Generation

    GuitarPro format tablatures are a type of digital music notation that encapsulates information about guitar playing techniques and fingerings. We introduce ShredGP, a GuitarPro tablature generative Transformer-based model conditioned to imitate the style of four distinct iconic electric guitarists. In order to assess the idiosyncrasies of each guitar player, we adopt a computational musicology methodology by analysing features computed from the tokens yielded by the DadaGP encoding scheme. Statistical analyses of the features evidence significant differences between the four guitarists. We trained two variants of the ShredGP model, one using a multi-instrument corpus, the other using solo guitar data. We present a BERT-based model for guitar player classification and use it to evaluate the generated examples. Overall, results from the classifier show that ShredGP is able to generate content congruent with the style of the targeted guitar player. Finally, we reflect on prospective applications of ShredGP for human-AI music interaction.
    Comment: Accepted for publication at CMMR 202
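    A minimal sketch of the style-analysis step described above, assuming per-song DadaGP token lists: compute a simple technique-usage feature per song and test whether its distribution differs across players. The "bend" token naming and the Kruskal-Wallis test are illustrative assumptions; the paper's actual features and statistical tests are not detailed in the abstract.

    from collections import defaultdict
    from scipy.stats import kruskal

    def bend_rate(tokens):
        # Fraction of tokens mentioning a bend effect (hypothetical token naming).
        if not tokens:
            return 0.0
        return sum(1 for t in tokens if "bend" in t) / len(tokens)

    def test_player_differences(songs):
        # songs: iterable of (player_name, dadagp_token_list) pairs (placeholder data).
        by_player = defaultdict(list)
        for player, tokens in songs:
            by_player[player].append(bend_rate(tokens))
        # H0: all players draw this feature from the same distribution.
        stat, p = kruskal(*by_player.values())
        return stat, p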

    Deep Learning Techniques for Music Generation -- A Survey

    This paper is a survey and an analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content. We propose a methodology based on five dimensions for our analysis:
    - Objective: What musical content is to be generated (e.g., melody, polyphony, accompaniment or counterpoint)? For what destination and use: to be performed by humans (a musical score) or by a machine (an audio file)?
    - Representation: What concepts are to be manipulated (e.g., waveform, spectrogram, note, chord, meter and beat)? What format is to be used (e.g., MIDI, piano roll or text)? How will the representation be encoded (e.g., scalar, one-hot or many-hot)?
    - Architecture: What type(s) of deep neural network are to be used (e.g., feedforward network, recurrent network, autoencoder or generative adversarial network)?
    - Challenge: What are the limitations and open challenges (e.g., variability, interactivity and creativity)?
    - Strategy: How do we model and control the process of generation (e.g., single-step feedforward, iterative feedforward, sampling or input manipulation)?
    For each dimension, we conduct a comparative analysis of various models and techniques, and we propose a tentative multidimensional typology. This typology is bottom-up, based on the analysis of many existing deep-learning-based systems for music generation selected from the relevant literature. These systems are described and used to exemplify the various choices of objective, representation, architecture, challenge and strategy. The last section includes some discussion and prospects.
    Comment: 209 pages. This paper is a simplified version of the book: J.-P. Briot, G. Hadjeres and F.-D. Pachet, Deep Learning Techniques for Music Generation, Computational Synthesis and Creative Systems, Springer, 201
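    To make the Strategy dimension concrete, here is a minimal sketch of iterative feedforward generation with temperature sampling, two of the strategies the survey names. The next_note_probs stand-in (random logits over a one-octave MIDI vocabulary) is an assumption in place of any trained network; the survey itself prescribes no single implementation.

    import numpy as np

    VOCAB = np.arange(60, 73)              # one-octave MIDI pitch vocabulary

    def next_note_probs(history):
        # Placeholder for a trained model's softmax output over VOCAB.
        logits = np.random.randn(len(VOCAB))
        e = np.exp(logits - logits.max())
        return e / e.sum()

    def generate(seed, length=16, temperature=1.0):
        # Iterative feedforward: predict, sample one note, append, repeat.
        melody = list(seed)
        for _ in range(length):
            p = next_note_probs(melody) ** (1.0 / temperature)
            p /= p.sum()                   # re-normalise after tempering
            melody.append(int(np.random.choice(VOCAB, p=p)))
        return melody

    print(generate(seed=[60, 62, 64]))     # e.g. [60, 62, 64, 67, ...]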