
    Music Generation by Deep Learning - Challenges and Directions

    In addition to traditional tasks such as prediction, classification and translation, deep learning is receiving growing attention as an approach to music generation, as witnessed by recent research groups such as Magenta at Google and CTRL (Creator Technology Research Lab) at Spotify. The motivation is to use the capacity of deep learning architectures and training techniques to automatically learn musical styles from arbitrary musical corpora and then to generate samples from the estimated distribution. However, a direct application of deep learning to content generation quickly reaches its limits, as the generated content tends to mimic the training set without exhibiting true creativity. Moreover, deep learning architectures do not offer direct ways of controlling generation (e.g., imposing a tonality or other arbitrary constraints). Furthermore, deep learning architectures on their own are closed automata that generate music autonomously, without human interaction, far from the objective of interactively assisting musicians in composing and refining music. Issues such as control, structure, creativity and interactivity are the focus of our analysis. In this paper, we select some limitations of a direct application of deep learning to music generation, analyze why these issues are not addressed by such an approach, and discuss possible ways of addressing them. Various recent systems are cited as examples of promising directions.
    Comment: 17 pages. arXiv admin note: substantial text overlap with arXiv:1709.01620. Accepted for publication in Special Issue on Deep learning for music and audio, Neural Computing & Applications, Springer Nature, 201
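    A minimal sketch of one control approach in the spirit discussed above: constraining generation by masking a model's output distribution so that only pitches of a chosen tonality can be sampled. The model output, pitch-class numbering and scale set below are illustrative assumptions, not the cited systems' method.

        import numpy as np

        # Illustrative assumption: a trained model emits a probability
        # distribution over the 12 pitch classes for the next note.
        C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # pitch classes of the C major scale

        def sample_constrained(next_note_probs, allowed=C_MAJOR,
                               rng=np.random.default_rng()):
            """Zero out out-of-scale pitch classes and renormalize before sampling."""
            mask = np.array([1.0 if pc in allowed else 0.0 for pc in range(12)])
            constrained = next_note_probs * mask
            constrained /= constrained.sum()  # assumes some in-scale mass remains
            return int(rng.choice(12, p=constrained))

        # Usage: pretend the model emitted a uniform distribution.
        probs = np.full(12, 1.0 / 12)
        print(sample_constrained(probs))  # always a C-major pitch class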

    The Skipping Behavior of Users of Music Streaming Services and its Relation to Musical Structure

    The behavior of users of music streaming services is investigated from the point of view of the temporal dimension of individual songs; specifically, the main object of the analysis is the point in time within a song at which users stop listening and start streaming another song (a "skip"). The main contribution of this study is the ascertainment of a correlation between the distribution in time of skipping events and the musical structure of songs. It is also shown that such a distribution is not only specific to individual songs, but also independent of the cohort of users and, under stationary conditions, of the date of observation. Finally, user behavioral data is used to train a predictor of the musical structure of a song solely from its acoustic content; it is shown that the use of such data, available in large quantities to music streaming services, yields significant improvements in accuracy over the customary fashion of training this class of algorithms, in which only smaller amounts of hand-labeled data are available.
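    As a minimal sketch of the kind of aggregate studied here (an assumed data layout, not the authors' pipeline): skip timestamps can be binned into a per-song histogram over normalized song time, whose peaks can then be compared against annotated section boundaries.

        import numpy as np

        def skip_profile(skip_times_s, song_duration_s, n_bins=100):
            """Empirical distribution of skip events over normalized song time (0..1)."""
            positions = np.asarray(skip_times_s) / song_duration_s
            hist, _ = np.histogram(positions, bins=n_bins, range=(0.0, 1.0))
            return hist / hist.sum()

        # Hypothetical data: many skips cluster just after the 30-second mark.
        rng = np.random.default_rng(0)
        skips = np.clip(rng.normal(32.0, 4.0, size=5000), 0.0, 180.0)
        profile = skip_profile(skips, song_duration_s=180.0)
        print(profile.argmax())  # bin with the highest skip rate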

    Maximum entropy models capture melodic styles

    We introduce a Maximum Entropy model able to capture the statistics of melodies in music. The model can be used to generate new melodies that emulate the style of the musical corpus on which it was trained. Instead of using the n-body interactions of (n-1)-order Markov models, traditionally used in automatic music generation, we use a k-nearest-neighbour model with pairwise interactions only. In that way, we keep the number of parameters low and avoid the over-fitting problems typical of Markov models. We show that long-range musical phrases do not need to be explicitly enforced using high-order Markov interactions, but can instead emerge from multiple, competing, pairwise interactions. We validate our Maximum Entropy model by assessing how much the generated sequences capture the style of the original corpus without plagiarizing it. To this end, we use a data-compression approach to discriminate the levels of borrowing and innovation featured by the artificial sequences. The results show that our modelling scheme outperforms both fixed-order and variable-order Markov models. This shows that, despite being based only on pairwise interactions, this Maximum Entropy scheme opens the possibility of generating musically sensible alterations of the original phrases, providing a way to generate innovation.
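    A minimal sketch of the general technique (a pairwise maximum-entropy model sampled with Gibbs updates); the energy parametrization, interaction range and random toy parameters below are assumptions for illustration, not the authors' fitted model.

        import numpy as np

        rng = np.random.default_rng(0)
        Q, T, K = 12, 32, 4   # alphabet size (pitch classes), melody length, interaction range

        # Illustrative parameters; a real model would fit these to a corpus.
        h = rng.normal(0, 0.1, (Q,))        # local fields
        J = rng.normal(0, 0.1, (K, Q, Q))   # pairwise couplings at distances 1..K

        def conditional(seq, i):
            """Distribution of the symbol at position i given its neighbours
            within distance K (pairwise interactions only, no high-order terms)."""
            logp = h.copy()
            for d in range(1, K + 1):
                if i - d >= 0:
                    logp += J[d - 1][seq[i - d], :]
                if i + d < T:
                    logp += J[d - 1][:, seq[i + d]]
            p = np.exp(logp - logp.max())
            return p / p.sum()

        # Gibbs sampling: repeatedly resample each note from its conditional.
        seq = rng.integers(0, Q, size=T)
        for sweep in range(200):
            for i in range(T):
                seq[i] = rng.choice(Q, p=conditional(seq, i))
        print(seq)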

    Deep Learning Techniques for Music Generation -- A Survey

    This paper is a survey and an analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content. We propose a methodology based on five dimensions for our analysis:
    - Objective. What musical content is to be generated (e.g., melody, polyphony, accompaniment or counterpoint)? For what destination and for what use: to be performed by humans (a musical score) or by a machine (an audio file)?
    - Representation. What are the concepts to be manipulated (e.g., waveform, spectrogram, note, chord, meter and beat)? What format is to be used (e.g., MIDI, piano roll or text)? How will the representation be encoded (e.g., scalar, one-hot or many-hot; see the encoding sketch after this abstract)?
    - Architecture. What type(s) of deep neural network is (are) to be used (e.g., feedforward network, recurrent network, autoencoder or generative adversarial network)?
    - Challenge. What are the limitations and open challenges (e.g., variability, interactivity and creativity)?
    - Strategy. How do we model and control the process of generation (e.g., single-step feedforward, iterative feedforward, sampling or input manipulation)?
    For each dimension, we conduct a comparative analysis of various models and techniques and propose a tentative multidimensional typology. This typology is bottom-up, based on the analysis of many existing deep-learning-based systems for music generation selected from the relevant literature. These systems are described and used to exemplify the various choices of objective, representation, architecture, challenge and strategy. The last section includes some discussion and some prospects.
    Comment: 209 pages. This paper is a simplified version of the book: J.-P. Briot, G. Hadjeres and F.-D. Pachet, Deep Learning Techniques for Music Generation, Computational Synthesis and Creative Systems, Springer, 201
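    To make the Representation dimension concrete, here is a hedged sketch of one encoding choice named above: a monophonic melody of MIDI pitches encoded as a one-hot piano-roll matrix (time steps by pitches). The pitch range and one-pitch-per-step assumption are illustrative, not prescribed by the survey.

        import numpy as np

        LOW, HIGH = 48, 84  # illustrative pitch range: C3..C6

        def one_hot_piano_roll(midi_pitches):
            """Encode a monophonic melody (one pitch per time step) as a
            binary matrix of shape (time_steps, pitch_range)."""
            roll = np.zeros((len(midi_pitches), HIGH - LOW), dtype=np.int8)
            for t, pitch in enumerate(midi_pitches):
                roll[t, pitch - LOW] = 1  # exactly one active pitch per step: one-hot
            return roll

        melody = [60, 62, 64, 65, 67]  # C4 D4 E4 F4 G4
        print(one_hot_piano_roll(melody).shape)  # (5, 36)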

    Regularity, Document Generation, and Cyc

    We are interested in the modeling of hierarchical networks, and have developed a model of semantic hierarchies based on REGULARITY, a generalization of inheritance [MIL 88a]. We are also interested in the generation of structured sequential documents from hypertext documents, using the semantics of hypertext links to structure the presentation [MIL 90b]. We acquired a copy of the CYC knowledge base [LEN 90a] in order to: 1) use the semantic network underlying CYC to support text generation, and 2) test the regularity hypothesis. Ironically, the enormous size of CYC forced its designers to adopt implementation optimizations that make it poorly suited to the deep logical exploration required by text generation. Moreover, the study of regularity patterns in CYC led us to generalize the notion of regularity and to formulate a number of hypotheses about the logical structure of the knowledge base.

    An Object-Oriented Representation of Pitch-Classes, Intervals, Scales and Chords

    The MusES system is intended to provide an explicit representation of the musical knowledge involved in the analysis of chord sequences in tonal music. We describe in this paper the first layer of the system, which provides an operational representation of pitch classes and their algebra, as well as standard calculus on scales, intervals and chords. The proposed representation takes enharmonic spelling into account, i.e., it differentiates between equivalent pitch classes (e.g., C# and Db). This first layer is intended to provide a solid foundation for musical symbolic knowledge-based systems; as such, it provides an ontology for describing the basic units of harmony. It may also be used as a pedagogical example for those wishing to apply object-oriented techniques to musical knowledge representation. A document describing the system in full detail is available on request.
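    A minimal Python sketch (not the original Smalltalk-80 code) of the key design point: representing a pitch class by its letter name and accidental, so that enharmonically equivalent spellings such as C# and Db remain distinct objects while still comparing equal in sounding pitch.

        from dataclasses import dataclass

        LETTER_SEMITONE = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}
        ACCIDENTAL = {"": 0, "#": 1, "b": -1, "##": 2, "bb": -2}

        @dataclass(frozen=True)
        class PitchClass:
            letter: str       # C..B
            accidental: str   # "", "#", "b", "##", "bb"

            def semitone(self):
                """Sounding pitch class, 0..11."""
                return (LETTER_SEMITONE[self.letter] + ACCIDENTAL[self.accidental]) % 12

        c_sharp, d_flat = PitchClass("C", "#"), PitchClass("D", "b")
        print(c_sharp == d_flat)                        # False: distinct spellings
        print(c_sharp.semitone() == d_flat.semitone())  # True: same sounding pitch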

    Real-Time Score Notation from Raw MIDI Inputs

    This paper describes tools designed and experiments conducted in the context of MIROR, a European project investigating adaptive systems for early-childhood music education based on the paradigm of reflexive interaction. In MIROR, music notation is used as the trace of both user and system activity, produced from MIDI instruments. The task of displaying such raw MIDI inputs and outputs is difficult, as no a priori information is available concerning the underlying tempo or metrical structure. We describe here a completely automatic processing chain from raw MIDI input to fully fledged music notation: the low-level music description is first converted into a score-level description and then automatically rendered as a graphic score. The whole process operates in real time. The paper describes the various conversion steps and issues, including extensions to support score annotations. The process is validated using about 30,000 musical sequences gathered from MIROR experiments and made available for public use.
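    A hedged toy sketch of one step such a chain must perform (not the MIROR implementation): with no tempo given, estimate a beat unit from inter-onset intervals and snap raw MIDI onset times to the resulting grid before score rendering.

        import numpy as np

        def quantize_onsets(onsets_s):
            """Estimate a grid step as the median inter-onset interval, then
            snap every onset to integer multiples of that step."""
            onsets = np.sort(np.asarray(onsets_s))
            iois = np.diff(onsets)
            step = np.median(iois)  # crude tempo-free beat estimate
            grid_positions = np.round((onsets - onsets[0]) / step).astype(int)
            return grid_positions, step

        # Hypothetical sloppy performance of five roughly evenly spaced notes.
        onsets = [0.00, 0.52, 0.98, 1.51, 2.03]
        positions, step = quantize_onsets(onsets)
        print(positions, round(step, 3))  # [0 1 2 3 4] 0.52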