    Techniques for generative melodies inspired by music cognition

    This article presents a series of algorithmic techniques for melody generation, inspired by models of music cognition. The techniques are designed for interactive composition, and so privilege brevity, simplicity, and flexibility over fidelity to the underlying models. The cognitive models canvassed span gestalt, preference-rule, and statistical-learning perspectives; this is a diverse collection with a common thread—the centrality of “expectations” to music cognition. We operationalize some recurrent themes across this collection as probabilistic descriptions of melodic tendency, codifying them as stochastic melody-generation techniques. The techniques are combined into a concise melody generator, with salient parameters exposed for ready manipulation in real time. These techniques may be especially relevant to algorithmic composers, the live-coding community, and to music psychologists and theorists interested in how computational interpretations of cognitive models “sound” in practice.
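    A minimal sketch of the kind of technique the abstract describes (all names and parameters here are my own illustration, not code from the article): one recurrent theme in expectation-based models—pitch proximity—can be operationalized as a probabilistic melodic tendency by drawing each next scale degree with weights that fall off with interval size.

```python
import random

# Hypothetical sketch: a stochastic melody generator that encodes a
# "pitch proximity" tendency -- smaller intervals from the previous
# note receive larger sampling weights.
def generate_melody(length=16, degrees=range(15), start=7, seed=None):
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        prev = melody[-1]
        candidates = list(degrees)
        # Weight each candidate degree inversely by its distance from
        # the previous note, so stepwise motion dominates leaps.
        weights = [1.0 / (1 + abs(d - prev)) for d in candidates]
        melody.append(rng.choices(candidates, weights=weights, k=1)[0])
    return melody

melody = generate_melody(length=8, seed=1)
```

    Exposing `length`, `degrees`, and the weighting function as live parameters is one plausible way such a generator could be manipulated in real time, in the spirit of the interactive-composition goal stated above.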

    Generation of folk song melodies using Bayes transforms

    The paper introduces the “Bayes transform”, a mathematical procedure for putting data into a hierarchical representation. Applicable to any type of data, the procedure yields interesting results when applied to sequences. In this case, the representation obtained implicitly models the repetition hierarchy of the source. There are then natural applications to music: derivation of Bayes transforms can be a means of determining the repetition hierarchy of note sequences (melodies) in an empirical and domain-general way. The paper investigates the application of this approach to folk song, examining the results that can be obtained by treating such transforms as generative models.

    Deep Learning Techniques for Music Generation -- A Survey

    This paper is a survey and an analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content. We propose a methodology based on five dimensions for our analysis:

    Objective - What musical content is to be generated? Examples are: melody, polyphony, accompaniment or counterpoint. For what destination and for what use? To be performed by a human (in the case of a musical score) or by a machine (in the case of an audio file).

    Representation - What are the concepts to be manipulated? Examples are: waveform, spectrogram, note, chord, meter and beat. What format is to be used? Examples are: MIDI, piano roll or text. How will the representation be encoded? Examples are: scalar, one-hot or many-hot.

    Architecture - What type(s) of deep neural network is (are) to be used? Examples are: feedforward network, recurrent network, autoencoder or generative adversarial networks.

    Challenge - What are the limitations and open challenges? Examples are: variability, interactivity and creativity.

    Strategy - How do we model and control the process of generation? Examples are: single-step feedforward, iterative feedforward, sampling or input manipulation.

    For each dimension, we conduct a comparative analysis of various models and techniques and we propose some tentative multidimensional typology. This typology is bottom-up, based on the analysis of many existing deep-learning based systems for music generation selected from the relevant literature. These systems are described and are used to exemplify the various choices of objective, representation, architecture, challenge and strategy. The last section includes some discussion and some prospects.

    Comment: 209 pages. This paper is a simplified version of the book: J.-P. Briot, G. Hadjeres and F.-D. Pachet, Deep Learning Techniques for Music Generation, Computational Synthesis and Creative Systems, Springer, 201
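    A toy illustration of one encoding choice under the Representation dimension (my own example, not code from the survey): a short melody over a fixed pitch vocabulary encoded as one-hot vectors.

```python
# Illustrative sketch: one-hot encoding of a melody. Each pitch becomes a
# vector of zeros with a single 1 at its index in the vocabulary.
PITCHES = ["C4", "D4", "E4", "F4", "G4", "A4", "B4"]

def one_hot(pitch):
    """Return a length-7 vector with a 1 at the pitch's vocabulary index."""
    vec = [0] * len(PITCHES)
    vec[PITCHES.index(pitch)] = 1
    return vec

melody = ["C4", "E4", "G4", "E4"]
encoded = [one_hot(p) for p in melody]  # four vectors of length 7
```

    A many-hot encoding would instead allow several 1s per vector (e.g. for a chord), and a piano-roll representation stacks such vectors over time steps.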

    AI Methods in Algorithmic Composition: A Comprehensive Survey

    Algorithmic composition is the partial or total automation of the process of music composition by using computers. Since the 1950s, different computational techniques related to Artificial Intelligence have been used for algorithmic composition, including grammatical representations, probabilistic methods, neural networks, symbolic rule-based systems, constraint programming and evolutionary algorithms. This survey aims to be a comprehensive account of research on algorithmic composition, presenting a thorough view of the field for researchers in Artificial Intelligence. This study was partially supported by a grant for the MELOMICS project (IPT-300000-2010-010) from the Spanish Ministerio de Ciencia e Innovación, and a grant for the CAUCE project (TSI-090302-2011-8) from the Spanish Ministerio de Industria, Turismo y Comercio. The first author was supported by a grant for the GENEX project (P09-TIC-5123) from the Consejería de Innovación y Ciencia de Andalucía.

    A computational framework for aesthetical navigation in musical search space

    Paper presented at the 3rd AISB symposium on computational creativity, AISB 2016, 4-6th April, Sheffield. Abstract. This article addresses aspects of an ongoing project in the generation of artificial Persian(-like) music. Liquid Persian Music (LPM) is a cellular-automata-based audio generator. In this paper, LPM is discussed from the viewpoint of the future potential of algorithmic composition and creativity. Liquid Persian Music is a creative tool enabling exploration of emergent audio through new dimensions of music composition. Various configurations of the system produce different voices, which resemble musical motives in many respects. Aesthetic measurements are determined by Zipf’s law in an evolutionary environment. Arranging these voices together to produce a musical corpus can be considered a search problem in the space of musical possibilities formed by LPM outputs. On this account, the issues involved in defining the search space for LPM are studied throughout this paper.
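    One common way Zipf's law is used as an aesthetic measure (a sketch of the general idea only, not the LPM implementation): fit the slope of the rank-frequency distribution of musical events on log-log axes, where a slope near -1 indicates a Zipfian distribution.

```python
import math
from collections import Counter

# Hypothetical sketch: least-squares slope of log(frequency) vs. log(rank).
# Assumes the input contains at least two distinct event types.
def zipf_slope(events):
    freqs = sorted(Counter(events).values(), reverse=True)
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var  # near -1 for Zipfian data
```

    In an evolutionary setting, a fitness function could then reward candidate voices whose slope lies close to -1, steering the search through the space of outputs.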

    Of Flesh and Steel: Computational Creativity in Music and the Body Issue

    Could machines ever take our place in the creation of art, and particularly music? The outstanding results of some well-known AIs (e.g. EMI, Flow Machines) might make us believe that this is the case. However, despite this evidence, it seems that machines present some intrinsic limits in both creative and non-creative contexts (already highlighted by John Searle and the debate around mechanism). The arguments of this paper are centred on this very belief: we are convinced that utopian claims regarding all-round machine intelligence are not plausible, and that our attention should be directed towards more relevant issues in the field of computational creativity. In particular, we focus on what we call the “body issue”, i.e. the role of the body in the experience and creation of music, which we consider problematic for the idea of a truly creative machine (even under weaker renditions of artificial intelligence). Our argument is based on contemporary findings in neuroscience (especially on embodied cognition) and on the theories of Maurice Merleau-Ponty and Roland Barthes.