
    AI Methods in Algorithmic Composition: A Comprehensive Survey

    Algorithmic composition is the partial or total automation of the process of music composition by using computers. Since the 1950s, different computational techniques related to Artificial Intelligence have been used for algorithmic composition, including grammatical representations, probabilistic methods, neural networks, symbolic rule-based systems, constraint programming and evolutionary algorithms. This survey aims to be a comprehensive account of research on algorithmic composition, presenting a thorough view of the field for researchers in Artificial Intelligence. This study was partially supported by a grant for the MELOMICS project (IPT-300000-2010-010) from the Spanish Ministerio de Ciencia e Innovación, and a grant for the CAUCE project (TSI-090302-2011-8) from the Spanish Ministerio de Industria, Turismo y Comercio. The first author was supported by a grant for the GENEX project (P09-TIC-5123) from the Consejería de Innovación y Ciencia de Andalucía.
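Among the probabilistic methods the survey covers, a first-order Markov chain over pitches is the simplest to illustrate. The sketch below is a toy example under invented data, not any specific surveyed system; real systems train on large symbolic corpora.

```python
import random

def train_markov(melody):
    """Count pitch-to-pitch transitions in a list of MIDI note numbers."""
    table = {}
    for a, b in zip(melody, melody[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, rng):
    """Random-walk the transition table to produce a new melody."""
    out = [start]
    for _ in range(length - 1):
        choices = table.get(out[-1])
        if not choices:          # dead end: restart from the seed pitch
            choices = [start]
        out.append(rng.choice(choices))
    return out

corpus = [60, 62, 64, 62, 60, 64, 65, 64, 62, 60]   # C-major fragment
table = train_markov(corpus)
melody = generate(table, 60, 8, random.Random(0))
print(melody)
```

Sampling transitions in proportion to their counts is what makes the output statistically resemble the training melody while still varying between runs.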

    A Functional Taxonomy of Music Generation Systems

    Digital advances have transformed the face of automatic music generation since its beginnings at the dawn of computing. Despite the many breakthroughs, issues such as the musical tasks targeted by different machines and the degree to which they succeed remain open questions. We present a functional taxonomy for music generation systems with reference to existing systems. The taxonomy organizes systems according to the purposes for which they were designed. It also reveals the inter-relatedness amongst the systems. This design-centered approach contrasts with predominant methods-based surveys and facilitates the identification of grand challenges to set the stage for new breakthroughs.

    Music as Complex Emergent Behaviour: An Approach to Interactive Music Systems

    This thesis suggests a new model of human-machine interaction in the domain of non-idiomatic musical improvisation. Musical results are viewed as emergent phenomena issuing from complex internal system behaviour in relation to input from a single human performer. We investigate the prospect of rewarding interaction whereby a system modifies itself in coherent though non-trivial ways as a result of exposure to a human interactor. In addition, we explore whether such interactions can be sustained over extended time spans. These objectives translate into four criteria for evaluation: maximisation of human influence; blending of human and machine influence in the creation of machine responses; the maintenance of independent machine motivations in order to support machine autonomy; and, finally, a combination of global emergent behaviour and variable behaviour in the long run. Our implementation is heavily inspired by ideas and engineering approaches from the discipline of Artificial Life. However, we also address a collection of representative existing systems from the field of interactive composing, some of which are implemented using techniques of conventional Artificial Intelligence. All systems serve as a contextual background and comparative framework helping the assessment of the work reported here. This thesis advocates a networked model incorporating functionality for listening, playing and the synthesis of machine motivations.
The latter incorporate dynamic relationships instructing the machine either to integrate with a musical context suggested by the human performer or, in contrast, to perform as an individual musical character irrespective of context. Techniques of evolutionary computing are used to optimise system components over time. Evolution proceeds based on an implicit fitness measure: the melodic distance between consecutive musical statements made by human and machine, in relation to the currently prevailing machine motivation. A substantial number of systematic experiments reveal complex emergent behaviour inside and between the various system modules. Music scores document how global system behaviour is rendered into actual musical output. The concluding chapter offers evidence of how the research criteria were accomplished and proposes recommendations for future research.
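The implicit fitness measure described above can be sketched in a few lines: score a machine phrase by its melodic distance to the preceding human phrase, and let the current motivation decide whether closeness or contrast is rewarded. The distance metric and motivation labels here are illustrative assumptions, not the thesis's actual formulas.

```python
def interval_profile(phrase):
    """Successive pitch intervals of a phrase given as MIDI note numbers."""
    return [b - a for a, b in zip(phrase, phrase[1:])]

def melodic_distance(p1, p2):
    """Mean absolute difference between interval profiles (zero-padded)."""
    i1, i2 = interval_profile(p1), interval_profile(p2)
    n = max(len(i1), len(i2), 1)
    i1 += [0] * (n - len(i1))
    i2 += [0] * (n - len(i2))
    return sum(abs(a - b) for a, b in zip(i1, i2)) / n

def fitness(human, machine, motivation):
    """'integrate' rewards similarity; 'contrast' rewards distance."""
    d = melodic_distance(human, machine)
    return -d if motivation == "integrate" else d

human = [60, 62, 64, 65]
close = [62, 64, 66, 67]      # same contour, transposed up a tone
far   = [60, 72, 59, 71]      # leaping, dissimilar contour
print(fitness(human, close, "integrate") > fitness(human, far, "integrate"))
```

Comparing interval profiles rather than raw pitches makes the measure transposition-invariant, which matches the intuition that a transposed restatement is melodically "close".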

    Computational Creativity and Music Generation Systems: An Introduction to the State of the Art

    Computational Creativity is a multidisciplinary field that tries to obtain creative behaviors from computers. One of its most prolific subfields is that of Music Generation (also called Algorithmic Composition or Musical Metacreation), which uses computational means to compose music. Due to the multidisciplinary nature of this research field, it is sometimes hard to define precise goals and to keep track of which problems can be considered solved by state-of-the-art systems and which instead need further development. With this survey, we try to give a complete introduction to those who wish to explore Computational Creativity and Music Generation. To do so, we first give a picture of the research on the definition and the evaluation of creativity, both human and computational, needed to understand how computational means can be used to obtain creative behaviors, and of its importance within Artificial Intelligence studies. We then review the state of the art of Music Generation Systems, citing examples for all the main approaches to music generation and listing the open challenges that were identified by previous reviews on the subject. For each of these challenges, we cite works that have proposed solutions, describing what still needs to be done and some possible directions for further research.

    An Approach to Machine Development of Musical Ontogeny

    This Thesis pursues three main objectives: (i) to use computational modelling to explore how music is perceived, cognitively processed and created by human beings; (ii) to explore interactive musical systems as a method to model and achieve the transmission of musical influence in artificial worlds and between humans and machines; and (iii) to experiment with artificial and alternative developmental musical routes in order to observe the evolution of musical styles. In order to achieve these objectives, this Thesis introduces a new paradigm for the design of computer interactive musical systems called the Ontomemetical Model of Music Evolution (OMME), which includes the fields of musical ontogenesis and memetics. OMME-based systems are designed to artificially explore the evolution of music centred on human perceptive and cognitive faculties. The potential of the OMME is illustrated with two interactive musical systems, the Rhythmic Meme Generator (RGeme) and the Interactive Musical Environments (iMe), which have been tested in a series of laboratory experiments and live performances. The introduction to the OMME is preceded by an extensive and critical overview of state-of-the-art computer models that explore musical creativity and interactivity, in addition to a systematic exposition of the major issues involved in the design and implementation of these systems. This Thesis also proposes innovative solutions for (i) the representation of musical streams based on perceptive features, (ii) music segmentation, (iii) a memory-based music model, (iv) the measurement of distance between musical styles, and (v) an improvisation-based creative model.
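Music segmentation, one of the problems listed above, is often approached with perceptual heuristics: a new segment starts after a large pitch leap or a long note. The sketch below uses invented thresholds and is only a stand-in for the thesis's own, more elaborate, method.

```python
def segment(notes, leap=5, long_dur=1.5):
    """Split a melody of (midi_pitch, duration) pairs at perceptual boundaries.

    A boundary is placed after a note when the following pitch leap is
    at least `leap` semitones, or when the note itself is `long_dur`
    beats or longer.
    """
    segments, current = [], [notes[0]]
    for prev, cur in zip(notes, notes[1:]):
        if abs(cur[0] - prev[0]) >= leap or prev[1] >= long_dur:
            segments.append(current)
            current = []
        current.append(cur)
    segments.append(current)
    return segments

melody = [(60, 0.5), (62, 0.5), (64, 2.0), (65, 0.5), (72, 0.5), (71, 0.5)]
print([len(s) for s in segment(melody)])   # segment lengths
```

Here the long E (duration 2.0) closes the first segment, and the leap from F to C closes the second, yielding segments of lengths 3, 1 and 2.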

    Towards Machine Musicians Who Have Listened to More Music Than Us: Audio Database-led Algorithmic Criticism for Automatic Composition and Live Concert Systems

    Databases of audio can form the basis for new algorithmic critic systems, applying techniques from the growing field of music information retrieval to meta-creation in algorithmic composition and interactive music systems. In this article, case studies are described in which critics are derived from larger audio corpora. In the first scenario, the target music is electronic art music, and two corpora are used to train model parameters; they are then compared with each other and against further controls in assessing novel electronic music composed by a separate program. In the second scenario, a “real-world” application is described, where a “jury” of three deliberately and individually biased algorithmic music critics judged the winner of a dubstep remix competition. The third scenario is a live tool for automated in-concert criticism, based on the limited situation of comparing an improvising pianist's playing to that of Keith Jarrett; the technology overlaps that described in the other systems, though now deployed in real time. Alongside description and analysis of these systems, the wider possibilities and implications are discussed.
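The corpus-derived critic idea can be sketched minimally: summarise a corpus by a feature vector and score a candidate piece by its distance to the corpus centroid. The feature here (a pitch-class histogram over symbolic pieces) and the data are illustrative assumptions; the article's systems extract audio features from recordings.

```python
def pitch_class_histogram(notes):
    """Normalised 12-bin histogram of MIDI pitch classes."""
    hist = [0.0] * 12
    for n in notes:
        hist[n % 12] += 1
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def train_critic(corpus):
    """Average the feature vectors of all corpus pieces (the 'centroid')."""
    hists = [pitch_class_histogram(p) for p in corpus]
    return [sum(col) / len(hists) for col in zip(*hists)]

def criticise(centroid, piece):
    """Lower score = closer to the corpus the critic was trained on."""
    h = pitch_class_histogram(piece)
    return sum(abs(a - b) for a, b in zip(centroid, h))

c_major = [[60, 62, 64, 65, 67], [64, 65, 67, 69, 71]]
critic = train_critic(c_major)
in_style  = criticise(critic, [60, 64, 67, 71])   # diatonic candidate
off_style = criticise(critic, [61, 63, 66, 68])   # chromatic candidate
print(in_style < off_style)
```

A deliberately biased "jury member", as in the second scenario, could simply be a critic trained on a different corpus, so that each juror's centroid pulls judgments toward its own training material.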

    Interfacing Jazz: A Study in Computer-Mediated Jazz Music Creation And Performance

    This dissertation focuses on the study and development of computer-mediated interfaces and algorithms for music performance and creation. It is mainly centered on traditional jazz accompaniment and explores meta-control over musical events to potentiate the rich experience of playing jazz by musicians and non-musicians alike, both individually and collectively. It aims to complement existing research on the automatic generation of jazz music and on new interfaces for musical expression by presenting a group of specially designed algorithms and control interfaces that implement intelligent, musically informed processes to automatically produce sophisticated and stylistically correct musical events. These algorithms and control interfaces are designed to take simplified and intuitive input from the user, and to coherently manage group playing by establishing integrated control over global common parameters. Using these algorithms, two proposals for different applications are presented, in order to illustrate the benefits and potential of this meta-control approach to extend existing paradigms for musical applications, as well as to create new ones.
These proposals focus on two main perspectives where computer-mediated music can benefit from this approach, namely musical performance and creation, both of which can also be viewed from an educational perspective. A core framework, implemented in the Max programming environment, integrates all the functionalities of the instrument algorithms and control strategies, as well as global control, synchronization and communication between all the components. This platform acts as a base from which different applications can be created. For this dissertation, two main application concepts were developed. The first, PocketBand, has a single-user, one-man-band approach, in which a single interface allows one user to play up to three instruments. This prototype application, for a multi-touch tablet, was the test bed for several experiments with the user interface and playability issues that helped define and improve the mediated-interface concept and the instrument algorithms. The second prototype aims at creating a collective experience. It is a multi-user installation for a multi-touch table, called MyJazzBand, that allows up to four users to play together as members of a virtual jazz band. Both applications allow the users to experience and effectively participate as jazz band musicians, whether they are musically trained or not. The applications can be used for educational purposes, whether as a real-time accompaniment system for any jazz instrument practitioner or singer, as a source of information for harmonic procedures, or as a practical tool for creating quick arrangement drafts or music lesson contents. I will also demonstrate that this approach reflects a growing trend in commercial music software, which has already begun to explore and implement mediated interfaces and intelligent music algorithms.
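The core of "musically informed" generation from simplified input can be illustrated by mapping a chord symbol to a playable voicing. The symbol set and voicing rules below are toy assumptions for demonstration, not the actual PocketBand or MyJazzBand algorithms.

```python
# Chord quality -> semitone steps above the root.
CHORD_FORMULAS = {
    "maj7": [0, 4, 7, 11],
    "m7":   [0, 3, 7, 10],
    "7":    [0, 4, 7, 10],
}

NOTE_TO_PC = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def voicing(symbol, octave=4):
    """Return MIDI notes for a symbol like 'Cmaj7', 'Dm7' or 'G7'."""
    root_name, quality = symbol[0], symbol[1:]
    root = 12 * (octave + 1) + NOTE_TO_PC[root_name]   # C4 = MIDI 60
    return [root + step for step in CHORD_FORMULAS[quality]]

# A ii-V-I in C major, as a non-musician might request it:
progression = ["Dm7", "G7", "Cmaj7"]
print([voicing(s) for s in progression])
```

A user taps a chord name; the system supplies the stylistically correct notes. This is the "meta-control" idea in miniature: the input names a musical intention, and the algorithm fills in the note-level detail.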

    RL-Duet: Online Music Accompaniment Generation Using Deep Reinforcement Learning

    This paper presents a deep reinforcement learning algorithm for online accompaniment generation, with potential for real-time interactive human-machine duet improvisation. Unlike offline music generation and harmonization, online music accompaniment requires the algorithm to respond to human input and generate the machine counterpart in sequential order. We cast this as a reinforcement learning problem, in which the generation agent learns a policy to generate a musical note (action) based on the previously generated context (state). The key to this algorithm is a well-functioning reward model. Instead of defining it using music composition rules, we learn this model from monophonic and polyphonic training data. The model considers the compatibility of the machine-generated note with both the machine-generated context and the human-generated context. Experiments show that the algorithm is able to respond to the human part and generate a melodic, harmonic and diverse machine part. Subjective evaluations of preferences show that the proposed algorithm generates music of higher quality than the baseline method.
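The state/action/reward framing above can be sketched with a stand-in reward model. RL-Duet *learns* its reward model from data; the hand-written consonance reward and epsilon-greedy selection below are simplifying assumptions, not the paper's method.

```python
import random

# Interval classes (semitones mod 12) treated as consonant in this toy reward.
CONSONANT = {0, 3, 4, 7, 8, 9}

def reward(human_note, machine_note):
    """Stand-in reward model: +1 for a consonant interval, -1 otherwise."""
    return 1.0 if abs(human_note - machine_note) % 12 in CONSONANT else -1.0

def choose_note(human_note, candidates, epsilon, rng):
    """Epsilon-greedy action selection over candidate next notes (the policy)."""
    if rng.random() < epsilon:
        return rng.choice(candidates)           # explore
    return max(candidates, key=lambda n: reward(human_note, n))  # exploit

rng = random.Random(0)
human_part = [60, 62, 64, 65]                   # the human's notes arrive one by one
machine_part = [choose_note(h, list(range(55, 72)), 0.0, rng) for h in human_part]
print(machine_part)
```

The sequential loop is the point: each machine note is chosen after hearing the latest human note, which is exactly what distinguishes online accompaniment from offline harmonization.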

    Creative Support Musical Composition System: a study on Multiple Viewpoints Representations in Variable Markov Oracle

    The mid-20th century witnessed the emergence of an area of study focused on the automatic generation of musical content by computational means. Early examples focus on offline processing of musical data; recently, the community has moved towards interactive online musical systems. Furthermore, a recent trend stresses the importance of assistive technology, which promotes a user-in-the-loop approach by offering multiple suggestions to a given creative problem. In this context, my research aims to foster new software tools for creative support systems, where algorithms can collaboratively participate in the composition flow. In greater detail, I seek a tool that learns from variable-length musical data to provide real-time feedback during the composition process. In light of the multidimensional and hierarchical structure of music, I aim to study representations which abstract its temporal patterns, to foster the generation of multiple ranked solutions to a given musical context. Ultimately, the subjective nature of the choice is left to the user, to whom a limited number of 'optimal' solutions are provided. A symbolic music representation manifested as Multiple Viewpoint Models, combined with the Variable Markov Oracle (VMO) automaton, is used to test the optimal interaction between the multi-dimensionality of the representation and the optimality of the VMO model in providing style-coherent, novel and diverse solutions. To evaluate the system, an experiment was conducted to validate the tool in an expert-based scenario with composition students, using the Creativity Support Index test.