
    Deep Learning Techniques for Music Generation -- A Survey

    This paper is a survey and analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content. We propose a methodology based on five dimensions for our analysis:
    Objective - What musical content is to be generated? Examples: melody, polyphony, accompaniment or counterpoint. For what destination and for what use? To be performed by a human (in the case of a musical score) or by a machine (in the case of an audio file).
    Representation - What are the concepts to be manipulated? Examples: waveform, spectrogram, note, chord, meter and beat. What format is to be used? Examples: MIDI, piano roll or text. How will the representation be encoded? Examples: scalar, one-hot or many-hot.
    Architecture - What type(s) of deep neural network is (are) to be used? Examples: feedforward network, recurrent network, autoencoder or generative adversarial network.
    Challenge - What are the limitations and open challenges? Examples: variability, interactivity and creativity.
    Strategy - How do we model and control the process of generation? Examples: single-step feedforward, iterative feedforward, sampling or input manipulation.
    For each dimension, we conduct a comparative analysis of various models and techniques and propose a tentative multidimensional typology. This typology is bottom-up, based on the analysis of many existing deep-learning-based systems for music generation selected from the relevant literature. These systems are described and used to exemplify the various choices of objective, representation, architecture, challenge and strategy. The last section includes some discussion and prospects.
    Comment: 209 pages. This paper is a simplified version of the book: J.-P. Briot, G. Hadjeres and F.-D. Pachet, Deep Learning Techniques for Music Generation, Computational Synthesis and Creative Systems, Springer, 201
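    The Representation dimension above distinguishes one-hot encoding (a single melody note per time step) from many-hot encoding (several simultaneous notes, i.e. a chord), with a piano roll as a time-ordered sequence of such vectors. A minimal sketch of these encodings, using an illustrative one-octave MIDI pitch range not taken from the paper:

```python
# Hedged sketch of one-hot vs. many-hot encodings of symbolic music.
# The pitch range and example notes are assumptions for illustration.

PITCH_RANGE = list(range(60, 72))  # one octave, MIDI C4..B4 (assumption)

def one_hot(pitch):
    """Encode a single melody pitch at one time step as a one-hot vector."""
    return [1 if p == pitch else 0 for p in PITCH_RANGE]

def many_hot(pitches):
    """Encode a chord (simultaneous pitches) at one time step as a many-hot vector."""
    return [1 if p in pitches else 0 for p in PITCH_RANGE]

melody_step = one_hot(64)             # E4: exactly one active position
chord_step = many_hot({60, 64, 67})   # C major triad: three active positions
# A piano roll is simply a time-ordered list of such vectors:
piano_roll = [one_hot(60), one_hot(62), one_hot(64)]
```

    A scalar encoding, by contrast, would store the pitch number 64 directly rather than spreading it across a vector.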

    Towards a style-specific basis for computational beat tracking

    Outlined in this paper are a number of sources of evidence, from psychological, ethnomusicological and engineering grounds, suggesting that current approaches to computational beat tracking are incomplete. It is contended that the degree to which cultural knowledge, that is, the specifics of style and associated learnt representational schema, underlies the human faculty of beat tracking has been severely underestimated. Difficulties in building general beat tracking solutions, which can provide both period and phase locking across a large corpus of styles, are highlighted. It is probable that no universal beat tracking model exists which does not utilise a switching model to recognise style and context prior to application.
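    The "switching model" the paper argues for amounts to a two-stage pipeline: first recognise style and context, then dispatch to a style-specific beat tracker. A minimal sketch of that dispatch structure; the style names, tracker functions, and period/phase values are invented placeholders, not from the paper:

```python
# Hedged sketch of a switching beat-tracking architecture: a style
# classifier selects among style-specific trackers. All names and
# numbers below are illustrative assumptions.

def track_beats_rock(audio):
    # Placeholder style-specific tracker: returns period (s) and phase (s).
    return {"period": 0.5, "phase": 0.0}

def track_beats_jazz(audio):
    return {"period": 0.6, "phase": 0.1}

STYLE_TRACKERS = {"rock": track_beats_rock, "jazz": track_beats_jazz}

def classify_style(audio):
    # In a real system this would be a learnt style classifier; here we
    # fake it by reading a metadata tag, defaulting to "rock".
    return audio.get("style", "rock")

def switching_beat_tracker(audio):
    """Recognise style first, then apply the matching tracker."""
    style = classify_style(audio)
    return STYLE_TRACKERS[style](audio)

result = switching_beat_tracker({"style": "jazz"})
```

    The point of the structure is that period and phase estimation only begins after the style decision has already constrained the model.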

    A Survey of Music Generation in the Context of Interaction

    In recent years, machine learning, and in particular generative adversarial networks (GANs) and attention-based neural networks (transformers), has been successfully used to compose and generate music, both melodies and polyphonic pieces. Current research focuses foremost on style replication (e.g. generating a Bach-style chorale) or style transfer (e.g. classical to jazz) based on large amounts of recorded or transcribed music, which in turn also allows for fairly straightforward "performance" evaluation. However, most of these models are not suitable for human-machine co-creation through live interaction, nor is it clear how such models and the resulting creations would be evaluated. This article presents a thorough review of music representation, feature analysis, heuristic algorithms, statistical and parametric modelling, and human and automatic evaluation measures, along with a discussion of which approaches and models seem most suitable for live interaction.

    Cooperation Between Top-Down and Low-Level Markov Chains for Generating Rock Drumming

    Without heavy modification, the Markov chain is insufficient to handle the task of generating rock drum parts. This paper proposes a system for generating rock drumming that involves the cooperation between a top-down Markov chain that determines the structure of created drum parts and a low-level Markov chain that determines their contents. The goal of this system is to generate verse- or chorus-length drum parts that sound reminiscent of the drumming on its input pieces.
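    The two-level idea described above can be sketched as follows: a top-down chain walks over bar-level pattern labels, and for each bar a low-level chain generates the actual drum hits. The transition tables, bar types, and drum vocabulary here are invented for illustration; they are not the paper's trained models:

```python
import random

random.seed(0)  # deterministic for the example

# Hedged sketch of a two-level Markov drum generator. All transition
# probabilities below are made-up assumptions, not learnt from data.

structure_chain = {            # top-down chain: bar-type transitions
    "basic": {"basic": 0.6, "fill": 0.4},
    "fill":  {"basic": 0.9, "fill": 0.1},
}
content_chain = {              # low-level chain: hit transitions per bar type
    "basic": {"kick": {"snare": 1.0}, "snare": {"kick": 1.0}},
    "fill":  {"kick": {"tom": 1.0}, "tom": {"snare": 1.0},
              "snare": {"kick": 1.0}},
}

def sample(dist):
    """Draw one state from a {state: probability} distribution."""
    r, acc = random.random(), 0.0
    for state, p in dist.items():
        acc += p
        if r < acc:
            return state
    return state  # guard against floating-point rounding

def generate(n_bars, hits_per_bar=4):
    """Top-down chain picks each bar's type; low-level chain fills it."""
    bars, bar_type = [], "basic"
    for _ in range(n_bars):
        bar_type = sample(structure_chain[bar_type])
        hit, bar = "kick", ["kick"]          # each bar starts on the kick
        for _ in range(hits_per_bar - 1):
            hit = sample(content_chain[bar_type].get(hit, {"kick": 1.0}))
            bar.append(hit)
        bars.append((bar_type, bar))
    return bars

drum_part = generate(4)  # four bars, e.g. a short verse section
```

    The separation mirrors the paper's point: the structure chain decides where fills fall across a verse- or chorus-length part, while the content chain only ever decides the next hit within a bar.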

    The Techno-Soma-Aesthetics of a Dance for the iPhone

    UID/PAM/00417/2019. In the course of intensive research, a corpus of artworks that instantiate dance performance in cyberspace has been inspected in order to understand how expert practitioners used new technologies for production, as well as the new means of public dissemination that they enabled. This paper is dedicated to Soi Moi, which was made for the iPhone in 2009 by n+n corsino using motion capture, synthesized environments and multi-sensorial human-computer interaction. I bear the commitment of an expert spectator who demonstrates the value of this research-practice to understand and inform creative process, technological development, aesthetic experience and scholarly debate. This enquiry pursued a constructivist analysis of components and attributes that revealed the ‘remediation’ of disciplinary traditions. Intersecting close examination with a contextualizing project and interpretative layers generated a productive dialogue with theoretical perspectives on the arts, media and cyberculture of the 21st century. It shall be evident why securing a place for this artwork in the history of new media art and performance is a relevant contribution to knowledge. Despite solid proof that performance experts have provided computer technology and the information society with pioneering discourse, their practices occupy a marginal position in the new media art sector and market. Retrieving research results is paramount. Just as ephemeral live dance and performance artworks have succumbed to time, the spectre of redundancy hovers over Soi Moi, because the state-of-the-art technology it uses is already outdated.