    RoboJam: A Musical Mixture Density Network for Collaborative Touchscreen Interaction

    RoboJam is a machine-learning system for generating music that assists users of a touchscreen music app by performing responses to their short improvisations. This system uses a recurrent artificial neural network to generate sequences of touchscreen interactions and absolute timings, rather than high-level musical notes. To accomplish this, RoboJam's network uses a mixture density layer to predict appropriate touch interaction locations in space and time. In this paper, we describe the design and implementation of RoboJam's network and how it has been integrated into a touchscreen music app. A preliminary evaluation analyses the system in terms of training, musical generation, and user interaction.
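
    The abstract does not include code, but the core idea, an RNN whose output layer parameterises a mixture of Gaussians over touch position and timing, can be sketched. Below is a minimal PyTorch sketch of such a mixture density output layer; the class name, layer sizes and mixture count are illustrative assumptions, not RoboJam's actual implementation.

        import torch
        import torch.nn as nn

        class TouchMDN(nn.Module):
            """Mixture density output layer: K diagonal Gaussians over a
            touch event (x, y, dt), conditioned on an RNN hidden state.
            Sizes are illustrative, not RoboJam's."""
            def __init__(self, hidden_size=64, n_mixtures=5, event_dim=3):
                super().__init__()
                self.k, self.d = n_mixtures, event_dim
                self.pi = nn.Linear(hidden_size, n_mixtures)                     # mixture weights
                self.mu = nn.Linear(hidden_size, n_mixtures * event_dim)        # component means
                self.log_sigma = nn.Linear(hidden_size, n_mixtures * event_dim)  # log scales

            def forward(self, h):
                log_pi = torch.log_softmax(self.pi(h), dim=-1)
                mu = self.mu(h).view(-1, self.k, self.d)
                sigma = torch.exp(self.log_sigma(h)).view(-1, self.k, self.d)  # keep scales positive
                return log_pi, mu, sigma

        def mdn_loss(log_pi, mu, sigma, target):
            """Negative log-likelihood of observed touch events under the mixture."""
            comp = torch.distributions.Normal(mu, sigma)
            log_prob = comp.log_prob(target.unsqueeze(1)).sum(-1)  # (batch, K)
            return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

    At generation time one would sample a mixture component from log_pi, then a touch event from that Gaussian, and feed the result back into the RNN to produce the next prediction.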

    Neural Translation of Musical Style

    Music is an expressive form of communication often used to convey emotion in scenarios where "words are not enough". Part of this information lies in the musical composition, where a well-defined language exists. However, a significant amount of information is added during a performance as the musician interprets the composition. The performer injects expressiveness into the written score through variations of different musical properties such as dynamics and tempo. In this paper, we describe a model that can learn to perform sheet music. Our research concludes that the generated performances are indistinguishable from a human performance, thereby passing a test in the spirit of a "musical Turing test".
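
    The abstract leaves the architecture unspecified. As a rough illustration of the task only, the sketch below maps per-note score features to expressive parameters (MIDI velocity and a local tempo factor) with a bidirectional GRU; all names and sizes are assumptions, not the paper's model.

        import torch
        import torch.nn as nn

        class PerformanceRenderer(nn.Module):
            """Toy score-to-performance model: per-note score features in
            (pitch, duration, beat position, ...), expressive parameters out."""
            def __init__(self, n_score_features=8, hidden=128):
                super().__init__()
                self.rnn = nn.GRU(n_score_features, hidden,
                                  batch_first=True, bidirectional=True)
                self.head = nn.Linear(2 * hidden, 2)  # [velocity, tempo factor] per note

            def forward(self, score):                       # (batch, notes, features)
                h, _ = self.rnn(score)
                out = self.head(h)
                velocity = torch.sigmoid(out[..., 0]) * 127.0  # MIDI velocity range
                tempo = torch.exp(out[..., 1])                 # positive tempo scaling
                return velocity, tempo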

    Deep Learning Techniques for Music Generation -- A Survey

    This paper is a survey and analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content. We propose a methodology based on five dimensions for our analysis:
    - Objective: What musical content is to be generated? Examples: melody, polyphony, accompaniment or counterpoint. For what destination and for what use? To be performed by a human (in the case of a musical score) or by a machine (in the case of an audio file).
    - Representation: What are the concepts to be manipulated? Examples: waveform, spectrogram, note, chord, meter and beat. What format is to be used? Examples: MIDI, piano roll or text. How will the representation be encoded? Examples: scalar, one-hot or many-hot.
    - Architecture: What type(s) of deep neural network is (are) to be used? Examples: feedforward network, recurrent network, autoencoder or generative adversarial network.
    - Challenge: What are the limitations and open challenges? Examples: variability, interactivity and creativity.
    - Strategy: How do we model and control the process of generation? Examples: single-step feedforward, iterative feedforward, sampling or input manipulation.
    For each dimension, we conduct a comparative analysis of various models and techniques, and we propose a tentative multidimensional typology. This typology is bottom-up, based on the analysis of many existing deep-learning-based systems for music generation selected from the relevant literature. These systems are described and used to exemplify the various choices of objective, representation, architecture, challenge and strategy. The last section includes some discussion and prospects.
    Comment: 209 pages. This paper is a simplified version of the book: J.-P. Briot, G. Hadjeres and F.-D. Pachet, Deep Learning Techniques for Music Generation, Computational Synthesis and Creative Systems, Springer, 201
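
    As a reading aid (not part of the survey itself), the five-dimension typology maps naturally onto a record type. A hypothetical sketch in Python, with example values that are illustrative rather than drawn from the survey's tables:

        from dataclasses import dataclass

        @dataclass
        class MusicGenSystem:
            """One surveyed system described along the five dimensions."""
            name: str
            objective: str        # e.g. "melody", "polyphony", "counterpoint"
            representation: str   # e.g. "piano roll, one-hot encoded"
            architecture: str     # e.g. "recurrent network", "autoencoder", "GAN"
            challenge: str        # e.g. "variability", "interactivity"
            strategy: str         # e.g. "iterative feedforward", "sampling"

        example = MusicGenSystem(
            name="HypotheticalRNN",      # placeholder, not a surveyed system
            objective="melody",
            representation="MIDI, one-hot encoded",
            architecture="recurrent network",
            challenge="variability",
            strategy="iterative feedforward",
        )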

    Music composition based on Artificial Neural Networks

    In recent years, research on Artificial Intelligence has ushered in a new phase of technological evolution. Autonomous systems such as voice assistants or self-driving cars are a present reality, as the first commercial systems have already been launched to the market. New applications emerge each year as huge amounts of generated data and computational capabilities make the development of accurate expert systems plausible. This evolution is optimizing processes in many core fields such as agriculture, telecommunications or medicine. A highly technological field such as music is also beginning to notice changes, as recommendation engines, synthesizers and music generation are attractive fields of research with some preliminary results. With this project, we intend to help ease the process of music creation, making it more accessible to people. The subject of this project is the design, development and experimentation of an AI engine to generate music. A simple but pleasant-to-hear artificially generated melody could serve as a base for people to compose more complex pieces of music. At the same time, the project sheds some light on the nuts and bolts of novel techniques for music composition, such as the Long Short-Term Memory network selected. The system processes MIDI files and extracts relevant information for training the network; the extracted data has been selected by analyzing the main aspects used in the field of Music Information Retrieval. An online listening test taken by subjects of different musical backgrounds was designed to measure the quality of the artificial composer. The final results show that pleasant-to-hear melodies have been composed.
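
    The pipeline the abstract describes (parse MIDI, extract note features, train an LSTM next-note predictor) can be sketched as follows. This is a minimal sketch assuming pretty_midi and PyTorch, not the project's code, and it keeps only pitch rather than the fuller Music Information Retrieval feature set the project describes.

        import pretty_midi
        import torch
        import torch.nn as nn

        def extract_pitches(path):
            """Flatten a MIDI file into a pitch sequence ordered by onset time.
            (Duration and timing features are omitted for brevity.)"""
            midi = pretty_midi.PrettyMIDI(path)
            notes = [n for inst in midi.instruments if not inst.is_drum
                       for n in inst.notes]
            return [n.pitch for n in sorted(notes, key=lambda n: n.start)]

        class MelodyLSTM(nn.Module):
            """Next-note predictor; layer sizes are illustrative assumptions."""
            def __init__(self, vocab=128, embed=64, hidden=256):
                super().__init__()
                self.embed = nn.Embedding(vocab, embed)
                self.lstm = nn.LSTM(embed, hidden, batch_first=True)
                self.out = nn.Linear(hidden, vocab)

            def forward(self, x):              # x: (batch, time) pitch indices
                h, _ = self.lstm(self.embed(x))
                return self.out(h)             # logits over the next pitch

    Training would minimise cross-entropy between the logits at step t and the pitch at step t+1; sampling from the trained model then yields new melodies.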

    Comparison Of Adversarial And Non-Adversarial LSTM Music Generative Models

    Algorithmic music composition is a way of composing musical pieces with minimal to no human intervention. While recurrent neural networks are traditionally applied to many sequence-to-sequence prediction tasks, including successful implementations of music composition, their standard supervised learning approach based on input-to-output mapping leads to a lack of note variety. These models can therefore be seen as potentially unsuitable for tasks such as music generation. Generative adversarial networks learn the generative distribution of the data and lead to varied samples. This work implements and compares adversarial and non-adversarial training of recurrent neural network music composers on MIDI data. The resulting music samples are evaluated by human listeners and their preferences recorded. The evaluation indicates that adversarial training produces more aesthetically pleasing music.
    Comment: Submitted to a 2023 conference, 20 pages, 13 figures
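
    The two training regimes being compared can be sketched side by side: the supervised composer minimises next-note cross-entropy, while the adversarial one trains a discriminator on real versus generated sequences. Over discrete notes the generator gradient needs an estimator; the Gumbel-softmax relaxation below is one common option assumed for illustration, not necessarily the paper's method, and all function names are hypothetical.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        def mle_step(generator, batch, opt):
            """Non-adversarial training: maximum likelihood of the next note."""
            logits = generator(batch[:, :-1])              # (B, T-1, vocab)
            loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                                   batch[:, 1:].reshape(-1))
            opt.zero_grad(); loss.backward(); opt.step()
            return loss.item()

        def gan_step(generator, discriminator, real, g_opt, d_opt, vocab=128):
            """Adversarial training: the discriminator maps a one-hot note
            sequence (B, T-1, vocab) to a single real/fake logit (B, 1)."""
            bce = F.binary_cross_entropy_with_logits
            real_1h = F.one_hot(real[:, 1:], vocab).float()
            fake = F.gumbel_softmax(generator(real[:, :-1]), hard=True)  # differentiable samples
            ones, zeros = torch.ones(len(real), 1), torch.zeros(len(real), 1)
            d_loss = (bce(discriminator(real_1h), ones)
                      + bce(discriminator(fake.detach()), zeros))
            d_opt.zero_grad(); d_loss.backward(); d_opt.step()
            g_loss = bce(discriminator(fake), ones)        # generator tries to fool D
            g_opt.zero_grad(); g_loss.backward(); g_opt.step()
            return d_loss.item(), g_loss.item()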

    A Survey of AI Music Generation Tools and Models

    In this work, we provide a comprehensive survey of AI music generation tools, including both research projects and commercialized applications. To conduct our analysis, we classified music generation approaches into three categories: parameter-based, text-based, and visual-based. Our survey highlights the diverse possibilities and functional features of these tools, which cater to a wide range of users, from regular listeners to professional musicians. We observed that each tool has its own set of advantages and limitations. As a result, we have compiled a comprehensive list of these factors that should be considered during the tool selection process. Moreover, our survey offers critical insights into the underlying mechanisms and challenges of AI music generation.