Generation of folk song melodies using Bayes transforms
The paper introduces the "Bayes transform", a mathematical procedure for putting data into a hierarchical representation. Applicable to any type of data, the procedure yields particularly interesting results when applied to sequences: the representation obtained implicitly models the repetition hierarchy of the source. This leads to natural applications in music. Deriving Bayes transforms offers an empirical, domain-general way of determining the repetition hierarchy of note sequences (melodies). The paper investigates the application of this approach to folk song, examining the results that can be obtained by treating such transforms as generative models.
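The abstract does not spell out the transform itself, but the general idea of recovering repeated structure from a note sequence can be illustrated with a plain repeated-n-gram scan. This sketch is not the paper's Bayes transform, and the toy melody is invented for illustration:

```python
from collections import Counter

def repeated_ngrams(notes, n):
    """Count every length-n subsequence of a note list and keep the repeats."""
    grams = Counter(tuple(notes[i:i + n]) for i in range(len(notes) - n + 1))
    return {gram: count for gram, count in grams.items() if count > 1}

# Toy melody (MIDI pitches) containing an obvious repeated motif 60-62-64.
melody = [60, 62, 64, 65, 60, 62, 64, 67]
print(repeated_ngrams(melody, 3))  # → {(60, 62, 64): 2}
```

Nesting such repeats by length would give a crude repetition hierarchy; the paper's contribution is a principled, Bayesian way of obtaining one.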
VGM-RNN: Recurrent Neural Networks for Video Game Music Generation
The recent explosion of interest in deep neural networks has affected, and in some cases reinvigorated, work in fields as diverse as natural language processing, image recognition, and speech recognition. For sequence learning tasks, recurrent neural networks, and in particular LSTM-based networks, have shown promising results. Recently there has been interest (for example, in the research by Google's Magenta team) in applying so-called "language modeling" recurrent neural networks to musical tasks, including the automatic generation of original music. In this work we present our own LSTM-based music language model. We show that it is able to learn musical features from a MIDI dataset and to generate output that is musically interesting, exhibiting features of melody, harmony, and rhythm. We source our dataset from VGMusic.com, a collection of user-submitted MIDI transcriptions of video game songs, and attempt to generate output that emulates this kind of music.
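The abstract above rests on the LSTM recurrence that such "language modeling" networks share. As a reminder of that mechanism, here is a single LSTM cell step in NumPy; the vocabulary size, hidden size, and random weights are illustrative, not the VGM-RNN configuration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step: four gates from input x and previous hidden state h."""
    H = h.shape[0]
    z = W @ x + U @ h + b          # (4H,) stacked gate pre-activations
    i = sigmoid(z[:H])             # input gate
    f = sigmoid(z[H:2 * H])        # forget gate
    o = sigmoid(z[2 * H:3 * H])    # output gate
    g = np.tanh(z[3 * H:])         # candidate cell update
    c_new = f * c + i * g          # cell state carries long-range context
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
VOCAB, H = 16, 8                   # toy event vocabulary and hidden size
W = rng.normal(scale=0.1, size=(4 * H, VOCAB))
U = rng.normal(scale=0.1, size=(4 * H, H))
b = np.zeros(4 * H)

h, c = np.zeros(H), np.zeros(H)
for event in [3, 7, 7, 2]:         # a toy sequence of MIDI-event token ids
    x = np.eye(VOCAB)[event]       # one-hot embedding of the current event
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)                     # → (8,)
```

In a real music language model, `h` would feed a softmax over the event vocabulary to predict (or sample) the next note event.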
Piano Genie
We present Piano Genie, an intelligent controller which allows non-musicians to improvise on the piano. With Piano Genie, a user performs on a simple interface with eight buttons, and their performance is decoded into the space of plausible piano music in real time. To learn a suitable mapping procedure for this problem, we train recurrent neural network autoencoders with discrete bottlenecks: an encoder learns an appropriate sequence of buttons corresponding to a piano piece, and a decoder learns to map this sequence back to the original piece. During performance, we substitute a user's input for the encoder output, and play the decoder's prediction each time the user presses a button. To improve the intuitiveness of Piano Genie's performance behavior, we impose musically meaningful constraints over the encoder's outputs.

Comment: Published as a conference paper at ACM IUI 201
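The key idea in the abstract above is the discrete bottleneck: a continuous encoder output is forced onto one of eight buttons before the decoder sees it. A minimal sketch of that quantization step, with a hypothetical pitch-squashing encoder and round-trip decoder (not Piano Genie's trained networks):

```python
import numpy as np

NUM_BUTTONS = 8
CENTERS = np.linspace(-1.0, 1.0, NUM_BUTTONS)  # one center per button

def encode(pitch, lo=21, hi=108):
    """Hypothetical encoder: squash a MIDI pitch into [-1, 1]."""
    return 2.0 * (pitch - lo) / (hi - lo) - 1.0

def to_button(value):
    """Discrete bottleneck: snap the encoder output to the nearest button."""
    return int(np.argmin(np.abs(CENTERS - value)))

def decode(button, lo=21, hi=108):
    """Hypothetical decoder: map a button center back to a pitch."""
    return int(round((CENTERS[button] + 1.0) / 2.0 * (hi - lo) + lo))

# Nearby pitches collapse onto the same button, which is what makes an
# 8-button interface playable: the decoder, not the user, resolves detail.
buttons = [to_button(encode(p)) for p in [60, 64, 67, 72]]
print(buttons)  # → [3, 3, 4, 4]
```

In the real system the decoder is an RNN conditioned on context, so one button can map to different notes over time; here the round trip only illustrates the information loss the bottleneck imposes.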
A Planning-based Approach for Music Composition
Automatic music composition is a fascinating field within computational creativity. While different Artificial Intelligence techniques have been used for tackling this task, Planning – an approach for solving complex combinatorial problems which can count on a large number of high-performance systems and an expressive language for describing problems – has never been exploited. In this paper, we propose two different techniques that rely on automated planning for generating musical structures. The structures are then filled from the bottom with "raw" musical materials and turned into melodies. Music experts evaluated the creative output of the system, acknowledging an overall human-enjoyable trait of the melodies produced, which showed a solid hierarchical structure and a strong musical directionality. The techniques proposed not only have high relevance for the musical domain, but also suggest unexplored ways of using planning for dealing with non-deterministic creative domains.
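The abstract above casts composition as search toward a goal state under constraints, which is the essence of planning. The paper's actual planner and domain model are not given here; as a toy stand-in, a depth-first search can "plan" a melody that starts and ends on the tonic while moving by small steps (the constraints are invented for illustration):

```python
def plan_melody(length, start=60, goal=60, max_step=2):
    """Depth-first search for a pitch sequence from start to goal,
    moving at most max_step semitones per move (toy stand-in for a planner)."""
    def search(path):
        if len(path) == length:
            return path if path[-1] == goal else None
        for step in (-max_step, -1, 1, max_step):
            nxt = path[-1] + step
            remaining = length - len(path) - 1
            # Prune branches that can no longer reach the goal in time.
            if abs(nxt - goal) <= remaining * max_step:
                result = search(path + [nxt])
                if result:
                    return result
        return None
    return search([start])

melody = plan_melody(5)
print(melody)  # → [60, 58, 56, 58, 60]
```

A real planning formulation would express such goal and step constraints declaratively (e.g. in a PDDL-style language) and hand them to an off-the-shelf planner, rather than hand-coding the search.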
