Rhythm, Chord and Melody Generation for Lead Sheets using Recurrent Neural Networks
Music that is generated by recurrent neural networks often lacks a sense of
direction and coherence. We therefore propose a two-stage LSTM-based model for
lead sheet generation, in which the harmonic and rhythmic templates of the song
are produced first, after which, in a second stage, a sequence of melody notes
is generated conditioned on these templates. A subjective listening test shows
that our approach outperforms the baselines and increases perceived musical
coherence.
Comment: 8 pages, 2 figures, 3 tables, 2 appendices
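The two-stage pipeline described in this abstract can be sketched as follows. This is a minimal illustration of the conditioning structure only: the function names and chord vocabulary are invented for the example, and simple seeded random choices stand in for the paper's LSTM models.

```python
import random

# Stage 1: generate the song's harmonic and rhythmic template.
# The paper uses an LSTM here; a seeded random walk over a small
# chord vocabulary is a placeholder for that model.
CHORDS = ["C", "F", "G", "Am"]
DURATIONS = [1, 2, 4]  # beats spent on each chord

def generate_template(n_bars, seed=0):
    rng = random.Random(seed)
    return [(rng.choice(CHORDS), rng.choice(DURATIONS))
            for _ in range(n_bars)]

# Stage 2: generate melody notes conditioned on the template.
# Each chord constrains which pitches the melody may draw from,
# which captures the idea of conditioning the second stage on
# the output of the first.
CHORD_TONES = {
    "C":  [60, 64, 67],  # C E G (MIDI note numbers)
    "F":  [60, 65, 69],  # C F A
    "G":  [62, 67, 71],  # D G B
    "Am": [60, 64, 69],  # C E A
}

def generate_melody(template, seed=0):
    rng = random.Random(seed)
    melody = []
    for chord, duration in template:
        # one note per beat, drawn from the current chord's tones
        for _ in range(duration):
            melody.append(rng.choice(CHORD_TONES[chord]))
    return melody

if __name__ == "__main__":
    template = generate_template(4, seed=42)
    melody = generate_melody(template, seed=42)
    print(template)
    print(melody)
```

The point of the sketch is the data flow: the melody generator never chooses freely but always within the harmonic frame fixed in stage one, which is what the paper argues gives the output its sense of direction.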
A Survey of Music Generation in the Context of Interaction
In recent years, machine learning, and in particular generative adversarial
neural networks (GANs) and attention-based neural networks (transformers), have
been successfully used to compose and generate music, both melodies and
polyphonic pieces. Current research focuses foremost on style replication
(e.g., generating a Bach-style chorale) or style transfer (e.g., classical to
jazz) based on large amounts of recorded or transcribed music, which in turn
also allows for fairly straightforward "performance" evaluation. However, most
of these models are not suitable for human-machine co-creation through live
interaction, nor is it clear how such models and the resulting creations would be
evaluated. This article presents a thorough review of music representation,
feature analysis, heuristic algorithms, statistical and parametric modelling,
and human and automatic evaluation measures, along with a discussion of which
approaches and models seem most suitable for live interaction.