Coupled Recurrent Models for Polyphonic Music Composition
This paper introduces a novel recurrent model for music composition that is
tailored to the structure of polyphonic music. We propose an efficient new
conditional probabilistic factorization of musical scores, viewing a score as a
collection of concurrent, coupled sequences: i.e. voices. To model the
conditional distributions, we borrow ideas from both convolutional and
recurrent neural models; we argue that these ideas are natural for capturing
music's pitch invariances, temporal structure, and polyphony. We train models
for single-voice and multi-voice composition on 2,300 scores from the
KernScores dataset.

Comment: 13 pages; long version of the paper appearing in ISMIR 201
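The abstract describes factorizing a score's probability over concurrent, coupled voice sequences. A minimal sketch of such an autoregressive factorization is below; the uniform conditional, the MIDI-style pitch range, and all function names are assumptions for illustration, standing in for the paper's convolutional/recurrent conditional models.

```python
import math

PITCH_CLASSES = 128  # MIDI-style pitch range (assumption)

def conditional_log_prob(pitch, context):
    """Placeholder conditional p(pitch | context): uniform over pitches.

    In the paper this would be a learned convolutional/recurrent model;
    here it is uniform so the sketch stays self-contained.
    """
    return -math.log(PITCH_CLASSES)

def score_log_likelihood(voices):
    """log p(score) as a sum of per-note conditional log-probabilities.

    The score is a list of voices (equal-length lists of pitches). Each
    note is conditioned on all notes at earlier time steps, plus notes of
    lower-indexed voices at the current step — one possible ordering for
    a conditional factorization over coupled sequences.
    """
    num_steps = len(voices[0])
    total = 0.0
    for t in range(num_steps):
        for v, voice in enumerate(voices):
            # Context: every note before time t, then voices < v at time t.
            context = [(u, s, voices[u][s])
                       for u in range(len(voices)) for s in range(t)]
            context += [(u, t, voices[u][t]) for u in range(v)]
            total += conditional_log_prob(voice[t], context)
    return total

# Toy example: two coupled voices, four time steps each.
soprano = [60, 62, 64, 65]
bass = [48, 47, 45, 43]
ll = score_log_likelihood([soprano, bass])
```

With the uniform placeholder, the log-likelihood is simply the number of notes times `-log(128)`; swapping in a learned conditional is what gives the factorization its modeling power.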