We demonstrate two generative models created by training
a recurrent neural network (RNN) with three hidden
layers of long short-term memory (LSTM) units. This extends
past work in several directions, including training
deeper models on nearly 24,000 high-level transcriptions
of folk tunes. We discuss our ongoing work
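The architecture named above, three stacked hidden layers of LSTM units feeding a softmax over a symbolic transcription vocabulary, can be sketched as a single forward step in NumPy. All sizes below (vocabulary, hidden width) are illustrative placeholders, not the paper's settings, and the weights are random rather than trained:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    """One LSTM time step; W maps [x; h] to the four stacked gates."""
    z = W @ np.concatenate([x, h]) + b
    n = h.size
    i, f, g, o = z[:n], z[n:2*n], z[2*n:3*n], z[3*n:]
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c_new = f * c + i * np.tanh(g)      # update cell state
    h_new = o * np.tanh(c_new)          # expose gated hidden state
    return h_new, c_new

rng = np.random.default_rng(0)
vocab, hidden, layers = 50, 32, 3       # hypothetical sizes

# Random (untrained) parameters for three stacked layers + output projection.
params = []
in_dim = vocab
for _ in range(layers):
    W = rng.normal(0, 0.1, (4 * hidden, in_dim + hidden))
    b = np.zeros(4 * hidden)
    params.append((W, b))
    in_dim = hidden
W_out = rng.normal(0, 0.1, (vocab, hidden))

# Push a one-hot token through the 3-layer stack.
x = np.zeros(vocab); x[7] = 1.0
hs = [np.zeros(hidden) for _ in range(layers)]
cs = [np.zeros(hidden) for _ in range(layers)]
inp = x
for l, (W, b) in enumerate(params):
    hs[l], cs[l] = lstm_step(inp, hs[l], cs[l], W, b)
    inp = hs[l]                         # each layer feeds the next

# Softmax over the vocabulary gives the next-token distribution;
# sampling from it is what makes the model generative.
logits = W_out @ inp
probs = np.exp(logits - logits.max())
probs /= probs.sum()
next_token = int(rng.choice(vocab, p=probs))
```

Generation then repeats this step, feeding each sampled token back in as the next input.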