Comparing Probabilistic Models for Melodic Sequences
Modelling the real world complexity of music is a challenge for machine
learning. We address the task of modeling melodic sequences from the same music
genre. We perform a comparative analysis of two probabilistic models: a
Dirichlet Variable Length Markov Model (Dirichlet-VMM) and a Time Convolutional
Restricted Boltzmann Machine (TC-RBM). We show that the TC-RBM learns
descriptive music features, such as underlying chords and typical melody
transitions and dynamics. We assess the models for future prediction and
compare their performance to a VMM, which is the current state of the art in
melody generation. We show that both models perform significantly better than
the VMM, with the Dirichlet-VMM marginally outperforming the TC-RBM. Finally,
we evaluate the short-order statistics of the models, using the
Kullback-Leibler divergence between test sequences and model samples, and show
that our proposed methods match the statistics of the music genre significantly
better than the VMM.
Comment: in Proceedings of the ECML-PKDD 2011. Lecture Notes in Computer
Science, vol. 6913, pp. 289-304. Springer (2011)
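The KL-based evaluation described above can be illustrated with a minimal sketch: comparing the empirical n-gram (here, bigram) distributions of a held-out melody and a model sample. The melodies, the smoothing constant, and the helper names are illustrative assumptions, not the paper's code.

```python
from collections import Counter
import math

def ngram_dist(seq, n):
    """Empirical distribution over length-n subsequences of a melody."""
    grams = Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))
    total = sum(grams.values())
    return {g: c / total for g, c in grams.items()}

def kl_divergence(p, q, eps=1e-8):
    """KL(p || q), smoothing n-grams unseen under q with a small eps."""
    return sum(pv * math.log(pv / q.get(g, eps)) for g, pv in p.items())

# Toy pitch sequences (MIDI note numbers); stand-ins for test data and samples.
test_melody  = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60]
model_sample = [60, 62, 64, 65, 64, 62, 60, 62, 64, 62, 60]

p = ngram_dist(test_melody, 2)
q = ngram_dist(model_sample, 2)
print(kl_divergence(p, q))  # 0.0 here: the two melodies share identical bigram statistics
```

A lower divergence means the model's samples reproduce the genre's local transition statistics more faithfully, which is the sense in which the abstract's comparison is made.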
MIDI-VAE: Modeling Dynamics and Instrumentation of Music with Applications to Style Transfer
We introduce MIDI-VAE, a neural network model based on Variational
Autoencoders that is capable of handling polyphonic music with multiple
instrument tracks, as well as modeling the dynamics of music by incorporating
note durations and velocities. We show that MIDI-VAE can perform style transfer
on symbolic music by automatically changing pitches, dynamics and instruments
of a music piece from, e.g., a Classical to a Jazz style. We evaluate the
efficacy of the style transfer by training separate style validation
classifiers. Our model can also interpolate between short pieces of music,
produce medleys and create mixtures of entire songs. The interpolations
smoothly change pitches, dynamics and instrumentation to create a harmonic
bridge between two music pieces. To the best of our knowledge, this work
represents the first successful attempt at applying neural style transfer to
complete musical compositions.
Comment: Paper accepted at the 19th International Society for Music
Information Retrieval Conference, ISMIR 2018, Paris, France
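The smooth interpolations the abstract describes rely on a standard VAE mechanism: moving along a straight line between two latent codes and decoding each intermediate point. A minimal sketch of that step follows; the latent vectors and step count are made up, and this is not MIDI-VAE's implementation.

```python
def interpolate_latents(z_a, z_b, steps):
    """Linearly interpolate between two latent codes. Decoding each
    intermediate code yields a gradual transition between the pieces."""
    return [
        [(1 - t) * a + t * b for a, b in zip(z_a, z_b)]
        for t in (i / (steps - 1) for i in range(steps))
    ]

z_a = [0.5, -1.2, 0.3]   # latent code of piece A (illustrative values)
z_b = [-0.4, 0.8, 1.1]   # latent code of piece B (illustrative values)
path = interpolate_latents(z_a, z_b, 5)
```

In a trained VAE each vector in `path` would be passed through the decoder, producing the "harmonic bridge" between the endpoint pieces.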
A fuzzy rule model for high level musical features on automated composition systems
Algorithmic composition systems are now well understood. However, when they are used for specific tasks, such as creating material for a section of a piece, it is common to prefer, among all possible outputs, those exhibiting specific properties. Even though the number of valid outputs is huge, the selection is often performed manually, either by drawing on expertise in the algorithmic model, by means of sampling techniques, or sometimes even by chance. This process has traditionally been automated with machine learning techniques. However, whether these techniques can truly capture the human rationale behind the selection remains an open question. The present work discusses a possible approach, combining expert opinion with a fuzzy rule-extraction methodology, to model high-level features. We discuss an early implementation able to explore the universe of outputs of a particular algorithm by means of the extracted rules. The rules search for objects similar to those having a desired, pre-identified feature. In this sense, the model can be seen as a finder of objects with specific properties.
Peer Reviewed. Postprint (author's final draft)
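The rule-based selection idea can be sketched with a toy fuzzy rule that scores candidate phrases. The features (note density, stepwise motion), the membership thresholds, and the rule itself are invented for illustration; they are not the rules the paper extracts.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 at a and c, peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def rule_score(phrase):
    """Toy fuzzy rule: 'dense AND mostly stepwise' phrases are preferred."""
    notes_per_beat = len(phrase) / 4.0              # assume a 4-beat phrase
    steps = sum(1 for i in range(len(phrase) - 1)
                if abs(phrase[i + 1] - phrase[i]) <= 2)
    stepwise_ratio = steps / max(len(phrase) - 1, 1)
    dense = triangular(notes_per_beat, 1.0, 3.0, 5.0)
    stepwise = triangular(stepwise_ratio, 0.3, 1.0, 1.7)
    return min(dense, stepwise)                     # fuzzy AND as the minimum

# Two candidate outputs of a hypothetical composition algorithm.
candidates = [[60, 62, 64, 65, 67, 65, 64, 62], [60, 72, 55, 70, 48, 75]]
best = max(candidates, key=rule_score)
```

Ranking the algorithm's outputs by such a score is one way to automate the "finder of objects with specific properties" role the abstract describes.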
Interactive Music Generation with Positional Constraints using Anticipation-RNNs
Recurrent Neural Networks (RNNs) are now widely used for sequence generation
tasks due to their ability to learn long-range dependencies and to generate
sequences of arbitrary length. However, their left-to-right generation
procedure allows only limited control by the user, which makes them
unsuitable for interactive and creative usages such as interactive music
generation. This paper introduces a novel architecture called Anticipation-RNN
which retains the assets of RNN-based generative models while enforcing
user-defined positional constraints. We demonstrate its efficiency on
the task of generating melodies satisfying positional constraints in the style
of the soprano parts of the J.S. Bach chorale harmonizations. Sampling using
the Anticipation-RNN is of the same order of complexity as sampling from the
traditional RNN model. This fast and interactive generation of musical
sequences opens ways to devise real-time systems that could be used for
creative purposes.
Comment: 9 pages, 7 figures
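The notion of positional constraints can be sketched with a naive left-to-right sampler in which user-pinned positions override the model's prediction. Note that the paper's actual contribution is to condition the RNN on upcoming constraints so that earlier tokens anticipate them; this sketch only shows the constraint-enforcement step, with an invented stand-in for the model.

```python
import random

def sample_with_constraints(step_dist, length, constraints, vocab):
    """Left-to-right sampling where `constraints` (position -> required token)
    overrides the model. `step_dist` maps a prefix to per-token weights and is
    a placeholder for a trained model, not the Anticipation-RNN itself."""
    seq = []
    for t in range(length):
        if t in constraints:
            seq.append(constraints[t])      # enforce the positional constraint
        else:
            weights = step_dist(seq)
            seq.append(random.choices(vocab, weights=weights)[0])
    return seq

vocab = [60, 62, 64, 65, 67]                 # toy pitch alphabet
uniform = lambda prefix: [1.0] * len(vocab)  # placeholder "model"
melody = sample_with_constraints(uniform, 8, {0: 60, 7: 67}, vocab)
```

Because this naive version ignores the constraints until it reaches them, the intermediate notes cannot prepare for the pinned ending, which is precisely the limitation the Anticipation-RNN is designed to address.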