VGM-RNN: Recurrent Neural Networks for Video Game Music Generation
The recent explosion of interest in deep neural networks has affected, and in some cases reinvigorated, work in fields as diverse as natural language processing, image recognition, and speech recognition. For sequence learning tasks, recurrent neural networks, and in particular LSTM-based networks, have shown promising results. Recently there has been interest – for example in the research by Google's Magenta team – in applying so-called "language modeling" recurrent neural networks to musical tasks, including the automatic generation of original music. In this work we demonstrate our own LSTM-based music language modeling recurrent network. We show that it is able to learn musical features from a MIDI dataset and generate output that is musically interesting while demonstrating features of melody, harmony and rhythm. We source our dataset from VGMusic.com, a collection of user-submitted MIDI transcriptions of video game songs, and attempt to generate output which emulates this kind of music.
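The "language modeling" view described above treats a MIDI melody the way a text model treats words: each note event becomes a discrete token, and the network learns to predict the next token. The sketch below (not the paper's code; the token scheme and function names are illustrative assumptions) shows one common way to turn (pitch, duration) events into such a token vocabulary.

```python
# Illustrative sketch: tokenizing MIDI-style note events for a music
# "language model". The NOTE_<pitch>_<steps> scheme is an assumption,
# not the representation used by the paper.

def events_to_tokens(events, time_step=120):
    """Quantize (pitch, duration_in_ticks) events into discrete tokens."""
    tokens = []
    for pitch, duration in events:
        steps = max(1, round(duration / time_step))  # duration in time steps
        tokens.append(f"NOTE_{pitch}_{steps}")
    return tokens

def build_vocab(token_seqs):
    """Map each distinct token to an integer id for embedding lookup."""
    vocab = {}
    for seq in token_seqs:
        for tok in seq:
            vocab.setdefault(tok, len(vocab))
    return vocab

# Example: a short C-major fragment (middle C held twice as long).
melody = [(60, 240), (62, 120), (64, 120)]
tokens = events_to_tokens(melody)   # ["NOTE_60_2", "NOTE_62_1", "NOTE_64_1"]
vocab = build_vocab([tokens])
ids = [vocab[t] for t in tokens]    # integer sequence fed to the LSTM
```

An LSTM is then trained on such integer sequences to predict token t+1 from tokens 1..t; sampling from the trained model yields new melodies.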
Melody Generation using an Interactive Evolutionary Algorithm
Music generation with the aid of computers has recently attracted the attention of many scientists in the area of artificial intelligence, and deep learning techniques have advanced sequence-generation methods for this purpose. Yet a challenging problem remains: how to evaluate music generated by a machine. In this paper, a methodology is developed based on an interactive evolutionary optimization method, in which the scoring of generated melodies is performed primarily by human experts during training. This music quality scoring is then modeled with a Bi-LSTM recurrent neural network. Melodies newly generated by a genetic algorithm are subsequently evaluated with this Bi-LSTM network. The results show that the proposed method is able to create pleasing melodies in the desired styles. The method is also fast compared to state-of-the-art data-driven evolutionary systems.
Comment: 5 pages, 4 images, submitted to the MEDPRAI2019 conference
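The pipeline described above couples a genetic algorithm (generation) with a learned scorer (evaluation). The minimal sketch below shows that coupling in miniature; as an assumption for illustration, a simple melodic-smoothness heuristic stands in for the paper's human-trained Bi-LSTM fitness model, and all function names are hypothetical.

```python
# Illustrative sketch of a genetic algorithm evolving melodies, with a
# placeholder fitness function standing in for the paper's Bi-LSTM scorer.
import random

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C-major pitches, one octave (MIDI)

def random_melody(length=16):
    return [random.choice(SCALE) for _ in range(length)]

def fitness(melody):
    # Placeholder for the learned Bi-LSTM quality score: here we simply
    # reward small melodic intervals (smoothness). Purely illustrative.
    return -sum(abs(a - b) for a, b in zip(melody, melody[1:]))

def crossover(a, b):
    cut = random.randrange(1, len(a))       # single-point crossover
    return a[:cut] + b[cut:]

def mutate(melody, rate=0.1):
    return [random.choice(SCALE) if random.random() < rate else p
            for p in melody]

def evolve(pop_size=30, generations=40):
    pop = [random_melody() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]        # elitism: keep the best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            pa, pb = random.sample(elite, 2)
            children.append(mutate(crossover(pa, pb)))
        pop = elite + children
    return max(pop, key=fitness)

random.seed(0)
best = evolve()  # the fittest melody under the placeholder scorer
```

In the paper's setup, `fitness` would instead query the Bi-LSTM trained on human expert ratings, so the evolutionary search is steered toward melodies humans found pleasing.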
16th Sound and Music Computing Conference SMC 2019 (28–31 May 2019, Malaga, Spain)
The 16th Sound and Music Computing Conference (SMC 2019) took place in Malaga, Spain, 28-31 May 2019, and was organized by the Application of Information and Communication Technologies Research group (ATIC) of the University of Malaga (UMA). The SMC 2019 associated Summer School took place 25-28 May 2019. The First International Day of Women in Inclusive Engineering, Sound and Music Computing Research (WiSMC 2019) took place on 28 May 2019. The SMC 2019 topics of interest included a wide selection of topics related to acoustics, psychoacoustics, music, technology for music, audio analysis, musicology, sonification, music games, machine learning, serious games, immersive audio, sound synthesis, etc.