
    Continuous Melody Generation via Disentangled Short-Term Representations and Structural Conditions

    Automatic music generation is an interdisciplinary research topic that combines computational creativity with semantic analysis of music to create automatic machine improvisations. An important property of such a system is allowing the user to specify conditions and desired properties of the generated music. In this paper we design a model for composing melodies given a user-specified symbolic scenario combined with a preceding musical context. We add manually labeled vectors denoting external musical qualities in terms of chord function, which provide a low-dimensional representation of harmonic tension and resolution. Our model generates long melodies by treating 8-beat note sequences as basic units, and shares a consistent rhythm-pattern structure with another specified song. The model contains two separately trained stages: the first adopts a Conditional Variational Autoencoder (C-VAE) to build a bijection between note sequences and their latent representations, and the second adopts long short-term memory networks (LSTMs) with structural conditions to continue writing future melodies. We further exploit disentanglement via the C-VAE to allow melody generation conditioned on pitch-contour information separately from rhythm patterns. Finally, we evaluate the proposed model using quantitative analysis of rhythm and a subjective listening study. Results show that the music generated by our model tends to have salient repetition structures, rich motives, and stable rhythm patterns. The ability to generate longer, more structured phrases from disentangled representations, combined with semantic scenario specification, shows the broad applicability of our model.

    Comment: 9 pages, 12 figures, 4 tables. In the 14th International Conference on Semantic Computing, ICSC 202
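    The two-stage pipeline described in the abstract can be sketched as follows. This is a minimal illustrative stand-in, not the paper's implementation: fixed random linear maps replace the learned C-VAE encoder/decoder, a single tanh step replaces the conditional LSTM, and all dimensions (8-beat units of 16 slots, a 16-dim latent space, a 4-dim chord-function vector) are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical dimensions (not from the paper): 8-beat units flattened to
    # 16 slots, a 16-dim latent space, a 4-dim chord-function condition.
    UNIT_LEN, LATENT_DIM, COND_DIM = 16, 16, 4

    # Stage 1 stand-in for the C-VAE: a linear encoder and its pseudo-inverse
    # decoder, mimicking the bijection between note sequences and latents.
    W_enc = rng.standard_normal((UNIT_LEN, LATENT_DIM)) * 0.1
    W_dec = np.linalg.pinv(W_enc)  # makes decode(encode(x)) ≈ x

    def encode(notes):
        return notes @ W_enc

    def decode(z):
        return z @ W_dec

    # Stage 2 stand-in for the conditional LSTM: predict the next unit's
    # latent from the previous latent plus the chord-function condition.
    W_rec = rng.standard_normal((LATENT_DIM + COND_DIM, LATENT_DIM)) * 0.1

    def next_latent(z, cond):
        return np.tanh(np.concatenate([z, cond]) @ W_rec)

    def continue_melody(seed_notes, conds):
        """Generate one 8-beat unit per chord-function condition, autoregressively."""
        z = encode(seed_notes)
        units = []
        for cond in conds:
            z = next_latent(z, cond)
            units.append(decode(z))
        return units

    seed = rng.standard_normal(UNIT_LEN)  # the previous music context
    conds = [np.eye(COND_DIM)[i % COND_DIM] for i in range(3)]  # 3 chord functions
    out = continue_melody(seed, conds)
    print(len(out), out[0].shape)  # 3 generated units, each of length UNIT_LEN
    ```

    In the real model, Stage 1's bijection is what lets Stage 2 operate entirely in latent space; the sketch preserves that separation even though both maps here are linear toys.
    
    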

    NBP 2.0: Updated Next Bar Predictor, an Improved Algorithmic Music Generator

    Deep neural network advancements have enabled machines to produce melodies emulating human-composed music. However, implementing such machines is costly in terms of resources. In this paper, we present NBP 2.0, a refinement of the previous next bar predictor (NBP) model with two notable improvements: first, transforming each training instance to anchor all of its notes to its musical scale, and second, changing the model architecture itself. NBP 2.0 maintains its straightforward and lightweight implementation, which is an advantage over the baseline models. The improvements were assessed using quantitative and qualitative metrics and, based on the results, the gains from these changes are notable.

    Using Incongruous Genres to Explore Music Making with AI Generated Content

    Deep learning generative AI models trained on huge datasets are capable of producing complex, high-quality music. However, there are few studies of how AI Generated Content (AIGC) is actually used or appropriated in creative practice. We present two first-person accounts by musician-researchers of their explorations of an interactive generative AI system trained on Irish Folk music. The AI is intentionally used by musicians from the incongruous genres of Punk and Glitch to explore how the model is appropriated into creative practice and how it changes that practice when used outside of its intended genre. Reflections on the first-person accounts highlight issues of control, ambiguity, trust, and filtering AIGC. The accounts also highlight the role of AI as an audience and critic, and how the musicians' practice changed in response to the AIGC. We suggest that our incongruous approach may help to foreground the creative work and frictions in human-AI creative practice.