
    Improving Automatic Jazz Melody Generation by Transfer Learning Techniques

    In this paper, we tackle the problem of transfer learning for automatic Jazz generation. Jazz is one of the representative genres of music, but the scarcity of Jazz data in MIDI format hinders the construction of a generative model for it. Transfer learning is an approach to the problem of data insufficiency: it transfers features common to one domain over to another. In view of its success in other machine learning problems, we investigate whether, and how much, it can help improve automatic music generation for under-resourced musical genres. Specifically, we use a recurrent variational autoencoder as the generative model, a genre-unspecified dataset as the source dataset, and a Jazz-only dataset as the target dataset. Two transfer learning methods are evaluated using six levels of source-to-target data ratios. The first method is to train the model on the source dataset and then fine-tune the resulting model parameters on the target dataset. The second method is to train the model on both the source and target datasets at the same time, but to add genre labels to the latent vectors and use a genre classifier to improve Jazz generation. The evaluation results show that the second method seems to perform better overall, but it cannot take full advantage of the genre-unspecified dataset.
    Comment: 8 pages. Accepted to APSIPA ASC (Asia-Pacific Signal and Information Processing Association Annual Summit and Conference) 201
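    As a rough illustration of the first transfer method described above (pretrain on the genre-unspecified source set, then fine-tune on the Jazz-only target set), the sketch below implements the scheme for a hypothetical recurrent VAE in PyTorch. MelodyRVAE, source_loader, and target_loader are placeholders of our own, not the authors' code, and the loss is a standard VAE objective with a beta-weighted KL term.

```python
# Rough sketch of the paper's first transfer scheme: pretrain a recurrent
# VAE on a genre-unspecified source set, then fine-tune the same weights
# on a small Jazz-only target set. MelodyRVAE, source_loader, and
# target_loader are placeholders, not the authors' code.
import torch
import torch.nn.functional as F

def vae_loss(recon_logits, x, mu, logvar, beta=1.0):
    # Token-level reconstruction term plus beta-weighted KL divergence.
    recon = F.cross_entropy(recon_logits.transpose(1, 2), x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

def train(model, loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x in loader:  # x: (batch, time) tensor of note-token ids
            recon_logits, mu, logvar = model(x)
            loss = vae_loss(recon_logits, x, mu, logvar)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

model = MelodyRVAE()  # hypothetical recurrent VAE encoder/decoder
model = train(model, source_loader, epochs=50, lr=1e-3)  # pretraining
model = train(model, target_loader, epochs=20, lr=1e-4)  # Jazz fine-tuning
```

    A lower learning rate in the fine-tuning stage is a common heuristic to avoid erasing the pretrained features; the epoch counts and rates here are arbitrary.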

    Deep Learning Techniques for Music Generation -- A Survey

    This paper is a survey and an analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content. We propose a methodology for our analysis based on five dimensions:
    - Objective: What musical content is to be generated (e.g., melody, polyphony, accompaniment, or counterpoint)? For what destination and what use: to be performed by humans (in the case of a musical score) or by a machine (in the case of an audio file)?
    - Representation: What are the concepts to be manipulated (e.g., waveform, spectrogram, note, chord, meter, and beat)? What format is to be used (e.g., MIDI, piano roll, or text)? How will the representation be encoded (e.g., scalar, one-hot, or many-hot)?
    - Architecture: What type(s) of deep neural network are to be used (e.g., feedforward network, recurrent network, autoencoder, or generative adversarial network)?
    - Challenge: What are the limitations and open challenges (e.g., variability, interactivity, and creativity)?
    - Strategy: How do we model and control the process of generation (e.g., single-step feedforward, iterative feedforward, sampling, or input manipulation)?
    For each dimension, we conduct a comparative analysis of various models and techniques, and we propose a tentative multidimensional typology. This typology is bottom-up, based on the analysis of many existing deep-learning-based systems for music generation selected from the relevant literature. These systems are described and used to exemplify the various choices of objective, representation, architecture, challenge, and strategy. The last section includes some discussion and some prospects.
    Comment: 209 pages. This paper is a simplified version of the book: J.-P. Briot, G. Hadjeres and F.-D. Pachet, Deep Learning Techniques for Music Generation, Computational Synthesis and Creative Systems, Springer, 201
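    To make the representation encodings in the list above concrete: in a one-hot encoding exactly one element of a vector is active per time step (a single melody pitch), while a many-hot encoding activates several at once (simultaneous pitches in a piano roll). The following minimal sketch is our own example, assuming the standard 128-pitch MIDI range, and is not taken from the survey.

```python
# Minimal illustration of the survey's encoding vocabulary (our own
# example, assuming the standard 128-pitch MIDI range): a monophonic
# melody step is one-hot, a polyphonic piano-roll step is many-hot.
import numpy as np

NUM_PITCHES = 128  # MIDI pitches 0-127

def one_hot(pitch):
    """Encode a single MIDI pitch as a one-hot vector."""
    v = np.zeros(NUM_PITCHES, dtype=np.float32)
    v[pitch] = 1.0
    return v

def many_hot(chord):
    """Encode a set of simultaneous MIDI pitches as a many-hot vector."""
    v = np.zeros(NUM_PITCHES, dtype=np.float32)
    v[list(chord)] = 1.0
    return v

melody_step = one_hot(60)             # middle C
chord_step = many_hot({60, 64, 67})   # C major triad
print(melody_step.sum(), chord_step.sum())  # 1.0 3.0
```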

    Toward Interactive Music Generation: A Position Paper

    Music generation using deep learning has received considerable attention in recent years. Researchers have developed various generative models capable of imitating musical conventions, comprehending musical corpora, and generating new samples based on the learning outcome. Although the samples generated by these models are persuasive, they often lack musical structure and creativity. For instance, a vanilla end-to-end approach, which deals with all levels of music representation at once, does not offer human-level control and interaction during the learning process, leading to constrained results. Indeed, music creation is an iterative process in which a musician follows certain principles, reusing or adapting various musical features. Moreover, a musical piece adheres to a musical style, which breaks down into the distinct concepts of timbre style, performance style, and composition style, together with the coherency between these aspects. Here, we study and analyze the current advances in music generation using deep learning models through different criteria. We discuss the shortcomings and limitations of these models regarding interactivity and adaptability. Finally, we outline potential future research directions, addressing multi-agent systems and reinforcement learning algorithms, to alleviate these shortcomings and limitations.

    μŒμ•…μ  μš”μ†Œμ— λŒ€ν•œ 쑰건뢀 μƒμ„±μ˜ κ°œμ„ μ— κ΄€ν•œ 연ꡬ: ν™”μŒκ³Ό ν‘œν˜„μ„ μ€‘μ‹¬μœΌλ‘œ

    Thesis (Ph.D.) -- Graduate School of Convergence Science and Technology, Seoul National University, Department of Convergence Science (Digital Information Convergence major), February 2023. Advisor: Kyogu Lee.
    Conditional generation of musical components (CGMC) creates a part of music based on partial musical components such as melody or chords. CGMC is beneficial for discovering complex relationships among musical attributes. It can also assist non-experts who face difficulties in making music. However, recent studies of CGMC still face two challenges concerning generation quality and model controllability. First, the structure of the generated music is not robust. Second, only limited ranges of musical factors and tasks have been examined as targets for flexible control of generation. In this thesis, we aim to mitigate these two challenges to improve CGMC systems. For musical structure, we focus on intuitive modeling of musical hierarchy to help the model explicitly learn musically meaningful dependencies. To this end, we utilize alignment paths between the raw music data and musical units such as notes or chords. For musical creativity, we facilitate smooth control of novel musical attributes using latent representations. We attempt to achieve disentangled representations of the intended factors by regularizing them with a data-driven inductive bias, using unsupervised or self-supervised learning frameworks. This thesis verifies the proposed approaches in two representative CGMC tasks, melody harmonization and expressive performance rendering. A variety of experimental results show the potential of the proposed approaches to expand musical creativity while maintaining stable generation quality. (A generic code sketch of the latent-regularization idea follows the contents outline below.)
    Contents:
    Chapter 1 Introduction
        1.1 Motivation
        1.2 Definitions
        1.3 Tasks of Interest
            1.3.1 Generation Quality
            1.3.2 Controllability
        1.4 Approaches
            1.4.1 Modeling Musical Hierarchy
            1.4.2 Regularizing Latent Representations
            1.4.3 Target Tasks
        1.5 Outline of the Thesis
    Chapter 2 Background
        2.1 Music Generation Tasks
            2.1.1 Melody Harmonization
            2.1.2 Expressive Performance Rendering
        2.2 Structure-enhanced Music Generation
            2.2.1 Hierarchical Music Generation
            2.2.2 Transformer-based Music Generation
        2.3 Disentanglement Learning
            2.3.1 Unsupervised Approaches
            2.3.2 Supervised Approaches
            2.3.3 Self-supervised Approaches
        2.4 Controllable Music Generation
            2.4.1 Score Generation
            2.4.2 Performance Rendering
        2.5 Summary
    Chapter 3 Translating Melody to Chord: Structured and Flexible Harmonization of Melody with Transformer
        3.1 Introduction
        3.2 Proposed Methods
            3.2.1 Standard Transformer Model (STHarm)
            3.2.2 Variational Transformer Model (VTHarm)
            3.2.3 Regularized Variational Transformer Model (rVTHarm)
            3.2.4 Training Objectives
        3.3 Experimental Settings
            3.3.1 Datasets
            3.3.2 Comparative Methods
            3.3.3 Training
            3.3.4 Metrics
        3.4 Evaluation
            3.4.1 Chord Coherence and Diversity
            3.4.2 Harmonic Similarity to Human
            3.4.3 Controlling Chord Complexity
            3.4.4 Subjective Evaluation
            3.4.5 Qualitative Results
            3.4.6 Ablation Study
        3.5 Conclusion and Future Work
    Chapter 4 Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-supervised Learning
        4.1 Introduction
        4.2 Proposed Methods
            4.2.1 Data Representation
            4.2.2 Modeling Musical Hierarchy
            4.2.3 Overall Network Architecture
            4.2.4 Regularizing the Latent Variables
            4.2.5 Overall Objective
        4.3 Experimental Settings
            4.3.1 Dataset and Implementation
            4.3.2 Comparative Methods
        4.4 Evaluation
            4.4.1 Generation Quality
            4.4.2 Disentangling Latent Representations
            4.4.3 Controllability of Expressive Attributes
            4.4.4 KL Divergence
            4.4.5 Ablation Study
            4.4.6 Subjective Evaluation
            4.4.7 Qualitative Examples
            4.4.8 Extent of Control
        4.5 Conclusion
    Chapter 5 Conclusion and Future Work
        5.1 Conclusion
        5.2 Future Work
            5.2.1 Deeper Investigation of Controllable Factors
            5.2.2 More Analysis of Qualitative Evaluation Results
            5.2.3 Improving Diversity and Scale of Dataset
    Bibliography
    Abstract (in Korean)
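    The latent-regularization idea in this thesis (binding an intended musical factor, such as chord complexity in rVTHarm, to a dedicated latent dimension so it can be adjusted at generation time) can be sketched with a generic attribute-regularization loss of the kind used in the disentanglement literature. This is an illustrative formulation under our own assumptions, not the thesis's exact objective.

```python
# Illustrative attribute-regularization loss (a generic technique from the
# disentanglement literature, NOT the thesis's exact objective): encourage
# latent dimension `dim` to vary monotonically with a musical attribute
# (e.g., chord complexity), so the attribute can be controlled at
# generation time by moving along that dimension.
import torch
import torch.nn.functional as F

def attribute_reg_loss(z, attr, dim=0, delta=1.0):
    """z: (batch, latent_dim) latent codes; attr: (batch,) attribute values."""
    z_d = z[:, dim]
    # All pairwise differences within the batch, for the chosen latent
    # dimension and for the attribute.
    dz = z_d.unsqueeze(0) - z_d.unsqueeze(1)
    da = attr.unsqueeze(0) - attr.unsqueeze(1)
    # Penalize disagreement between the (relaxed) sign of the latent
    # differences and the sign of the attribute differences.
    return F.l1_loss(torch.tanh(delta * dz), torch.sign(da))

# Toy usage with random tensors standing in for encoder outputs.
z = torch.randn(8, 16)   # batch of latent codes
attr = torch.rand(8)     # e.g., per-sample chord-complexity scores
loss = attribute_reg_loss(z, attr, dim=0)
```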

    A Model for Converting Kepatihan Notation into MIDI Format for Original Western Music Generation

    Original music is a new composition created by modifying musical elements using methods that have not been applied before. This study aims to develop a system for generating original Western music, measured by note-sequence patterns and their distributions acquired from Gamelan music, the traditional music of Java. The Gamelan music sheets used as the data source are converted into MIDI format to serve as input for training an LSTM network on pitch, step, and duration information. A sequence-prediction technique is then used to generate output notes based on the preceding input notes. The generated original Western music takes the form of MIDI files together with a visualization in staff notation. Evaluation of the LSTM network training shows good results, with a loss of 0.1. The similarity of note-sequence patterns and their distributions is evaluated using distribution plots of the pitch, step, and duration samples, and the results show a good degree of similarity.
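    The pipeline described above (notes as pitch/step/duration features, an LSTM trained to predict the next note, and iterative sequence prediction for generation) can be sketched as follows. The layer sizes, context length, and Keras API usage are our assumptions for illustration, not the study's exact configuration.

```python
# Hypothetical sketch of the pipeline described above: an LSTM trained on
# (pitch, step, duration) note features extracted from MIDI, then used
# autoregressively to predict the next note from a window of previous
# notes. Layer sizes and the context length are assumed, not the study's.
import numpy as np
import tensorflow as tf

SEQ_LEN = 32        # context window of previous notes (assumed)
NUM_PITCHES = 128   # MIDI pitch range

inputs = tf.keras.Input(shape=(SEQ_LEN, 3))      # (pitch, step, duration)
x = tf.keras.layers.LSTM(128)(inputs)
outputs = {
    "pitch": tf.keras.layers.Dense(NUM_PITCHES, name="pitch")(x),
    "step": tf.keras.layers.Dense(1, name="step")(x),
    "duration": tf.keras.layers.Dense(1, name="duration")(x),
}
model = tf.keras.Model(inputs, outputs)
model.compile(
    optimizer="adam",
    loss={
        "pitch": tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        "step": "mse",
        "duration": "mse",
    },
)

def predict_next(model, context):
    """Predict the next (pitch, step, duration) from a (SEQ_LEN, 3) window."""
    out = model.predict(context[np.newaxis, ...], verbose=0)
    pitch = int(np.argmax(out["pitch"][0]))
    return pitch, float(out["step"][0, 0]), float(out["duration"][0, 0])
```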

    16th Sound and Music Computing Conference SMC 2019 (28–31 May 2019, Malaga, Spain)

    The 16th Sound and Music Computing Conference (SMC 2019) took place in Malaga, Spain, on 28-31 May 2019, and was organized by the Application of Information and Communication Technologies research group (ATIC) of the University of Malaga (UMA). The associated SMC 2019 Summer School took place on 25-28 May 2019, and the First International Day of Women in Inclusive Engineering, Sound and Music Computing Research (WiSMC 2019) took place on 28 May 2019. The SMC 2019 topics of interest included a wide selection of topics related to acoustics, psychoacoustics, music, technology for music, audio analysis, musicology, sonification, music games, machine learning, serious games, immersive audio, sound synthesis, and more.

    A systematic review of artificial intelligence-based music generation: Scope, applications, and future trends

    Currently available reviews in the area of artificial intelligence-based music generation do not cover a wide range of publications and are usually centered on comparing very specific topics across a very limited range of solutions. The best surveys available in the field are the bibliography sections of some papers and books, which lack a systematic approach and limit their scope to handpicked examples. In this work, we analyze the scope and trends of the research on artificial intelligence-based music generation by performing a systematic review of the available publications in the field using the PRISMA methodology. Furthermore, we discuss the possible implementations and accessibility of a set of currently available AI solutions as aids to musical composition. Our research shows how publications are distributed globally according to many characteristics, which provides a clear picture of the state of this technology. It also makes clear that the interest of both musicians and computer scientists in AI-based automatic music generation has increased significantly in the last few years, with growing participation of major companies in the field, whose work we analyze. We discuss several generation architectures from both a technical and a musical point of view, and we highlight various areas where further research is needed.