33 research outputs found

    Deep Learning Techniques for Music Generation -- A Survey

    This paper is a survey and an analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content. We propose a methodology based on five dimensions for our analysis:
    - Objective: What musical content is to be generated (e.g., melody, polyphony, accompaniment, or counterpoint)? For what destination and use: to be performed by humans (a musical score) or by a machine (an audio file)?
    - Representation: What concepts are to be manipulated (e.g., waveform, spectrogram, note, chord, meter, beat)? What format is to be used (e.g., MIDI, piano roll, or text)? How will the representation be encoded (e.g., scalar, one-hot, or many-hot)?
    - Architecture: What type(s) of deep neural network are to be used (e.g., feedforward network, recurrent network, autoencoder, or generative adversarial network)?
    - Challenge: What are the limitations and open challenges (e.g., variability, interactivity, creativity)?
    - Strategy: How do we model and control the process of generation (e.g., single-step feedforward, iterative feedforward, sampling, or input manipulation)?
    For each dimension, we conduct a comparative analysis of various models and techniques, and we propose a tentative multidimensional typology. This typology is bottom-up, based on the analysis of many existing deep-learning-based systems for music generation selected from the relevant literature. These systems are described and used to exemplify the various choices of objective, representation, architecture, challenge, and strategy. The last section includes some discussion and prospects.
    Comment: 209 pages. This paper is a simplified version of the book: J.-P. Briot, G. Hadjeres and F.-D. Pachet, Deep Learning Techniques for Music Generation, Computational Synthesis and Creative Systems, Springer, 201
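
    To make the survey's Representation dimension concrete, here is a minimal, illustrative sketch (not drawn from the paper itself) of the difference between one-hot and many-hot encodings over the twelve pitch classes; the function names and vocabulary are assumptions for illustration only:

        import numpy as np

        # Illustrative pitch-class vocabulary: C=0, C#=1, ..., B=11.
        NUM_PITCH_CLASSES = 12

        def one_hot(pitch_class: int) -> np.ndarray:
            """One-hot: exactly one active element, e.g. a single melody note."""
            v = np.zeros(NUM_PITCH_CLASSES)
            v[pitch_class] = 1.0
            return v

        def many_hot(pitch_classes: list) -> np.ndarray:
            """Many-hot: several active elements, e.g. the notes of a chord."""
            v = np.zeros(NUM_PITCH_CLASSES)
            v[pitch_classes] = 1.0
            return v

        melody_note = one_hot(0)             # a single C
        c_major_chord = many_hot([0, 4, 7])  # C, E and G sounding together
        print(melody_note)                   # [1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
        print(c_major_chord)                 # [1. 0. 0. 0. 1. 0. 0. 1. 0. 0. 0. 0.]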

    μŒμ•…μ  μš”μ†Œμ— λŒ€ν•œ 쑰건뢀 μƒμ„±μ˜ κ°œμ„ μ— κ΄€ν•œ 연ꡬ: ν™”μŒκ³Ό ν‘œν˜„μ„ μ€‘μ‹¬μœΌλ‘œ

    Thesis (Ph.D.) -- Seoul National University, Graduate School of Convergence Science and Technology, Department of Convergence Science (Digital Information Convergence), February 2023. Advisor: Kyogu Lee.
    Conditional generation of musical components (CGMC) creates a part of music based on partial musical components such as melody or chord. CGMC is beneficial for discovering complex relationships among musical attributes, and it can assist non-experts who face difficulties in making music. However, recent studies on CGMC still face two challenges, in generation quality and in model controllability. First, the structure of the generated music is not robust. Second, only limited ranges of musical factors and tasks have been examined as targets for flexible control of generation. In this thesis, we aim to mitigate these two challenges to improve CGMC systems. For musical structure, we focus on intuitive modeling of musical hierarchy to help the model explicitly learn musically meaningful dependency; to this end, we utilize alignment paths between the raw music data and musical units such as notes or chords. For musical creativity, we facilitate smooth control of novel musical attributes using latent representations, attempting to achieve disentangled representations of the intended factors by regularizing them with data-driven inductive bias. The proposed approaches are verified on two representative CGMC tasks, melody harmonization and expressive performance rendering. A variety of experimental results show the potential of the proposed approaches to expand musical creativity under stable generation quality.
    Abstract (translated from Korean): CGMC, the field of conditional generation of musical components, aims to generate the remaining parts of a piece of music from given parts such as a melody or chords. The field lends itself to exploring the complex relationships among musical components and can help non-experts who struggle to make music. Recent studies have used deep learning to raise the performance of CGMC systems, but they still have two limitations, in generation quality and in controllability: the musical structure of the generated music is not clear, and only a narrow range of musical factors and tasks has been explored as targets of flexible control. This thesis addresses both limitations. First, it focuses on intuitively modeling the musical hierarchy that underlies musical structure, using alignment paths between the raw data and musical units such as notes or chords so that the model explicitly learns musically meaningful dependencies. Second, it uses latent representations to control novel musical factors smoothly; to train the latent representations to be disentangled with respect to the intended factors, it regularizes them within unsupervised or self-supervised learning frameworks. Both approaches are validated on two representative CGMC tasks, melody harmonization and expressive performance rendering, and a variety of experimental results suggest that the proposed methods can expand the musical creativity of CGMC systems while keeping generation quality stable.
    Table of contents:
    Chapter 1 Introduction: 1.1 Motivation; 1.2 Definitions; 1.3 Tasks of Interest (1.3.1 Generation Quality; 1.3.2 Controllability); 1.4 Approaches (1.4.1 Modeling Musical Hierarchy; 1.4.2 Regularizing Latent Representations; 1.4.3 Target Tasks); 1.5 Outline of the Thesis
    Chapter 2 Background: 2.1 Music Generation Tasks (2.1.1 Melody Harmonization; 2.1.2 Expressive Performance Rendering); 2.2 Structure-enhanced Music Generation (2.2.1 Hierarchical Music Generation; 2.2.2 Transformer-based Music Generation); 2.3 Disentanglement Learning (2.3.1 Unsupervised Approaches; 2.3.2 Supervised Approaches; 2.3.3 Self-supervised Approaches); 2.4 Controllable Music Generation (2.4.1 Score Generation; 2.4.2 Performance Rendering); 2.5 Summary
    Chapter 3 Translating Melody to Chord: Structured and Flexible Harmonization of Melody with Transformer: 3.1 Introduction; 3.2 Proposed Methods (3.2.1 Standard Transformer Model (STHarm); 3.2.2 Variational Transformer Model (VTHarm); 3.2.3 Regularized Variational Transformer Model (rVTHarm); 3.2.4 Training Objectives); 3.3 Experimental Settings (3.3.1 Datasets; 3.3.2 Comparative Methods; 3.3.3 Training; 3.3.4 Metrics); 3.4 Evaluation (3.4.1 Chord Coherence and Diversity; 3.4.2 Harmonic Similarity to Human; 3.4.3 Controlling Chord Complexity; 3.4.4 Subjective Evaluation; 3.4.5 Qualitative Results; 3.4.6 Ablation Study); 3.5 Conclusion and Future Work
    Chapter 4 Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-supervised Learning: 4.1 Introduction; 4.2 Proposed Methods (4.2.1 Data Representation; 4.2.2 Modeling Musical Hierarchy; 4.2.3 Overall Network Architecture; 4.2.4 Regularizing the Latent Variables; 4.2.5 Overall Objective); 4.3 Experimental Settings (4.3.1 Dataset and Implementation; 4.3.2 Comparative Methods); 4.4 Evaluation (4.4.1 Generation Quality; 4.4.2 Disentangling Latent Representations; 4.4.3 Controllability of Expressive Attributes; 4.4.4 KL Divergence; 4.4.5 Ablation Study; 4.4.6 Subjective Evaluation; 4.4.7 Qualitative Examples; 4.4.8 Extent of Control); 4.5 Conclusion
    Chapter 5 Conclusion and Future Work: 5.1 Conclusion; 5.2 Future Work (5.2.1 Deeper Investigation of Controllable Factors; 5.2.2 More Analysis of Qualitative Evaluation Results; 5.2.3 Improving Diversity and Scale of Dataset)
    Bibliography; Abstract in Korean
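
    As a rough illustration of the alignment-path idea this thesis describes, the sketch below maps fixed-rate frames of raw music data onto discrete musical units with known onset times, yielding a monotonic alignment path. The frame-based setting and all names are assumptions for illustration, not the thesis code:

        import numpy as np

        def frames_to_unit_alignment(unit_onsets_sec, num_frames, frame_rate_hz):
            """For each frame, return the index of the musical unit (e.g. a note
            or chord) sounding at that frame. Assumes each unit lasts until the
            next onset, so the path is monotonically non-decreasing."""
            frame_times = np.arange(num_frames) / frame_rate_hz
            # searchsorted counts, per frame, how many onsets have already passed.
            idx = np.searchsorted(unit_onsets_sec, frame_times, side="right") - 1
            return np.clip(idx, 0, len(unit_onsets_sec) - 1)

        onsets = [0.0, 0.5, 1.25]  # three units
        path = frames_to_unit_alignment(onsets, num_frames=16, frame_rate_hz=8)
        print(path)  # [0 0 0 0 1 1 1 1 1 1 2 2 2 2 2 2]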

    Structural Melody Generation Based on Attribute-Controllable Deep Neural Networks (νŠΉμ„± 쑰절이 κ°€λŠ₯ν•œ 심측 신경망 기반의 ꡬ쑰적 λ©œλ‘œλ”” 생성)

    Thesis (Ph.D.) -- Seoul National University, College of Engineering, Department of Industrial Engineering, August 2021. Advisor: Jonghun Park.
    This thesis aims to generate structural melodies using attribute-controllable deep neural networks. The development of music-composing artificial intelligence can inspire professional composers and reduce the difficulty of creation, and it can provide the public with music for combination with various kinds of media content. For a melody generation model to function as a composer, it must control specific desired characteristics. These characteristics include quantifiable attributes, such as pitch level and rhythm density, as well as chords, which together with melody are essential elements of modern popular (pop) music.
    First, this thesis introduces a melody generation model that separately produces rhythm and pitch conditioned on chord progressions. The quantitative evaluation results demonstrate that the melodies produced by the proposed model have a distribution more similar to the dataset than those of the baseline models. Qualitative analysis reveals repetition and variation within the generated melodies, and a subjective human listening test shows that the model produces new melodies that sound pleasant in both rhythm and pitch.
    Next, four quantifiable attributes are considered: pitch level, pitch variety, rhythm density, and rhythm variety. We improve on a previous approach that trains a variational autoencoder (VAE) and a discriminator in an adversarial manner to eliminate attribute information from the encoded latent variable; rhythm and pitch VAEs are trained separately so that pitch- and rhythm-related attributes can be controlled entirely independently. The experimental results indicate that, although the ratio of outputs falling into the intended bin is not high, the model learns the relative order of the bins.
    Finally, a hierarchical song structure generation model is proposed. A sequence-to-sequence framework is adopted to capture the similar mood between two parts of the same song. The time axis is compressed by applying attention with different lengths of query and key to model the hierarchy of music, and the concept of musical contrast is implemented by controlling attributes with relative bin information. The human evaluation results suggest that the sequence-to-sequence framework can solve the problem of generating different parts of the same song, and they show that the proposed model can create song structures with musical contrast.
    Abstract (translated from Korean): This thesis studies techniques for generating structural melodies using attribute-controllable deep neural networks. Artificial intelligence that assists composition can ease the pain of creation by giving professional composers inspiration, and it can provide the general public with the music required by the growing variety and volume of media content, expanding how music is combined with and used in other media. For composing AI to reach the level of a human composer, it must be able to control characteristics according to intent. These characteristics include not only quantifiable attributes such as pitch height and rhythm density but also chords, which together with melody are basic building blocks of music. Attribute-controllable music generation models have been proposed before, but few studies address attribute control that accounts for long-range structural features and musical contrast, the way a composer writes each section with the whole song in mind.
    This thesis first proposes a model, together with its training method, that generates rhythm and pitch separately in chord-conditioned melody generation. Quantitative evaluation shows that the proposed method yields generation results whose distribution is closer to the dataset than those of the comparison models; qualitative evaluation finds appropriate repetition and variation in the generated music and concludes that the model can generate new melodies whose pitch and rhythm both sound good to human listeners. Four quantifiable attributes are defined: pitch height, pitch variation, rhythm density, and rhythm complexity. Extending earlier work that adversarially trains a discriminator to remove attribute information from the latent variable of an attribute-controllable variational autoencoder, two models are trained separately so that pitch- and rhythm-related attributes can be controlled fully independently. After the attribute values are divided into bins containing equal amounts of data, the trained models show a low ratio of outputs falling exactly into the intended bin but a high correlation coefficient. Finally, building on the two studies above, a method is proposed for generating song structures that are musically similar yet mutually contrasting. A Transformer, which performs well in sequence-to-sequence settings, serves as the baseline, and hierarchical attention is applied to reflect the hierarchical structure of music, with an efficient way of computing relative positional embeddings. To implement musical contrast, adversarial training is conducted to control the four attributes defined above, using relative bin-comparison information rather than exact bin information. Listening test results suggest that the problem of generating different parts of the same song can be solved in a sequence-to-sequence manner and show that the proposed method can generate song structures that exhibit musical contrast.
    Table of contents:
    Chapter 1 Introduction: 1.1 Background and Motivation; 1.2 Objectives; 1.3 Thesis Outline
    Chapter 2 Literature Review: 2.1 Chord-conditioned Melody Generation; 2.2 Attention Mechanism and Transformer (2.2.1 Attention Mechanism; 2.2.2 Transformer; 2.2.3 Relative Positional Embedding; 2.2.4 Funnel-Transformer); 2.3 Attribute Controllable Music Generation
    Chapter 3 Problem Definition: 3.1 Data Representation (3.1.1 Datasets; 3.1.2 Preprocessing); 3.2 Notation and Formulas (3.2.1 Chord-conditioned Melody Generation; 3.2.2 Attribute Controllable Melody Generation; 3.2.3 Song Structure Generation; 3.2.4 Notation)
    Chapter 4 Chord-conditioned Melody Generation: 4.1 Methodology (4.1.1 Model Architecture; 4.1.2 Relative Positional Embedding); 4.2 Training and Generation (4.2.1 Two-phase Training; 4.2.2 Pitch-varied Rhythm Data; 4.2.3 Generating Melodies); 4.3 Experiments (4.3.1 Experiment Settings; 4.3.2 Baseline Models); 4.4 Evaluation Results (4.4.1 Quantitative Evaluation; 4.4.2 Qualitative Evaluation)
    Chapter 5 Attribute Controllable Melody Generation: 5.1 Attribute Definition (5.1.1 Pitch-Related Attributes; 5.1.2 Rhythm-Related Attributes); 5.2 Model Architecture; 5.3 Experiments (5.3.1 Data Preprocessing; 5.3.2 Training); 5.4 Results (5.4.1 Quantitative Results; 5.4.2 Output Examples)
    Chapter 6 Hierarchical Song Structure Generation: 6.1 Baseline; 6.2 Proposed Model (6.2.1 Relative Hierarchical Attention; 6.2.2 Model Architecture); 6.3 Experiments (6.3.1 Training and Generation; 6.3.2 Human Evaluation); 6.4 Evaluation Results (6.4.1 Control Success Ratio; 6.4.2 Human Perception Ratio; 6.4.3 Generated Samples)
    Chapter 7 Conclusion: 7.1 Summary and Contributions; 7.2 Limitations and Future Research
    Appendices: A. MGEval Results Between the Music of Different Genres; B. MGEval Results of CMT and Baseline Models; C. Samples Generated by CMT
    Bibliography; Abstract in Korean
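
    The thesis above compresses the time axis by attending with query and key sequences of different lengths. A minimal PyTorch sketch of that idea follows; the average-pooling choice for producing the shorter query sequence, and all sizes, are illustrative assumptions rather than the thesis architecture:

        import torch
        import torch.nn as nn

        class TimeAxisCompression(nn.Module):
            """Cross-attention in which a pooled (shorter) query sequence
            attends over the full-resolution key/value sequence."""
            def __init__(self, d_model=64, n_heads=4, stride=4):
                super().__init__()
                self.pool = nn.AvgPool1d(kernel_size=stride, stride=stride)
                self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

            def forward(self, x):  # x: (batch, T, d_model)
                # Queries: a coarse view of the sequence, of length T // stride.
                q = self.pool(x.transpose(1, 2)).transpose(1, 2)
                # Keys/values: the original sequence of length T, so each coarse
                # position summarizes all fine-grained positions it attends to.
                out, _ = self.attn(q, x, x)
                return out  # (batch, T // stride, d_model)

        x = torch.randn(2, 32, 64)             # batch of 2, 32 time steps
        print(TimeAxisCompression()(x).shape)  # torch.Size([2, 8, 64])

    Pooling-based query shortening of this kind is in the spirit of the Funnel-Transformer the thesis reviews in its literature survey; the exact compression mechanism used in the thesis may differ.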

    Rhythm, Chord and Melody Generation for Lead Sheets using Recurrent Neural Networks

    Music that is generated by recurrent neural networks often lacks a sense of direction and coherence. We therefore propose a two-stage LSTM-based model for lead sheet generation: the harmonic and rhythmic templates of the song are produced first, after which, in a second stage, a sequence of melody notes is generated conditioned on these templates. A subjective listening test shows that our approach outperforms the baselines and increases perceived musical coherence.
    Comment: 8 pages, 2 figures, 3 tables, 2 appendices
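
    Below is a compact sketch of the two-stage scheme described above, with stage-one templates conditioning a stage-two melody LSTM; the vocabulary sizes and the conditioning-by-concatenation scheme are assumptions for illustration, not the paper's exact model:

        import torch
        import torch.nn as nn

        class TemplateLSTM(nn.Module):
            """Stage 1: predicts the next harmonic/rhythmic template token."""
            def __init__(self, n_template=32, d=128):
                super().__init__()
                self.emb = nn.Embedding(n_template, d)
                self.lstm = nn.LSTM(d, d, batch_first=True)
                self.out = nn.Linear(d, n_template)

            def forward(self, tokens):  # tokens: (batch, T)
                h, _ = self.lstm(self.emb(tokens))
                return self.out(h)      # next-token logits

        class MelodyLSTM(nn.Module):
            """Stage 2: predicts melody notes conditioned on the template."""
            def __init__(self, n_notes=64, n_template=32, d=128):
                super().__init__()
                self.note_emb = nn.Embedding(n_notes, d)
                self.tmpl_emb = nn.Embedding(n_template, d)
                self.lstm = nn.LSTM(2 * d, d, batch_first=True)  # concat conditioning
                self.out = nn.Linear(d, n_notes)

            def forward(self, notes, template):  # both: (batch, T)
                x = torch.cat([self.note_emb(notes), self.tmpl_emb(template)], dim=-1)
                h, _ = self.lstm(x)
                return self.out(h)

        template = torch.randint(0, 32, (2, 16))
        notes = torch.randint(0, 64, (2, 16))
        print(TemplateLSTM()(template).shape)       # torch.Size([2, 16, 32])
        print(MelodyLSTM()(notes, template).shape)  # torch.Size([2, 16, 64])

    At sampling time, this sketch would run TemplateLSTM autoregressively first and then feed its output sequence into MelodyLSTM, mirroring the paper's template-then-melody ordering.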

    Emotion-Conditioned Melody Harmonization with Hierarchical Variational Autoencoder

    Existing melody harmonization models have made great progress in improving the quality of generated harmonies, but most of them ignore the emotions underlying the music, and the variability of the harmonies they generate is insufficient. To solve these problems, we propose a novel LSTM-based hierarchical variational autoencoder (LHVAE) to investigate the influence of emotional conditions on melody harmonization while improving the quality of generated harmonies and capturing the rich variability of chord progressions. Specifically, LHVAE incorporates latent variables and emotional conditions at different levels (piece and bar level) to model global and local music properties. Additionally, we introduce an attention-based melody context vector at each step to better learn the correspondence between melodies and harmonies. Objective evaluation shows that our proposed model outperforms other LSTM-based models, while subjective evaluation indicates that altering only the chords hardly changes the overall emotion of the music. A qualitative analysis demonstrates our model's ability to generate variable harmonies.
    Comment: Accepted by IEEE SMC 202
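
    The attention-based melody context vector mentioned above can be sketched as follows: at each harmony-decoding step, the decoder state attends over the encoded melody and receives a weighted summary. Dot-product scoring is an assumption here; the paper may use a different attention variant:

        import torch
        import torch.nn.functional as F

        def melody_context(dec_state, melody_enc):
            """dec_state: (batch, d); melody_enc: (batch, T_mel, d)."""
            # Score each encoded melody step against the current decoder state.
            scores = torch.bmm(melody_enc, dec_state.unsqueeze(-1)).squeeze(-1)
            weights = F.softmax(scores, dim=-1)              # (batch, T_mel)
            # Weighted sum of melody encodings: the context vector for this step.
            context = torch.bmm(weights.unsqueeze(1), melody_enc).squeeze(1)
            return context, weights

        dec_state = torch.randn(2, 128)
        melody_enc = torch.randn(2, 24, 128)  # 24 encoded melody steps
        ctx, w = melody_context(dec_state, melody_enc)
        print(ctx.shape, w.shape)  # torch.Size([2, 128]) torch.Size([2, 24])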

    Emotion-Guided Music Accompaniment Generation Based on Variational Autoencoder

    Music accompaniment generation is a crucial aspect of the composition process. Deep neural networks have made significant strides in this field, but it remains a challenge for AI to incorporate human emotions effectively when composing accompaniments, and existing models struggle to characterize human emotions within a neural network. To address this issue, we propose the use of an easy-to-represent emotion-flow model, the Valence/Arousal curve, which makes emotional information compatible with the model through data transformation, and we enhance the interpretability of emotional factors by using a variational autoencoder as the model structure. Furthermore, we use relative self-attention to maintain the structure of the music at the phrase level and, combined with rules of music theory, to generate a richer accompaniment.
    Comment: Accepted by the International Joint Conference on Neural Networks 2023 (IJCNN 2023)
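
    To illustrate the Valence/Arousal curve as an emotion-flow input, the sketch below turns a few (valence, arousal) anchor points into one emotion vector per generation step, which could then be concatenated to a VAE decoder's input. The anchor format and linear interpolation are illustrative assumptions, not the paper's exact data transformation:

        import numpy as np

        def va_curve(anchors, num_steps):
            """anchors: list of (position in [0, 1], valence, arousal) tuples,
            with positions in increasing order. Returns (num_steps, 2)."""
            pos = [a[0] for a in anchors]
            t = np.linspace(0.0, 1.0, num_steps)
            valence = np.interp(t, pos, [a[1] for a in anchors])
            arousal = np.interp(t, pos, [a[2] for a in anchors])
            return np.stack([valence, arousal], axis=-1)

        # A sad-to-happy flow over 16 steps: valence rises steadily,
        # arousal dips in the middle and then rises.
        curve = va_curve([(0.0, -0.8, 0.2), (0.5, 0.0, -0.1), (1.0, 0.9, 0.7)], 16)
        print(curve.shape)  # (16, 2)
        print(curve[:3])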