657 research outputs found

    Deep Learning Techniques for Music Generation -- A Survey

    Full text link
    This paper is a survey and an analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content. We propose a methodology based on five dimensions for our analysis:
    Objective - What musical content is to be generated (e.g., melody, polyphony, accompaniment or counterpoint)? For what destination and for what use - to be performed by a human (in the case of a musical score) or by a machine (in the case of an audio file)?
    Representation - What are the concepts to be manipulated (e.g., waveform, spectrogram, note, chord, meter and beat)? What format is to be used (e.g., MIDI, piano roll or text)? How will the representation be encoded (e.g., scalar, one-hot or many-hot)?
    Architecture - What type(s) of deep neural network are to be used (e.g., feedforward network, recurrent network, autoencoder or generative adversarial network)?
    Challenge - What are the limitations and open challenges (e.g., variability, interactivity and creativity)?
    Strategy - How do we model and control the process of generation (e.g., single-step feedforward, iterative feedforward, sampling or input manipulation)?
    For each dimension, we conduct a comparative analysis of various models and techniques, and we propose a tentative multidimensional typology. This typology is bottom-up, based on the analysis of many existing deep-learning-based systems for music generation selected from the relevant literature. These systems are described and used to exemplify the various choices of objective, representation, architecture, challenge and strategy. The last section includes some discussion and prospects. Comment: 209 pages. This paper is a simplified version of the book: J.-P. Briot, G. Hadjeres and F.-D. Pachet, Deep Learning Techniques for Music Generation, Computational Synthesis and Creative Systems, Springer, 201
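    As a concrete illustration of the encoding choices the survey lists (scalar, one-hot, many-hot), the minimal sketch below encodes a monophonic time step versus a polyphonic (chord) time step of a piano roll. The 128-pitch range and the helper names are assumptions made here for illustration, not taken from the survey.

```python
import numpy as np

NUM_PITCHES = 128  # MIDI pitch range, assumed for illustration

def one_hot(pitch: int) -> np.ndarray:
    """Encode a single pitch (monophonic time step) as a one-hot vector."""
    v = np.zeros(NUM_PITCHES, dtype=np.int8)
    v[pitch] = 1
    return v

def many_hot(pitches: list[int]) -> np.ndarray:
    """Encode a chord (polyphonic time step) as a many-hot vector."""
    v = np.zeros(NUM_PITCHES, dtype=np.int8)
    v[pitches] = 1
    return v

# Example: a single C4 note vs. a C-major triad at one time step
melody_step = one_hot(60)            # exactly one active unit
chord_step = many_hot([60, 64, 67])  # several active units
```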

    AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining

    Full text link
    Although audio generation shares commonalities across different types of audio, such as speech, music, and sound effects, designing models for each type requires careful consideration of specific objectives and biases that can significantly differ from those of other types. To bring us closer to a unified perspective of audio generation, this paper proposes a framework that utilizes the same learning method for speech, music, and sound effect generation. Our framework introduces a general representation of audio, called "language of audio" (LOA). Any audio can be translated into LOA based on AudioMAE, a self-supervised pre-trained representation learning model. In the generation process, we translate any modalities into LOA by using a GPT-2 model, and we perform self-supervised audio generation learning with a latent diffusion model conditioned on LOA. The proposed framework naturally brings advantages such as in-context learning abilities and reusable self-supervised pretrained AudioMAE and latent diffusion models. Experiments on the major benchmarks of text-to-audio, text-to-music, and text-to-speech demonstrate state-of-the-art or competitive performance against previous approaches. Our code, pretrained model, and demo are available at https://audioldm.github.io/audioldm2. Comment: AudioLDM 2 project page is https://audioldm.github.io/audioldm
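    To make the described data flow concrete, here is a shape-level sketch of the two-stage idea: audio is mapped to LOA features in a self-supervised way, while at generation time a conditioning sequence is first translated into LOA and then drives the latent diffusion model. Every module and size below is an untrained, arbitrary stand-in, not the released AudioLDM 2 components; only the data flow is meant to be illustrative.

```python
import torch
import torch.nn as nn

B, T_COND, T_LAT = 2, 10, 64                 # batch size and sequence lengths (arbitrary)
D_COND, D_LOA, D_LAT = 32, 48, 8             # feature sizes (arbitrary)

audiomae = nn.Linear(128, D_LOA)                        # stand-in for AudioMAE: audio frames -> LOA
translator = nn.GRU(D_COND, D_LOA, batch_first=True)    # stand-in for the GPT-2 translator
denoiser = nn.Linear(D_LAT + D_LOA, D_LAT)              # stand-in for the LOA-conditioned denoiser

# Self-supervised training signal: the LOA representation comes from the audio itself.
audio_frames = torch.randn(B, T_LAT, 128)    # e.g. spectrogram patches of the target audio
loa = audiomae(audio_frames)                 # "language of audio" features, (B, T_LAT, D_LOA)

# One illustrative denoising step conditioned on LOA (training-time view).
audio_latent = torch.randn(B, T_LAT, D_LAT)
noisy = audio_latent + 0.1 * torch.randn_like(audio_latent)
pred = denoiser(torch.cat([noisy, loa], dim=-1))
loss = nn.functional.mse_loss(pred, audio_latent)

# Generation-time view: another modality (text, phonemes, ...) is first translated
# into LOA by the language model, and that predicted LOA drives the denoiser.
cond_tokens = torch.randn(B, T_COND, D_COND)
loa_pred, _ = translator(cond_tokens)        # (B, T_COND, D_LOA)
pred_gen = denoiser(torch.cat([torch.randn(B, T_COND, D_LAT), loa_pred], dim=-1))
```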

    Structural Melody Generation Based on Attribute-Controllable Deep Neural Networks

    Get PDF
    Doctoral dissertation (PhD) -- Seoul National University Graduate School: Department of Industrial Engineering, College of Engineering, 2021.8. 박종헌.
    This thesis aims to generate structural melodies using attribute-controllable deep neural networks. The development of music-composing artificial intelligence can inspire professional composers, reduce the difficulty of creation, and provide the public with music that can be combined with and used in various media content. For a melody generation model to function as a composer, it must control specific desired characteristics. The characteristics include quantifiable attributes, such as pitch level and rhythm density, and chords, which, along with melodies, are essential elements of modern popular (pop) music. First, this thesis introduces a melody generation model that separately produces rhythm and pitch conditioned on chord progressions. The quantitative evaluation results demonstrate that the melodies produced by the proposed model have a distribution more similar to the dataset than those of other baseline models. Qualitative analysis reveals the presence of repetition and variation within the generated melodies. Using a subjective human listening test, we conclude that the model successfully produces new melodies that sound pleasant in rhythm and pitch. Four quantifiable attributes are considered: pitch level, pitch variety, rhythm density, and rhythm variety. We improve on a previous study that trains a variational autoencoder (VAE) and a discriminator in an adversarial manner to eliminate attribute information from the encoded latent variable. Rhythm and pitch VAEs are trained separately so that pitch- and rhythm-related attributes can be controlled entirely independently. The experimental results indicate that although the ratio of outputs falling in the intended bin is not high, the model learns the relative order of the bins. Finally, a hierarchical song structure generation model is proposed. A sequence-to-sequence framework is adopted to capture the similar mood between two parts of the same song. The time axis is compressed by applying attention with different lengths of query and key to model the hierarchy of music. The concept of musical contrast is implemented by controlling attributes with relative bin information. The human evaluation results suggest the possibility of solving the problem of generating different structures of the same song with the sequence-to-sequence framework and reveal that the proposed model can create song structures with musical contrasts.
    Korean abstract (translated): This thesis studies techniques for generating structural melodies using attribute-controllable deep neural networks. The development of artificial intelligence that assists composition can inspire professional composers and ease the burden of creation, and, as the variety and volume of media content keep growing, it can provide the general public with the music they need, increasing the combination and use of music with other media. For composing AI to reach the level of a human composer, it must be able to compose while controlling characteristics according to intent. These characteristics include not only quantifiable attributes such as pitch level and rhythm density but also chords, which, together with melody, are basic building blocks of music. Although attribute-controllable music generation models have been proposed before, few studies have addressed attribute control that accounts for long-range structural features and musical contrast, the way a composer writes each part with the structure of the whole piece in mind. This thesis first proposes a model, and its training method, that generates rhythm and pitch separately for chord-conditioned melody generation. Quantitative evaluation shows that the outputs of the proposed method follow a distribution more similar to the dataset than those of the baseline models. Qualitative evaluation confirms appropriate repetition and variation in the generated music, and we conclude that the model can create new melodies that sound pleasant in both pitch and rhythm. Four quantifiable attributes are defined: pitch level, pitch variety, rhythm density, and rhythm complexity. To train an attribute-controllable variational autoencoder, we extend previous work that adversarially trains a discriminator to remove attribute information from the latent variable, and we train two separate models so that pitch- and rhythm-related attributes can be controlled fully independently. After dividing the attribute values into bins containing equal amounts of data and training on them, the ratio of outputs falling exactly into the intended bin is not high, but the correlation coefficient is high. Finally, building on the results of the two preceding studies, a method for generating song structures that are musically similar yet contrasting is proposed. The Transformer, which performs well in sequence-to-sequence settings, is taken as the baseline, and the attention mechanism is applied. Hierarchical attention is applied to reflect the hierarchical structure of music, and an efficient way of computing relative positional embeddings is presented. To implement musical contrast, adversarial training is conducted to control the four attributes defined earlier, using relative bin comparisons rather than exact bin values. Listening test results suggest the possibility of solving the problem of generating different sections of the same song with a sequence-to-sequence approach and show that the proposed method can generate song structures exhibiting musical contrast.
    Table of Contents:
    Chapter 1 Introduction: 1.1 Background and Motivation; 1.2 Objectives; 1.3 Thesis Outline.
    Chapter 2 Literature Review: 2.1 Chord-conditioned Melody Generation; 2.2 Attention Mechanism and Transformer (2.2.1 Attention Mechanism, 2.2.2 Transformer, 2.2.3 Relative Positional Embedding, 2.2.4 Funnel-Transformer); 2.3 Attribute Controllable Music Generation.
    Chapter 3 Problem Definition: 3.1 Data Representation (3.1.1 Datasets, 3.1.2 Preprocessing); 3.2 Notation and Formulas (3.2.1 Chord-conditioned Melody Generation, 3.2.2 Attribute Controllable Melody Generation, 3.2.3 Song Structure Generation, 3.2.4 Notation).
    Chapter 4 Chord-conditioned Melody Generation: 4.1 Methodology (4.1.1 Model Architecture, 4.1.2 Relative Positional Embedding); 4.2 Training and Generation (4.2.1 Two-phase Training, 4.2.2 Pitch-varied Rhythm Data, 4.2.3 Generating Melodies); 4.3 Experiments (4.3.1 Experiment Settings, 4.3.2 Baseline Models); 4.4 Evaluation Results (4.4.1 Quantitative Evaluation, 4.4.2 Qualitative Evaluation).
    Chapter 5 Attribute Controllable Melody Generation: 5.1 Attribute Definition (5.1.1 Pitch-Related Attributes, 5.1.2 Rhythm-Related Attributes); 5.2 Model Architecture; 5.3 Experiments (5.3.1 Data Preprocessing, 5.3.2 Training); 5.4 Results (5.4.1 Quantitative Results, 5.4.2 Output Examples).
    Chapter 6 Hierarchical Song Structure Generation: 6.1 Baseline; 6.2 Proposed Model (6.2.1 Relative Hierarchical Attention, 6.2.2 Model Architecture); 6.3 Experiments (6.3.1 Training and Generation, 6.3.2 Human Evaluation); 6.4 Evaluation Results (6.4.1 Control Success Ratio, 6.4.2 Human Perception Ratio, 6.4.3 Generated Samples).
    Chapter 7 Conclusion: 7.1 Summary and Contributions; 7.2 Limitations and Future Research.
    Appendices: A. MGEval Results Between the Music of Different Genres; B. MGEval Results of CMT and Baseline Models; C. Samples Generated by CMT.
    Bibliography; Abstract (in Korean).
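    The four quantifiable attributes are only named in the abstract; the sketch below computes plausible stand-in definitions (mean pitch, ratio of distinct pitches, notes per beat, ratio of distinct durations) for a toy melody. The exact formulations used in the thesis may differ.

```python
from dataclasses import dataclass

@dataclass
class Note:
    pitch: int       # MIDI pitch number
    start: float     # onset time in beats
    duration: float  # length in beats

# Assumed, illustrative definitions of the four controllable attributes named
# in the abstract (pitch level, pitch variety, rhythm density, rhythm variety).
def pitch_level(notes):
    return sum(n.pitch for n in notes) / len(notes)

def pitch_variety(notes):
    return len({n.pitch for n in notes}) / len(notes)

def rhythm_density(notes, total_beats):
    return len(notes) / total_beats          # notes per beat

def rhythm_variety(notes):
    return len({round(n.duration, 3) for n in notes}) / len(notes)

melody = [Note(60, 0, 1), Note(62, 1, 0.5), Note(64, 1.5, 0.5), Note(67, 2, 2)]
print(pitch_level(melody), pitch_variety(melody),
      rhythm_density(melody, total_beats=4), rhythm_variety(melody))
```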

    A Study on Improving Conditional Generation of Musical Components: Focusing on Chords and Expression

    Get PDF
    Doctoral dissertation (PhD) -- Seoul National University Graduate School: Graduate School of Convergence Science and Technology, Department of Convergence Science (Digital Information Convergence), 2023.2. 이교구.
    Conditional generation of musical components (CGMC) creates a part of music based on partial musical components such as melody or chords. CGMC is beneficial for discovering complex relationships among musical attributes. It can also assist non-experts who face difficulties in making music. However, recent CGMC studies still face two challenges in terms of generation quality and model controllability. First, the structure of the generated music is not robust. Second, only limited ranges of musical factors and tasks have been examined as targets for flexible control of generation. In this thesis, we aim to mitigate these two challenges to improve CGMC systems. For musical structure, we focus on intuitive modeling of musical hierarchy to help the model explicitly learn musically meaningful dependencies. To this end, we utilize alignment paths between the raw music data and musical units such as notes or chords. For musical creativity, we facilitate smooth control of novel musical attributes using latent representations. We attempt to achieve disentangled representations of the intended factors by regularizing them with data-driven inductive bias. This thesis verifies the proposed approaches on two representative CGMC tasks: melody harmonization and expressive performance rendering. A variety of experimental results show that the proposed approaches can expand musical creativity while maintaining stable generation quality.
    Korean abstract (translated): Conditional generation of musical components (CGMC) aims to generate the remaining parts of a piece based on partial musical components such as melody or chords. The field is useful for exploring complex relationships among musical components and can help non-experts who have difficulty making music. Recent studies have improved the performance of CGMC systems using deep learning. However, these studies still have two limitations in terms of generation quality and controllability. First, the musical structure of the generated music is not clear. Second, only a narrow range of musical factors and tasks has been explored as targets for flexible control. This thesis therefore aims to resolve these two limitations to improve CGMC. First, we focus on intuitively modeling the musical hierarchy that forms musical structure. Alignment paths between the raw data and musical units such as notes and chords are used so that the model can explicitly learn musically meaningful dependencies. Second, we use latent representations to flexibly control novel musical factors. In particular, to train the latent representations to be disentangled with respect to the intended factors, we constrain them using unsupervised or self-supervised learning frameworks. These two approaches are validated on two representative CGMC tasks: melody harmonization and expressive performance rendering. A variety of experimental results suggest that the proposed approaches can expand the musical creativity of CGMC systems while maintaining stable generation quality.
    Table of Contents:
    Chapter 1 Introduction: 1.1 Motivation; 1.2 Definitions; 1.3 Tasks of Interest (1.3.1 Generation Quality, 1.3.2 Controllability); 1.4 Approaches (1.4.1 Modeling Musical Hierarchy, 1.4.2 Regularizing Latent Representations, 1.4.3 Target Tasks); 1.5 Outline of the Thesis.
    Chapter 2 Background: 2.1 Music Generation Tasks (2.1.1 Melody Harmonization, 2.1.2 Expressive Performance Rendering); 2.2 Structure-enhanced Music Generation (2.2.1 Hierarchical Music Generation, 2.2.2 Transformer-based Music Generation); 2.3 Disentanglement Learning (2.3.1 Unsupervised Approaches, 2.3.2 Supervised Approaches, 2.3.3 Self-supervised Approaches); 2.4 Controllable Music Generation (2.4.1 Score Generation, 2.4.2 Performance Rendering); 2.5 Summary.
    Chapter 3 Translating Melody to Chord: Structured and Flexible Harmonization of Melody with Transformer: 3.1 Introduction; 3.2 Proposed Methods (3.2.1 Standard Transformer Model (STHarm), 3.2.2 Variational Transformer Model (VTHarm), 3.2.3 Regularized Variational Transformer Model (rVTHarm), 3.2.4 Training Objectives); 3.3 Experimental Settings (3.3.1 Datasets, 3.3.2 Comparative Methods, 3.3.3 Training, 3.3.4 Metrics); 3.4 Evaluation (3.4.1 Chord Coherence and Diversity, 3.4.2 Harmonic Similarity to Human, 3.4.3 Controlling Chord Complexity, 3.4.4 Subjective Evaluation, 3.4.5 Qualitative Results, 3.4.6 Ablation Study); 3.5 Conclusion and Future Work.
    Chapter 4 Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-supervised Learning: 4.1 Introduction; 4.2 Proposed Methods (4.2.1 Data Representation, 4.2.2 Modeling Musical Hierarchy, 4.2.3 Overall Network Architecture, 4.2.4 Regularizing the Latent Variables, 4.2.5 Overall Objective); 4.3 Experimental Settings (4.3.1 Dataset and Implementation, 4.3.2 Comparative Methods); 4.4 Evaluation (4.4.1 Generation Quality, 4.4.2 Disentangling Latent Representations, 4.4.3 Controllability of Expressive Attributes, 4.4.4 KL Divergence, 4.4.5 Ablation Study, 4.4.6 Subjective Evaluation, 4.4.7 Qualitative Examples, 4.4.8 Extent of Control); 4.5 Conclusion.
    Chapter 5 Conclusion and Future Work: 5.1 Conclusion; 5.2 Future Work (5.2.1 Deeper Investigation of Controllable Factors, 5.2.2 More Analysis of Qualitative Evaluation Results, 5.2.3 Improving Diversity and Scale of Dataset).
    Bibliography; Abstract (in Korean).
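    The abstract mentions regularizing latent representations with a data-driven inductive bias so that intended factors (e.g., chord complexity in rVTHarm) can be controlled. One generic way to impose such a bias, not necessarily the exact objective used in the thesis, is an attribute-regularization loss that encourages a chosen latent dimension to order samples the same way as the attribute:

```python
import torch

def attribute_regularization_loss(z_dim: torch.Tensor, attribute: torch.Tensor,
                                  gamma: float = 1.0) -> torch.Tensor:
    """Encourage one latent dimension to order samples like an attribute.
    z_dim, attribute: shape (batch,). Sign-agreement formulation; this is a
    generic technique, not necessarily the thesis's exact objective."""
    dz = z_dim.unsqueeze(0) - z_dim.unsqueeze(1)          # pairwise latent differences
    da = attribute.unsqueeze(0) - attribute.unsqueeze(1)  # pairwise attribute differences
    # Penalize pairs whose latent ordering disagrees with the attribute ordering.
    return torch.nn.functional.l1_loss(torch.tanh(gamma * dz), torch.sign(da))

# Usage inside a VAE training step (hypothetical tensors):
z = torch.randn(8, 16, requires_grad=True)   # latent codes for a batch
chord_complexity = torch.rand(8)             # attribute values for the batch
reg = attribute_regularization_loss(z[:, 0], chord_complexity)
reg.backward()
```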

    Polyphonic music generation using neural networks

    Get PDF
    In this project, the application of generative models to polyphonic music generation is investigated. Polyphonic music generation falls within algorithmic composition, a field that aims to develop models that automate, partially or completely, the composition of musical pieces. This process is challenging both in generating musical pieces that are enjoyable and in performing a robust evaluation of the model to guide improvements. An extensive survey of the development of the field and the state of the art is carried out. From this, two distinct generative models were chosen for the problem of polyphonic music generation: the Restricted Boltzmann Machine and the Generative Adversarial Network (GAN). In particular, for the GAN, two architectures were used, the Deep Convolutional GAN and the Wasserstein GAN with gradient penalty. To train these models, a dataset containing over 9000 samples of classical musical pieces was used. Using a piano-roll representation, the pieces were converted into binary 2D arrays in which the vertical dimension corresponds to pitch, the horizontal dimension represents time, and note events are represented by active units. The first 16 seconds of each piece were extracted and used for training after data cleansing and preprocessing. Using implementations of these models, samples of musical pieces were generated. Based on listening tests performed by participants, the Deep Convolutional GAN achieved the best scores, with its compositions rated on average 4.80 on a 1-5 scale of how enjoyable the pieces were. For a more objective evaluation, musical features describing rhythmic and melodic characteristics were extracted from the generated pieces and compared against the training dataset. These features included an implementation of the Krumhansl-Schmuckler algorithm for musical key detection and the average information rate, used as an estimator of long-term musical structure. Within each set of generated samples, pairwise Euclidean distances between feature values were computed, as were distances between each generated set and the training data, resulting in two sets of distances, the intra-set and inter-set distances. Using kernel density estimation, the probability density functions of these distances were obtained. Finally, the Kullback-Leibler divergence between the intra-set and inter-set distance distributions of each feature was calculated for each generative model; a lower divergence indicates that the distributions are more similar. On average, the Restricted Boltzmann Machine obtained the lowest Kullback-Leibler divergences.
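    The evaluation procedure described above (intra-set and inter-set Euclidean distances over extracted features, kernel density estimation, then Kullback-Leibler divergence) can be sketched as follows. The feature matrices are random placeholders, and the grid-based KL estimate is one simple choice among several.

```python
import numpy as np
from scipy.stats import gaussian_kde, entropy

def pairwise_distances(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Euclidean distances between all pairs of rows of a and b."""
    return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1).ravel()

def kl_between_distance_sets(intra: np.ndarray, inter: np.ndarray,
                             grid_size: int = 200) -> float:
    """Fit a KDE to each distance distribution on a common grid, then KL(intra || inter)."""
    grid = np.linspace(0, max(intra.max(), inter.max()), grid_size)
    p = gaussian_kde(intra)(grid) + 1e-12
    q = gaussian_kde(inter)(grid) + 1e-12
    return float(entropy(p, q))   # scipy normalizes p and q internally

# Hypothetical feature matrices: one row per piece, one column per extracted feature.
rng = np.random.default_rng(0)
generated = rng.normal(size=(50, 4))
training = rng.normal(size=(80, 4))

intra = pairwise_distances(generated, generated)  # intra-set (self-distances included for simplicity)
inter = pairwise_distances(generated, training)   # inter-set distances
print(kl_between_distance_sets(intra, inter))
```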

    Proceedings of the 7th Sound and Music Computing Conference

    Get PDF
    Proceedings of the SMC2010 - 7th Sound and Music Computing Conference, July 21st - July 24th 2010

    Latent variable methods for visualization through time

    Get PDF

    Toward Interactive Music Generation: A Position Paper

    Get PDF
    Music generation using deep learning has received considerable attention in recent years. Researchers have developed various generative models capable of imitating musical conventions, comprehending musical corpora, and generating new samples based on what they have learned. Although the samples generated by these models are convincing, they often lack musical structure and creativity. For instance, a vanilla end-to-end approach, which deals with all levels of music representation at once, does not offer human-level control and interaction during the learning process, leading to constrained results. Indeed, music creation is an iterative process in which a musician follows certain principles, reusing or adapting various musical features. Moreover, a musical piece adheres to a musical style, which breaks down into the distinct concepts of timbre style, performance style, and composition style, together with the coherence between these aspects. Here, we study and analyze current advances in music generation using deep learning models through different criteria. We discuss the shortcomings and limitations of these models regarding interactivity and adaptability. Finally, we outline potential future research directions addressing multi-agent systems and reinforcement learning algorithms to alleviate these shortcomings and limitations.

    Automatic transcription of polyphonic music exploiting temporal evolution

    Get PDF
    Automatic music transcription is the process of converting an audio recording into a symbolic representation using musical notation. It has numerous applications in music information retrieval, computational musicology, and the creation of interactive systems. Even for expert musicians, transcribing polyphonic pieces of music is not a trivial task, and while the problem of automatic pitch estimation for monophonic signals is considered solved, the creation of an automated system able to transcribe polyphonic music without restrictions on the degree of polyphony or the instrument type remains open. In this thesis, research on automatic transcription is performed by explicitly incorporating information on the temporal evolution of sounds. First efforts address the problem by focusing on signal processing techniques and by proposing audio features that exploit temporal characteristics. Techniques for note onset and offset detection are also utilised to improve transcription performance. Subsequent approaches propose transcription models based on shift-invariant probabilistic latent component analysis (SI-PLCA), modeling the temporal evolution of notes in a multiple-instrument setting and supporting frequency modulations in produced notes. Datasets and annotations for transcription research have also been created during this work. The proposed systems have been evaluated both privately and publicly within the Music Information Retrieval Evaluation eXchange (MIREX) framework and have been shown to outperform several state-of-the-art transcription approaches. The developed techniques have also been employed for other tasks in music technology, such as key modulation detection, temperament estimation, and automatic piano tutoring. Finally, the proposed music transcription models have also been utilised in a wider context, namely for modeling acoustic scenes.
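    As an illustration of the kind of temporal cue the thesis exploits, a standard spectral-flux onset detector can be run with librosa; this is a generic baseline technique, not the specific method proposed in the thesis.

```python
import librosa
import numpy as np

# Generic spectral-flux onset detection on an example recording.
y, sr = librosa.load(librosa.ex('trumpet'))            # any mono audio signal
onset_env = librosa.onset.onset_strength(y=y, sr=sr)   # spectral flux per frame
onset_frames = librosa.onset.onset_detect(onset_envelope=onset_env, sr=sr)
onset_times = librosa.frames_to_time(onset_frames, sr=sr)
print(np.round(onset_times, 2))                        # detected note-onset times in seconds
```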

    A Survey of AI Music Generation Tools and Models

    Full text link
    In this work, we provide a comprehensive survey of AI music generation tools, including both research projects and commercialized applications. To conduct our analysis, we classified music generation approaches into three categories: parameter-based, text-based, and visual-based. Our survey highlights the diverse possibilities and functional features of these tools, which cater to a wide range of users, from regular listeners to professional musicians. We observed that each tool has its own set of advantages and limitations, and we have compiled a comprehensive list of the factors that should be considered during the tool selection process. Moreover, our survey offers critical insights into the underlying mechanisms and challenges of AI music generation.