SCHUBOT: Machine Learning Tools for the Automated Analysis of Schubert's Lieder
This paper compares various methods for automated musical analysis, applying machine learning techniques to gain insight into the Lieder (art songs) of composer Franz Schubert (1797-1828). Known as a rule-breaking, individualistic, and adventurous composer, Schubert produced hundreds of emotionally charged songs that have challenged music theorists to this day. The algorithms presented in this paper analyze the harmonies, melodies, and texts of these songs. The paper begins with an exploration of the relevant music theory and machine learning algorithms (Chapter 1), alongside a general discussion of the place Schubert holds within the world of music theory. The focus then turns to automated harmonic analysis and hierarchical decomposition of MusicXML data, presenting new algorithms for phrase-based analysis in the context of past research (Chapter 2). Melodic analysis is then discussed (Chapter 3), using unsupervised clustering methods as a complement to harmonic analyses. The paper then analyzes the texts Schubert chose for his songs in the context of the songs' relevant musical features (Chapter 4), combining natural language processing with feature extraction to pinpoint trends across Schubert's career.
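For readers who want a concrete starting point, the sketch below shows the general shape of such a pipeline: parsing MusicXML with music21 and clustering simple per-song melodic features with scikit-learn. It is an illustrative sketch, not the SCHUBOT code; the corpus folder name and the four features are assumptions.

# A minimal sketch of the kind of pipeline described above: parse MusicXML,
# extract simple per-song melodic features, and cluster the songs. This is
# NOT the authors' code; paths and feature choices are illustrative.
from pathlib import Path

import numpy as np
from music21 import converter  # parses MusicXML into a Score object
from sklearn.cluster import KMeans


def melodic_features(xml_path):
    """Return a small feature vector for the top (melody) part of one song."""
    score = converter.parse(xml_path)
    notes = [n for n in score.parts[0].flatten().notes if n.isNote]
    midi = np.array([n.pitch.midi for n in notes], dtype=float)
    intervals = np.diff(midi)  # successive melodic intervals in semitones
    return np.array([
        midi.mean(),                # average pitch height
        midi.max() - midi.min(),    # melodic range
        np.abs(intervals).mean(),   # average leap size
        (intervals > 0).mean(),     # fraction of ascending motion
    ])


songs = sorted(Path("lieder_xml").glob("*.xml"))  # hypothetical corpus folder
X = np.vstack([melodic_features(p) for p in songs])
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
for path, label in zip(songs, labels):
    print(label, path.name)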
A Functional Taxonomy of Music Generation Systems
Digital advances have transformed the face of automatic music generation
since its beginnings at the dawn of computing. Despite the many breakthroughs,
issues such as the musical tasks targeted by different machines and the degree
to which they succeed remain open questions. We present a functional taxonomy
for music generation systems with reference to existing systems. The taxonomy
organizes systems according to the purposes for which they were designed. It
also reveals the inter-relatedness amongst the systems. This design-centered
approach contrasts with predominant methods-based surveys and facilitates the
identification of grand challenges to set the stage for new breakthroughs.
Comment: survey, music generation, taxonomy, functional survey, automatic composition, algorithmic composition
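As an illustration of the design-centered view the taxonomy takes, the sketch below encodes systems as records indexed by the purposes they were designed for rather than by the methods they use. The entries and category names are hypothetical placeholders, not the paper's actual taxonomy.

# Illustrative only: indexing music generation systems by design purpose.
from dataclasses import dataclass, field
from collections import defaultdict


@dataclass
class SystemEntry:
    name: str
    purposes: list                                # what it was designed to do
    methods: list = field(default_factory=list)   # techniques it happens to use


# Hypothetical entries for illustration.
systems = [
    SystemEntry("ToyHarmonizer", purposes=["harmonization"], methods=["HMM"]),
    SystemEntry("ToyImproviser", purposes=["melody generation", "interaction"],
                methods=["RNN"]),
]

# Group by purpose (the design-centered view) rather than by method.
by_purpose = defaultdict(list)
for system in systems:
    for purpose in system.purposes:
        by_purpose[purpose].append(system.name)
print(dict(by_purpose))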
Improving music genre classification using automatically induced harmony rules
We present a new genre classification framework using both low-level signal-based features and high-level harmony features. A state-of-the-art statistical genre classifier based on timbral features is extended using a first-order random forest containing, for each genre, rules derived from harmony or chord sequences. This random forest has been automatically induced, using the first-order logic induction algorithm TILDE, from a dataset in which the degree and chord category of each chord are identified, covering the classical, jazz, and pop genre classes. The audio descriptor-based genre classifier contains 206 features, covering spectral, temporal, energy, and pitch characteristics of the audio signal. The fusion of the harmony-based classifier with the extracted feature vectors is tested on three-genre subsets of the GTZAN and ISMIR04 datasets, which contain 300 and 448 recordings, respectively. Machine learning classifiers were tested using 5 × 5-fold cross-validation and feature selection. Results indicate that the proposed harmony-based rules, combined with the timbral descriptor-based genre classification system, lead to improved genre classification rates.
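The sketch below illustrates the fusion-and-evaluation setup in broad strokes: timbral descriptors concatenated with binary harmony-rule activations, evaluated under 5 × 5-fold cross-validation with scikit-learn. The arrays are random stand-ins, and a plain random forest replaces the first-order random forest induced by TILDE.

# Hedged sketch of the late-fusion idea above. Data are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n_tracks = 300
X_timbre = rng.normal(size=(n_tracks, 206))        # 206 audio descriptors, as in the paper
X_rules = rng.integers(0, 2, size=(n_tracks, 40))  # hypothetical rule firings per track
y = rng.integers(0, 3, size=n_tracks)              # three genre classes

X_fused = np.hstack([X_timbre, X_rules])  # simple early fusion of both views
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)  # 5 x 5-fold
scores = cross_val_score(RandomForestClassifier(random_state=0), X_fused, y, cv=cv)
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")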
Analysis of analysis: importance of different musical parameters for Schenkerian analysis
While criteria for Schenkerian analysis have been much discussed, such discussions have generally not been informed by data. Kirlin [Kirlin, Phillip B., 2014. "A Probabilistic Model of Hierarchical Music Analysis." Ph.D. thesis, University of Massachusetts Amherst] has begun to fill this vacuum with a corpus of textbook Schenkerian analyses encoded using data structures suggested by Yust [Yust, Jason, 2006. "Formal Models of Prolongation." Ph.D. thesis, University of Washington], and a machine learning algorithm based on this dataset that can produce analyses with a reasonable degree of accuracy. In this work, we examine which musical features (scale degree, harmony, metrical weight) are most significant for the performance of Kirlin's algorithm.
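One way to pose the paper's question computationally is a drop-one-feature ablation: retrain with each musical parameter removed and compare cross-validated accuracy. The sketch below does this with a generic classifier and synthetic data standing in for Kirlin's probabilistic model; it illustrates the experimental design only.

# Sketch of a feature-ablation study over the three musical parameters named
# above. The data and model are stand-ins, not Kirlin's corpus or algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 500
features = {
    "scale_degree": rng.integers(1, 8, size=(n, 1)).astype(float),
    "harmony": rng.integers(0, 7, size=(n, 1)).astype(float),
    "metrical_weight": rng.random((n, 1)),
}
# Synthetic target, loosely dependent on one feature so the ablation is visible.
y = (features["scale_degree"][:, 0] + rng.normal(size=n) > 4).astype(int)

full = np.hstack(list(features.values()))
base = cross_val_score(LogisticRegression(), full, y, cv=5).mean()
for name in features:
    reduced = np.hstack([v for k, v in features.items() if k != name])
    score = cross_val_score(LogisticRegression(), reduced, y, cv=5).mean()
    print(f"dropping {name}: accuracy {score:.3f} (full model {base:.3f})")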
A Study on Improving Conditional Generation of Musical Components: Focusing on Harmony and Expression
Thesis (Ph.D.) -- Seoul National University, Graduate School of Convergence Science and Technology, Digital Information Convergence Program, February 2023. Advisor: Kyogu Lee.
Conditional generation of musical components (CGMC) creates a part of music based on partial musical components such as melody or chord. CGMC is beneficial for discovering complex relationships among musical attributes. It can also assist non-experts who face difficulties in making music. However, recent studies of CGMC still face two challenges in terms of generation quality and model controllability. First, the structure of the generated music is not robust. Second, only limited ranges of musical factors and tasks have been examined as targets for flexible control of generation. In this thesis, we aim to mitigate these two challenges to improve CGMC systems. For musical structure, we focus on intuitive modeling of musical hierarchy to help the model explicitly learn musically meaningful dependency. To this end, we utilize alignment paths between the raw music data and musical units such as notes or chords. For musical creativity, we facilitate smooth control of novel musical attributes using latent representations. We attempt to achieve disentangled representations of the intended factors by regularizing them with data-driven inductive bias. This thesis verifies the proposed approaches in two representative CGMC tasks: melody harmonization and expressive performance rendering. A variety of experimental results show the potential of the proposed approaches to expand musical creativity while maintaining stable generation quality.
Chapter 1 Introduction
1.1 Motivation
1.2 Definitions
1.3 Tasks of Interest
1.3.1 Generation Quality
1.3.2 Controllability
1.4 Approaches
1.4.1 Modeling Musical Hierarchy
1.4.2 Regularizing Latent Representations
1.4.3 Target Tasks
1.5 Outline of the Thesis
Chapter 2 Background
2.1 Music Generation Tasks
2.1.1 Melody Harmonization
2.1.2 Expressive Performance Rendering
2.2 Structure-enhanced Music Generation
2.2.1 Hierarchical Music Generation
2.2.2 Transformer-based Music Generation
2.3 Disentanglement Learning
2.3.1 Unsupervised Approaches
2.3.2 Supervised Approaches
2.3.3 Self-supervised Approaches
2.4 Controllable Music Generation
2.4.1 Score Generation
2.4.2 Performance Rendering
2.5 Summary
Chapter 3 Translating Melody to Chord: Structured and Flexible Harmonization of Melody with Transformer
3.1 Introduction
3.2 Proposed Methods
3.2.1 Standard Transformer Model (STHarm)
3.2.2 Variational Transformer Model (VTHarm)
3.2.3 Regularized Variational Transformer Model (rVTHarm)
3.2.4 Training Objectives
3.3 Experimental Settings
3.3.1 Datasets
3.3.2 Comparative Methods
3.3.3 Training
3.3.4 Metrics
3.4 Evaluation
3.4.1 Chord Coherence and Diversity
3.4.2 Harmonic Similarity to Human
3.4.3 Controlling Chord Complexity
3.4.4 Subjective Evaluation
3.4.5 Qualitative Results
3.4.6 Ablation Study
3.5 Conclusion and Future Work
Chapter 4 Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-supervised Learning
4.1 Introduction
4.2 Proposed Methods
4.2.1 Data Representation
4.2.2 Modeling Musical Hierarchy
4.2.3 Overall Network Architecture
4.2.4 Regularizing the Latent Variables
4.2.5 Overall Objective
4.3 Experimental Settings
4.3.1 Dataset and Implementation
4.3.2 Comparative Methods
4.4 Evaluation
4.4.1 Generation Quality
4.4.2 Disentangling Latent Representations
4.4.3 Controllability of Expressive Attributes
4.4.4 KL Divergence
4.4.5 Ablation Study
4.4.6 Subjective Evaluation
4.4.7 Qualitative Examples
4.4.8 Extent of Control
4.5 Conclusion
Chapter 5 Conclusion and Future Work
5.1 Conclusion
5.2 Future Work
5.2.1 Deeper Investigation of Controllable Factors
5.2.2 More Analysis of Qualitative Evaluation Results
5.2.3 Improving Diversity and Scale of Dataset
Bibliography
Abstract (in Korean)
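To make the thesis' latent-regularization idea concrete, the following PyTorch sketch attaches a variational latent to a Transformer melody encoder and adds a hypothetical regularizer that ties one latent dimension to a measured attribute such as chord complexity. The architecture, shapes, and loss weights are illustrative assumptions, not the actual STHarm/VTHarm/rVTHarm models.

# Illustrative sketch only: a latent-variable melody-to-chord model with an
# attribute regularizer on one latent dimension, enabling sliding control.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LatentHarmonizer(nn.Module):
    def __init__(self, n_pitch=128, n_chord=64, d=256, z_dim=16):
        super().__init__()
        self.embed = nn.Embedding(n_pitch, d)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True), 2)
        self.to_mu = nn.Linear(d, z_dim)
        self.to_logvar = nn.Linear(d, z_dim)
        self.decoder = nn.Linear(d + z_dim, n_chord)  # per-step chord logits

    def forward(self, melody):                        # melody: (B, T) token ids
        h = self.encoder(self.embed(melody))          # (B, T, d)
        pooled = h.mean(dim=1)
        mu, logvar = self.to_mu(pooled), self.to_logvar(pooled)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        z_seq = z.unsqueeze(1).expand(-1, h.size(1), -1)
        return self.decoder(torch.cat([h, z_seq], -1)), mu, logvar


def loss_fn(logits, chords, mu, logvar, attribute, beta=0.1, gamma=1.0):
    recon = F.cross_entropy(logits.transpose(1, 2), chords)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Hypothetical attribute regularizer: push latent dim 0 to track a measured
    # attribute (e.g., chord complexity) so it can be dialed at inference time.
    attr = F.mse_loss(mu[:, 0], attribute)
    return recon + beta * kl + gamma * attr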
Melody generator: A device for algorithmic music construction
This article describes the development of an application for generating tonal melodies. The goal of the project is to ascertain our current understanding of tonal music by means of algorithmic music generation. The method followed consists of four stages: 1) selection of music-theoretical insights, 2) translation of these insights into a set of principles, 3) conversion of the principles into a computational model having the form of an algorithm for music generation, and 4) testing the "music" generated by the algorithm to evaluate the adequacy of the model. As an example, the method is implemented in Melody Generator, an algorithm for generating tonal melodies. The program has a structure suited to generating, displaying, playing, and storing melodies, functions which are all accessible via a dedicated interface. The actual generation of melodies is based in part on constraints imposed by the tonal context, i.e., by meter and key, the settings of which are controlled by means of parameters on the interface. It is also based on a set of construction principles, including the notion of a hierarchical organization and the idea that melodies consist of a skeleton that may be elaborated in various ways. After these aspects were implemented as specific sub-algorithms, the device produces simple but well-structured tonal melodies.
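The skeleton-and-elaboration idea can be made concrete in a few lines: choose chord tones on downbeats as anchors, then connect them with stepwise scale motion on the weak beats. The sketch below is a toy under assumed choices (C major, a tonic-triad anchor set, four beats per bar), not the Melody Generator program itself.

# Toy sketch of skeleton-plus-elaboration melody construction.
import random

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]   # MIDI pitches, one octave
TRIAD = [60, 64, 67, 72]                     # tonic chord tones as anchors


def skeleton(n_bars):
    """One anchor (chord tone) per bar, placed on the downbeat."""
    return [random.choice(TRIAD) for _ in range(n_bars)]


def elaborate(anchors, beats_per_bar=4):
    """Connect consecutive anchors with scale steps on the weak beats."""
    melody = []
    for cur, nxt in zip(anchors, anchors[1:] + [anchors[0]]):
        melody.append(cur)
        i, j = C_MAJOR.index(cur), C_MAJOR.index(nxt)
        step = 1 if j >= i else -1
        path = C_MAJOR[i:j:step] or [cur]     # stepwise path toward next anchor
        for beat in range(1, beats_per_bar):  # fill the remaining beats
            melody.append(path[min(beat, len(path) - 1)])
    return melody


random.seed(0)
print(elaborate(skeleton(4)))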