Perceptual and automated estimates of infringement in 40 music copyright cases
Music copyright infringement lawsuits implicate millions of dollars in damages and costs of litigation. There are, however, few objective measures by which to evaluate these claims. Recent music information retrieval research has proposed objective algorithms to automatically detect musical similarity, which might reduce subjectivity in music copyright infringement decisions, but there remains minimal relevant perceptual data despite its crucial role in copyright law. We collected perceptual data from 51 participants for 40 adjudicated copyright cases from 1915–2018 in 7 legal jurisdictions (USA, UK, Australia, New Zealand, Japan, People's Republic of China, and Taiwan). Each case was represented by three different versions: either full audio, melody only (MIDI), or lyrics only (text). Due to the historical emphasis in legal opinions on melody as the key criterion for deciding infringement, we originally predicted that listening to melody-only versions would result in perceptual judgments that more closely matched actual past legal decisions. However, as in our preliminary study of 17 court decisions (Yuan et al., 2020), our results did not match these predictions. Participants listening to full audio outperformed not only the melody-only condition, but also automated algorithms designed to calculate musical similarity (with maximal accuracy of 83% vs. 75%, respectively). Meanwhile, lyrics-only conditions performed at chance levels. Analysis of outlier cases suggests that music, lyrics, and contextual factors can interact in complex ways difficult to capture using quantitative metrics. We propose directions for further investigation, including using larger and more diverse samples of cases, enhanced methods, and adapting our perceptual experiment method to avoid relying on ground truth data only from court decisions (which may be subject to errors and selection bias). Our results contribute data and methods to inform practical debates relevant to music copyright law throughout the world, such as the question of whether, and the extent to which, judges and jurors should be allowed to hear published sound recordings of the disputed works in determining musical similarity. Our results ultimately suggest that while automated algorithms are unlikely to replace human judgments, they may help to supplement them.
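The abstract does not specify which automated similarity algorithms were evaluated; purely as a hedged illustration of what such an algorithm can look like, the sketch below scores two melodies with a normalized edit distance over their pitch-interval sequences. This is a generic metric chosen for illustration, not the algorithms used in the study.

```python
# Generic example of an automated melodic-similarity metric:
# a normalized edit distance over pitch-interval sequences.

def intervals(pitches):
    """Convert MIDI pitches to successive intervals (transposition-invariant)."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def edit_distance(a, b):
    """Levenshtein distance via single-row dynamic programming."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (x != y))  # substitution
    return dp[-1]

def melodic_similarity(pitches_a, pitches_b):
    """Similarity in [0, 1]; 1.0 means identical interval sequences."""
    a, b = intervals(pitches_a), intervals(pitches_b)
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

# Example: two melodies that differ by one passing note.
print(melodic_similarity([60, 62, 64, 65, 67], [60, 62, 64, 67]))
```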
09051 Abstracts Collection -- Knowledge representation for intelligent music processing
From the twenty-fifth to the thirtieth of January, 2009, Dagstuhl Seminar 09051 on "Knowledge representation for intelligent music processing" was held in Schloss Dagstuhl -- Leibniz Centre for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations and demos given during the seminar, as well as plenary presentations, reports of workshop discussions, results, and ideas, are put together in this paper. The first section describes the seminar topics and goals in general, followed by plenary 'stimulus' papers, then reports and abstracts arranged by workshop, and finally some concluding materials looking back at the seminar itself and forward to the longer-term goals of the discipline. Links to extended abstracts, full papers, and supporting materials are provided where available. The organisers thank David Lewis for editing these proceedings.
Scalable cover song identification based on melody indexing
In this work, we describe an efficient method for cover song identification, focusing on pop and rock genres. The proposed procedure is based on the observation that a pop/rock song usually has a main melody or an easily recognizable theme. This theme or melody is usually present in every version of the original song, even when the cover differs substantially from the original. This means that if we can identify the melody in each song, we can also identify the original song.
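The abstract does not detail the indexing scheme; the following sketch only illustrates the general idea under simple assumptions, and is not necessarily the authors' exact method: each song's extracted main melody is reduced to transposition-invariant pitch-interval n-grams stored in an inverted index, and a query melody is matched by counting shared n-grams.

```python
# Illustrative sketch of melody indexing for cover song identification,
# assuming each song's main melody is already extracted as a list of MIDI pitches.
from collections import defaultdict

N = 4  # n-gram length over pitch intervals

def interval_ngrams(pitches, n=N):
    ivs = [b - a for a, b in zip(pitches, pitches[1:])]  # transposition-invariant
    return {tuple(ivs[i:i + n]) for i in range(len(ivs) - n + 1)}

class MelodyIndex:
    def __init__(self):
        self.postings = defaultdict(set)  # n-gram -> set of song ids

    def add(self, song_id, melody):
        for g in interval_ngrams(melody):
            self.postings[g].add(song_id)

    def query(self, melody):
        votes = defaultdict(int)
        for g in interval_ngrams(melody):
            for song_id in self.postings[g]:
                votes[song_id] += 1
        # rank candidate originals by the number of shared n-grams
        return sorted(votes.items(), key=lambda kv: -kv[1])

index = MelodyIndex()
index.add("original_song", [60, 62, 64, 65, 67, 65, 64, 62, 60])
index.add("other_song", [55, 57, 55, 60, 62, 60, 57, 55])
print(index.query([62, 64, 66, 67, 69, 67, 66, 64, 62]))  # transposed cover matches original_song
```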
Deep neural network-based automatic music lead sheet transcription and melody similarity assessment
Thesis (Ph.D.) -- Seoul National University Graduate School: Department of Industrial Engineering, College of Engineering, February 2023.
Since the composition, arrangement, and distribution of music became convenient thanks to the digitization of the music industry, the number of newly released music recordings is increasing. Recently, with platform environments in place whereby anyone can become a creator, user-created music such as original songs, cover songs, and remixes is being distributed through YouTube and TikTok. With such a large volume of musical recordings, the demand among musicians to transcribe music into sheet music has always existed.
However, transcription requires musical knowledge and is time-consuming.
This thesis studies automatic lead sheet transcription using deep neural networks. The development of transcription artificial intelligence (AI) can greatly reduce the time and cost for people in the music industry to find or transcribe sheet music. In addition, since audio recordings can then be converted into digital score form, the applications could be expanded to areas such as music plagiarism detection and music composition AI.
The thesis first proposes a model recognizing chords from audio signals. Chord recognition is an important task in music information retrieval since chords are highly abstract and descriptive features of music. We utilize a self-attention mechanism for chord recognition to focus on certain regions of chords. Through an attention map analysis, we visualize how attention is performed. It turns out that the model is able to divide segments of chords by utilizing the adaptive receptive field of the attention mechanism.
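As a rough, hypothetical illustration of this kind of design (placeholder feature dimensions, chord vocabulary, and layer counts, not the thesis's architecture), a frame-wise chord recognizer can be assembled from a standard transformer encoder over spectrogram-like features with a per-frame classification head; the encoder's attention weights are what an attention-map analysis would visualize.

```python
# Hedged sketch: a bi-directional (non-causal) transformer encoder that labels
# every audio frame with a chord class. All sizes below are placeholders.
import torch
import torch.nn as nn

class ChordTransformer(nn.Module):
    def __init__(self, n_features=144, n_chords=25, d_model=128, n_heads=4, n_layers=4):
        super().__init__()
        self.input_proj = nn.Linear(n_features, d_model)
        self.pos_emb = nn.Parameter(torch.zeros(1, 1024, d_model))  # up to 1024 frames
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=256,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.classifier = nn.Linear(d_model, n_chords)

    def forward(self, x):                      # x: (batch, frames, n_features)
        h = self.input_proj(x) + self.pos_emb[:, :x.size(1)]
        h = self.encoder(h)                    # self-attention across all frames
        return self.classifier(h)              # (batch, frames, n_chords) logits

model = ChordTransformer()
frames = torch.randn(2, 431, 144)              # e.g. ~10 s of CQT-like frames
print(model(frames).shape)                     # torch.Size([2, 431, 25])
```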
This thesis proposes a note-level singing melody transcription model using sequence-to-sequence transformers. Overlapping decoding is introduced to solve the problem of the context between segments being broken. Applying pitch augmentation and adding a noisy dataset with data cleansing turns out to be effective in preventing overfitting and generalizing the model performance. Ablation studies demonstrate the effects of the proposed techniques in note-level singing melody transcription, both quantitatively and qualitatively. The proposed model outperforms other models in note-level singing melody transcription performance for all the metrics considered. Finally, subjective human evaluation demonstrates that the results of the proposed models are perceived as more accurate than the results of a previous study.
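Overlapping decoding is only named above; one plausible reading (an assumption on my part, with the decode_segment callable standing in for the actual sequence-to-sequence model) is to decode overlapping audio windows and keep from each window only the notes whose onsets fall in its central region, so that notes near segment boundaries are always decoded with context on both sides.

```python
# Hedged sketch of overlapping decoding for segment-wise melody transcription.
# decode_segment(audio, start, end) is assumed to return notes as
# (onset_sec, offset_sec, midi_pitch) tuples with times relative to the segment.

def overlapping_decode(decode_segment, audio, duration, seg=10.0, hop=5.0):
    """Decode overlapping windows; keep each note only from the window
    where its onset lies in the well-contextualized central region."""
    margin = (seg - hop) / 2.0          # half of the overlap on each side
    notes, start = [], 0.0
    while start < duration:
        end = min(start + seg, duration)
        core_lo = 0.0 if start == 0.0 else margin
        core_hi = seg if end >= duration else seg - margin
        for onset, offset, pitch in decode_segment(audio, start, end):
            if core_lo <= onset < core_hi:
                notes.append((start + onset, start + offset, pitch))
        start += hop
    return sorted(notes)

# Toy usage with a fake "model" that always reports one note per window.
fake_model = lambda audio, start, end: [(2.6, 3.0, 60)]
print(overlapping_decode(fake_model, audio=None, duration=20.0))
```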
Utilizing the above research results, we introduce the entire process of an automatic music lead sheet transcription. By combining various music information recognized from audio signals, we show that it is possible to transcribe lead sheets that express the core of popular music. Furthermore, we compare the results with lead sheets transcribed by musicians.
Finally, we propose a melody similarity assessment method based on self-supervised learning by applying the automatic lead sheet transcription. We present convolutional neural networks that express the melody of lead sheet transcription results in an embedding space. To apply self-supervised learning, we introduce methods of generating training data by musical data augmentation techniques. Furthermore, a loss function is presented to utilize the training data. Experimental results demonstrate that the proposed model is able to detect similar melodies of popular music from plagiarism and cover song cases.
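The thesis's actual augmentations and architecture are given in Chapter 7; the sketch below is only a hedged illustration of the self-supervised setup, with a placeholder CNN embedding melody piano-rolls and a triplet loss pulling an augmented copy of a melody toward the original while pushing an unrelated melody away.

```python
# Hedged sketch of self-supervised melody similarity with a triplet loss.
# Melodies are assumed to be binary piano-rolls of shape (128 pitches, frames);
# the augmentation here (pitch shift) is a simplified stand-in.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MelodyEncoder(nn.Module):
    def __init__(self, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, emb_dim)

    def forward(self, roll):                       # roll: (batch, 1, 128, frames)
        z = self.fc(self.net(roll).flatten(1))
        return F.normalize(z, dim=1)               # unit-length embeddings

def pitch_shift(roll, semitones=2):
    """Toy augmentation: shift the piano-roll along the pitch axis."""
    return torch.roll(roll, shifts=semitones, dims=2)

encoder = MelodyEncoder()
triplet = nn.TripletMarginLoss(margin=0.3)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

anchor = (torch.rand(8, 1, 128, 256) > 0.95).float()    # fake melody batch
positive = pitch_shift(anchor)                           # "same melody" view
negative = (torch.rand(8, 1, 128, 256) > 0.95).float()   # unrelated melodies

loss = triplet(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()
optimizer.step()
print(float(loss))
```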
Thanks to the digitization of the music industry, composing, arranging, and distributing music has become convenient, and the number of newly released recordings is increasing. Recently, platform environments in which anyone can become a creator have been established, and user-created original songs, cover songs, and remixes are distributed through YouTube and TikTok. For this large volume of recordings, musicians have always wanted to transcribe the music into sheet music, but transcription requires musical knowledge and is costly and time-consuming.
This thesis studies automatic music lead sheet transcription using deep neural networks. Transcription AI can greatly reduce the time and cost that music professionals and performers spend obtaining or producing sheet music. Moreover, because audio recordings can be converted into digital score form, the results can be used in many ways, such as automatic plagiarism detection and training AI composition systems.
For lead sheet transcription, we first propose a model that recognizes chords from audio signals. Chords are concise and expressive features of music, so recognizing them is very important. For chord segment recognition, we propose a transformer-based model using the attention mechanism, and through attention map analysis we visualize how attention is actually applied and examine how the model separates and recognizes chord segments.
We then propose a note-level singing melody transcription model using a sequence-to-sequence transformer. Overlapping decoding is introduced to solve the problem of context being broken between segments during decoding. We also introduce pitch augmentation as a data augmentation technique and a method of adding training data through data cleansing. Quantitative and qualitative comparisons confirm that the proposed techniques improve performance, and the proposed model achieves the best note-level singing melody transcription performance on the MIR-ST500 dataset. In addition, a subjective human evaluation confirms that the transcription results of the proposed model are perceived as more accurate than those of a previous model.
Using the above results, we present the entire process of automatic music lead sheet transcription. By combining the various types of musical information recognized from audio signals, we show that it is possible to transcribe lead sheets that capture the essence of popular music audio, and we compare and analyze the results against lead sheets produced by experts.
Finally, applying the automatic lead sheet transcription method, we propose a melody similarity assessment method based on self-supervised learning. We present a convolutional neural network that represents the melodies of lead sheet transcription results in an embedding space. To apply self-supervised learning, we propose a method of generating training data with musical data augmentation techniques, and we design a deep metric learning loss function that uses the prepared training data. Analysis of the experimental results confirms that the proposed model can detect similar melodies in popular music for plagiarism and cover song cases.
Chapter 1 Introduction 1
1.1 Background and Motivation 1
1.2 Objectives 4
1.3 Thesis Outline 6
Chapter 2 Literature Review 7
2.1 Attention Mechanism and Transformers 7
2.1.1 Attention-based Models 7
2.1.2 Transformers with Musical Event Sequence 8
2.2 Chord Recognition 11
2.3 Note-level Singing Melody Transcription 13
2.4 Musical Key Estimation 15
2.5 Beat Tracking 17
2.6 Music Plagiarism Detection and Cover Song Identification 19
2.7 Deep Metric Learning and Triplet Loss 21
Chapter 3 Problem Definition 23
3.1 Lead Sheet Transcription 23
3.1.1 Chord Recognition 24
3.1.2 Singing Melody Transcription 25
3.1.3 Post-processing for Lead Sheet Representation 26
3.2 Melody Similarity Assessment 28
Chapter 4 A Bi-directional Transformer for Musical Chord Recognition 29
4.1 Methodology 29
4.1.1 Model Architecture 29
4.1.2 Self-attention in Chord Recognition 33
4.2 Experiments 35
4.2.1 Datasets 35
4.2.2 Preprocessing 35
4.2.3 Evaluation Metrics 36
4.2.4 Training 37
4.3 Results 38
4.3.1 Quantitative Evaluation 38
4.3.2 Attention Map Analysis 41
Chapter 5 Note-level Singing Melody Transcription 44
5.1 Methodology 44
5.1.1 Monophonic Note Event Sequence 44
5.1.2 Audio Features 45
5.1.3 Model Architecture 46
5.1.4 Autoregressive Decoding and Monophonic Masking 47
5.1.5 Overlapping Decoding 47
5.1.6 Pitch Augmentation 49
5.1.7 Adding Noisy Dataset with Data Cleansing 50
5.2 Experiments 51
5.2.1 Dataset 51
5.2.2 Experiment Configurations 52
5.2.3 Evaluation Metrics 53
5.2.4 Comparison Models 54
5.2.5 Human Evaluation 55
5.3 Results 56
5.3.1 Ablation Study 56
5.3.2 Note-level Transcription Model Comparison 59
5.3.3 Transcription Performance Distribution Analysis 59
5.3.4 Fundamental Frequency (F0) Metric Evaluation 60
5.4 Qualitative Analysis 62
5.4.1 Visualization of Ablation Study 62
5.4.2 Spectrogram Analysis 65
5.4.3 Human Evaluation 67
Chapter 6 Automatic Music Lead Sheet Transcription 68
6.1 Post-processing for Lead Sheet Representation 68
6.2 Lead Sheet Transcription Results 71
Chapter 7 Melody Similarity Assessment with Self-supervised Convolutional Neural Networks 77
7.1 Methodology 77
7.1.1 Input Data Representation 77
7.1.2 Data Augmentation 78
7.1.3 Model Architecture 82
7.1.4 Loss Function 84
7.1.5 Definition of Distance between Songs 85
7.2 Experiments 87
7.2.1 Dataset 87
7.2.2 Training 88
7.2.3 Evaluation Metrics 88
7.3 Results 89
7.3.1 Quantitative Evaluation 89
7.3.2 Qualitative Evaluation 99
Chapter 8 Conclusion 107
8.1 Summary and Contributions 107
8.2 Limitations and Future Research 110
Bibliography 111
Abstract in Korean 126
A Heuristic for Distance Fusion in Cover Song Identification
In this paper, we propose a method to integrate the results of different cover song identification algorithms into a single measure which, on average, gives better results than the initial algorithms. The fusion of the different distance measures is performed by projecting all the measures into a multi-dimensional space whose dimensionality equals the number of distances considered. In our experiments, we test two distance measures, Dynamic Time Warping and the Qmax measure, applied in different combinations to two features: a Salience feature and the Harmonic Pitch Class Profile (HPCP). While the HPCP is meant to capture purely harmonic content, the Salience feature better discerns melodic differences. It is shown that combining two or more distance measures improves the overall performance.
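One simple reading of this projection (my own simplification; the paper may weight or combine the axes differently) treats each candidate pair's normalized distance values as a point in a space with one axis per distance measure and uses the point's Euclidean norm as the fused distance.

```python
# Hedged sketch of distance fusion: every (query, candidate) pair is scored by
# several distance measures (e.g. DTW and Qmax over HPCP and Salience features);
# after per-measure normalization, the pair becomes a point in a space with one
# axis per measure, and its Euclidean norm serves as the fused distance.
import numpy as np

def fuse_distances(distance_matrix):
    """distance_matrix: (n_pairs, n_measures) raw distances -> (n_pairs,) fused."""
    d = np.asarray(distance_matrix, dtype=float)
    span = d.max(axis=0) - d.min(axis=0)
    d = (d - d.min(axis=0)) / (span + 1e-12)   # min-max normalize each measure
    return np.linalg.norm(d, axis=1)

# Toy example: 4 candidate pairs scored by 3 distance measures on different scales.
scores = [[0.2, 10.0, 0.9],
          [0.8, 42.0, 2.5],
          [0.1,  8.0, 0.7],
          [0.9, 50.0, 3.1]]
print(fuse_distances(scores))   # lower fused distance = more likely a cover pair
```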
Methodological contributions by means of machine learning methods for automatic music generation and classification
189 p. This research work addresses two main topics: automatic music generation and music classification. For music generation, a corpus of bertso melodies was taken as a starting point for developing a method capable of generating new, comprehensible melodies. It is assumed that melodies owe their comprehensibility to the repetition structures they contain, and three main versions of the method are presented, each using a different definition of those repetitions. For automatic music classification, three tasks were developed: genre classification, clustering into melodic families, and composer identification. Different music representations were used for each task, and several machine learning techniques were tested to analyse which gives the best results. In the area of supervised classification, work was also done on pairwise classification, optimizing a previously existing method. The developed technique was tested on several databases, including one composed of features of pieces by classical composers.
- …