Guitar Chords Classification Using Uncertainty Measurements of Frequency Bins
This paper presents a method for chord classification from recorded audio. The signal harmonics are obtained with the Fast Fourier Transform, and timbral information is suppressed by spectral whitening. Multiple fundamental frequency estimation on the whitened data is achieved by adding attenuated harmonics via a weighting function. The proposed method performs feature selection by thresholding the uncertainty of all frequency bins; measurements below the threshold are removed from the signal in the frequency domain. This discards 95.53% of the signal characteristics, and the remaining 4.47% of frequency bins are used as enhanced input to the classifier. An artificial neural network classifies four chord types: major, minor, major 7th, and minor 7th. Played across the twelve pitch classes, these give a total of 48 distinct chords. Two reference methods based on hidden Markov models were compared with the proposed method on the same evaluation database. In most of the performed tests, the proposed method achieved high performance, with an accuracy of 93%.
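As a rough illustration, the whitening-plus-selection pipeline can be sketched as follows. The moving-average spectral envelope and the use of per-bin variance as the "uncertainty" measure are assumptions made for this sketch, not the paper's exact formulation; only the retained fraction (4.47%) follows the abstract.

```python
import numpy as np

def whiten_spectrum(frame, smooth=31):
    """Suppress the timbral envelope: divide the magnitude spectrum by a
    moving-average estimate of its envelope (one common form of spectral
    whitening; the paper's exact weighting may differ)."""
    mag = np.abs(np.fft.rfft(frame))
    kernel = np.ones(smooth) / smooth
    envelope = np.convolve(mag, kernel, mode="same")
    return mag / (envelope + 1e-9)

def select_bins(spectra, keep_fraction=0.0447):
    """Keep only the most informative bins: rank bins by their variance
    across examples (a stand-in for the paper's uncertainty measure) and
    retain the top fraction, mirroring the reported 4.47% retention."""
    variance = spectra.var(axis=0)
    n_keep = max(1, int(keep_fraction * spectra.shape[1]))
    return np.argsort(variance)[-n_keep:]

rng = np.random.default_rng(0)
frames = rng.normal(size=(48, 2048))              # 48 mock chord recordings
spectra = np.array([whiten_spectrum(f) for f in frames])
kept = select_bins(spectra)                        # indices of retained bins
```

The retained bin indices would then index every whitened spectrum before it is fed to the classifier.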
Sparse and structured decomposition of audio signals on hybrid dictionaries using musical priors
This paper investigates the use of musical priors for sparse expansion of music audio signals on an overcomplete dual-resolution dictionary formed from the union of two orthonormal bases, able to describe both the transient and tonal components of a music audio signal. More specifically, chord and metrical structure information is used to build a structured model that accounts for dependencies between decomposition coefficients, in both the tonal and the transient layers. A denoising task provides a proof of concept for the proposed musical priors. Several configurations of the model are analysed. Evaluation on monophonic and complex polyphonic excerpts of real music signals shows that the proposed approach yields results whose quality, measured by signal-to-noise ratio, is competitive with state-of-the-art approaches and more coherent with the semantic content of the signal. A detailed analysis of the model in terms of sparsity and of interpretability of the representation is also provided, and shows that the model gives a relevant and legible representation of Western tonal music audio signals.
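The two-layer, dual-resolution idea can be sketched with plain block DCTs: a long-window transform captures the tonal layer and a short-window transform on the residual captures transients. This is the unstructured baseline only; the paper's contribution is the structured musical prior on the coefficients, which this sketch does not model. Block sizes and the threshold are illustrative.

```python
import numpy as np
from scipy.fftpack import dct, idct

def blockwise_dct_denoise(x, block, thresh):
    """Hard-threshold block DCT coefficients (orthonormal, norm='ortho')."""
    out = np.zeros_like(x)
    for start in range(0, len(x) - block + 1, block):
        c = dct(x[start:start + block], norm="ortho")
        c[np.abs(c) < thresh] = 0.0
        out[start:start + block] = idct(c, norm="ortho")
    return out

def hybrid_denoise(x, long_block=512, short_block=32, thresh=0.5):
    """Two-layer sketch of a dual-resolution decomposition: long-window DCT
    for the tonal layer, short-window DCT on the residual for transients."""
    tonal = blockwise_dct_denoise(x, long_block, thresh)
    transient = blockwise_dct_denoise(x - tonal, short_block, thresh)
    return tonal + transient, tonal, transient

rng = np.random.default_rng(1)
t = np.arange(2048) / 2048
clean = np.cos(2 * np.pi * 40 * t)                # a steady 'tonal' component
noisy = clean + 0.2 * rng.normal(size=t.size)
denoised, tonal, transient = hybrid_denoise(noisy)
err_noisy = np.sum((noisy - clean) ** 2)
err_denoised = np.sum((denoised - clean) ** 2)
```

Because the tonal component is sparse in the long-window basis while the noise is not, thresholding raises the signal-to-noise ratio, which is the quality measure the paper reports.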
Computational Tonality Estimation: Signal Processing and Hidden Markov Models
This thesis investigates computational musical tonality estimation from an audio signal. We
present a hidden Markov model (HMM) in which relationships between chords and keys are
expressed as probabilities of emitting observable chords from a hidden key sequence. The model
is tested first using symbolic chord annotations as observations, and gives excellent global key
recognition rates on a set of Beatles songs.
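The key-decoding idea, emitting observed chord symbols from a hidden key sequence, can be sketched with a small Viterbi decoder. The two-key, three-chord probabilities below are invented for illustration, not the thesis's trained values.

```python
import numpy as np

def viterbi(log_init, log_trans, log_emit, observations):
    """Most probable hidden key sequence given observed chord symbols,
    where log_emit[k, c] = log P(chord c | key k)."""
    n_states = log_init.size
    T = len(observations)
    delta = np.full((T, n_states), -np.inf)
    psi = np.zeros((T, n_states), dtype=int)
    delta[0] = log_init + log_emit[:, observations[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans      # (from, to)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_emit[:, observations[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Toy setup: 2 keys (C major, G major), 3 chord symbols (C, G, D).
init = np.log(np.array([0.5, 0.5]))
trans = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))      # keys are sticky
emit = np.log(np.array([[0.60, 0.35, 0.05],             # key C: C, G likely
                        [0.15, 0.45, 0.40]]))           # key G: G, D likely
chords = [0, 0, 1, 2, 2, 2]                             # C C G D D D
keys = viterbi(init, trans, emit, chords)               # [0,0,1,1,1,1]
```

The sticky transition matrix makes the decoder prefer long stretches of one key, so the decoded sequence reads as a single modulation from C to G at the G chord.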
The initial model is extended for audio input by using an existing chord recognition algorithm,
which allows it to be tested on a much larger database. We show that a simple model of the
upper partials in the signal improves percentage scores. We also present a variant of the HMM
which has a continuous observation probability density, but show that the discrete version gives
better performance.
Then follows a detailed analysis of the effects on key estimation and computation time of
changing the low level signal processing parameters. We find that much of the high frequency
information can be omitted without loss of accuracy, and significant computational savings can
be made by applying a threshold to the transform kernels. Results show that there is no single
ideal set of parameters for all music, but that tuning the parameters can make a difference to
accuracy.
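The kernel-thresholding saving can be illustrated with one standard construction of frequency-domain constant-Q kernels (FFTs of windowed complex exponentials). The parameters below are illustrative, not the thesis's settings; the point is that the kernels are nearly band-limited, so most entries can be zeroed and the transform computed with sparse products.

```python
import numpy as np

def cq_kernels(fmin, bins, fs, n_fft, bins_per_octave=12):
    """Complex constant-Q kernels in the frequency domain."""
    Q = 1.0 / (2 ** (1.0 / bins_per_octave) - 1)
    kernels = np.zeros((bins, n_fft // 2 + 1), dtype=complex)
    for k in range(bins):
        fk = fmin * 2 ** (k / bins_per_octave)
        N = int(round(Q * fs / fk))                 # window shrinks with pitch
        n = np.arange(N)
        atom = np.hanning(N) * np.exp(2j * np.pi * fk * n / fs) / N
        kernels[k] = np.fft.rfft(atom, n_fft)
    return kernels

def sparsify(kernels, threshold=0.01):
    """Zero entries below a fraction of the maximum: each kernel's energy
    is concentrated around its centre frequency, so little accuracy is
    lost while most multiplications are skipped."""
    out = kernels.copy()
    out[np.abs(out) < threshold * np.abs(out).max()] = 0.0
    return out

K = cq_kernels(fmin=65.4, bins=36, fs=11025, n_fft=4096)
Ks = sparsify(K)
density = np.count_nonzero(Ks) / Ks.size            # fraction of work kept
```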
We discuss methods of evaluating more complex tonal changes than a single global key, and
compare a metric that measures similarity to a ground truth to metrics that are rooted in music
retrieval. We show that the two measures give different results, and so recommend that the choice
of evaluation metric is determined by the intended application.
Finally we draw together our conclusions and use them to suggest areas for continuation of this
research, in the areas of tonality model development, feature extraction, evaluation methodology,
and applications of computational tonality estimation.
Engineering and Physical Sciences Research Council (EPSRC)
Automatic chord transcription from audio using computational models of musical context
This thesis is concerned with the automatic transcription of chords from audio, with an emphasis
on modern popular music. Musical context such as the key and the structural segmentation aid
the interpretation of chords in human beings. In this thesis we propose computational models
that integrate such musical context into the automatic chord estimation process.
We present a novel dynamic Bayesian network (DBN) which integrates models of metric
position, key, chord, bass note and two beat-synchronous audio features (bass and treble
chroma) into a single high-level musical context model. We simultaneously infer the most probable
sequence of metric positions, keys, chords and bass notes via Viterbi inference. Several
experiments with real world data show that adding context parameters results in a significant
increase in chord recognition accuracy and in the faithfulness of chord segmentation. The most
complex of the proposed methods transcribes chords with a state-of-the-art accuracy of 73% on
the song collection used for the 2009 MIREX Chord Detection tasks. This method serves as the
baseline for two further enhancements.
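Jointly decoding several context variables at once can be sketched by running Viterbi over a flattened product state space. The toy below decodes (key, chord) pairs only; a full DBN of the kind described also carries metric position and bass note, with structured (factored) transitions rather than the dense matrix built here. All probabilities are invented for illustration.

```python
import numpy as np
from itertools import product

def joint_viterbi(n_keys, n_chords, log_trans, log_emit, obs):
    """Viterbi over the product state space (key, chord)."""
    states = list(product(range(n_keys), range(n_chords)))
    n = len(states)
    delta = log_emit[:, obs[0]] - np.log(n)          # uniform prior
    psi = np.zeros((len(obs), n), dtype=int)
    for t in range(1, len(obs)):
        scores = delta[:, None] + log_trans          # (from, to)
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(len(obs) - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return [states[s] for s in path[::-1]]

# Factored transition: sticky keys, chords drawn conditioned on the key.
P_key = np.array([[0.9, 0.1], [0.1, 0.9]])
P_chord_given_key = np.array([[0.8, 0.2], [0.2, 0.8]])
states = list(product(range(2), range(2)))
trans = np.array([[P_key[ki, kj] * P_chord_given_key[kj, cj]
                   for (kj, cj) in states] for (ki, ci) in states])
emit = np.array([[0.9, 0.1] if c == 0 else [0.1, 0.9] for (k, c) in states])
obs = [0, 0, 0, 1, 1, 1]                             # noisy chord observations
path = joint_viterbi(2, 2, np.log(trans), np.log(emit), obs)
```

Because the key variable is sticky while chords move freely within a key, the decoder keeps one key per half of the observation sequence and lets the chord track the observations.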
Firstly, we aim to improve chord confusion behaviour by modifying the audio front end
processing. We compare the effect of learning chord profiles as Gaussian mixtures to the effect
of using chromagrams generated from an approximate pitch transcription method. We show
that using chromagrams from approximate transcription results in the most substantial increase
in accuracy. The best method achieves 79% accuracy and significantly outperforms the state of
the art.
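A chromagram in its simplest form folds spectral energy into twelve pitch classes. The naive nearest-pitch-class mapping below is a crude stand-in for the approximate-transcription chroma the thesis compares against, included only to make the representation concrete; frequency range and pitch-class numbering (A = 0) are arbitrary choices.

```python
import numpy as np

def chromagram(mag_spec, fs, n_fft, fmin=55.0, fmax=1760.0):
    """Fold a magnitude spectrum into a 12-bin chroma vector by mapping
    each FFT bin to its nearest pitch class."""
    freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)
    chroma = np.zeros(12)
    for f, m in zip(freqs, mag_spec):
        if fmin <= f <= fmax:
            pitch_class = int(round(12 * np.log2(f / 440.0))) % 12  # A = 0
            chroma[pitch_class] += m
    return chroma / (chroma.sum() + 1e-12)

fs, n_fft = 22050, 4096
t = np.arange(n_fft) / fs
# An A major triad: A4, C#5, E5 (pitch classes 0, 4, 7 in this numbering).
signal = sum(np.sin(2 * np.pi * f * t) for f in (440.0, 554.37, 659.25))
spec = np.abs(np.fft.rfft(signal * np.hanning(n_fft)))
chroma = chromagram(spec, fs, n_fft)
top3 = set(np.argsort(chroma)[-3:])
```

On real audio the upper partials of each note leak into other pitch classes, which is exactly the confusion the approximate-transcription front end is meant to reduce.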
Secondly, we propose a method by which chromagram information is shared between
repeated structural segments (such as verses) in a song. This can be done fully automatically
using a novel structural segmentation algorithm tailored to this task. We show that the technique
leads to a significant increase in accuracy and readability. The segmentation algorithm itself
also obtains state-of-the-art results. A method that combines both of the above enhancements
reaches an accuracy of 81%, a statistically significant improvement over the best result (74%)
in the 2009 MIREX Chord Detection tasks.
Engineering and Physical Sciences Research Council (EPSRC)
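The segment-sharing enhancement can be sketched as averaging beat-synchronous chroma over repeated structural segments: noise that differs between repeats averages out, while the harmony they share is reinforced. The equal-length constraint and the grouping format below are simplifying assumptions of this sketch.

```python
import numpy as np

def share_chroma(chroma_frames, segment_groups):
    """Replace each occurrence of a repeated segment by the mean over all
    its occurrences. `segment_groups` lists groups of (start, end) frame
    ranges, e.g. the three verses of a song as one group."""
    out = chroma_frames.copy()
    for group in segment_groups:
        lengths = {e - s for s, e in group}
        assert len(lengths) == 1, "repeats must be beat-aligned to equal length"
        mean = np.mean([chroma_frames[s:e] for s, e in group], axis=0)
        for s, e in group:
            out[s:e] = mean
    return out

rng = np.random.default_rng(2)
verse = rng.random((8, 12))                        # 8 beats of 12-bin chroma
song = np.concatenate([verse + 0.1 * rng.normal(size=verse.shape)
                       for _ in range(3)])         # three noisy repeats
shared = share_chroma(song, [[(0, 8), (8, 16), (16, 24)]])
err_before = np.abs(song[:8] - verse).mean()
err_after = np.abs(shared[:8] - verse).mean()
```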
Towards automatic extraction of harmony information from music signals
In this thesis we address the subject of automatic extraction of harmony
information from audio recordings. We focus on chord symbol recognition
and methods for evaluating algorithms designed to perform that task.
We present a novel six-dimensional model for equal tempered pitch
space based on concepts from neo-Riemannian music theory. This model
is employed as the basis of a harmonic change detection function which
we use to improve the performance of a chord recognition algorithm.
We develop a machine readable text syntax for chord symbols and
present a hand labelled chord transcription collection of 180 Beatles songs
annotated using this syntax. This collection has been made publicly available
and is already widely used for evaluation purposes in the research
community. We also introduce methods for comparing chord symbols
which we subsequently use for analysing the statistics of the transcription
collection. To ensure that researchers are able to use our transcriptions
with confidence, we demonstrate a novel alignment algorithm based on
simple audio fingerprints that allows local copies of the Beatles audio files
to be accurately aligned to our transcriptions automatically.
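Fingerprint-based alignment of a local audio copy to a reference can be sketched by cross-correlating coarse per-frame features. Per-frame log-energy is a stand-in here for the thesis's actual fingerprints, and the hop size and test signal are invented for illustration.

```python
import numpy as np

def frame_energies(x, hop=512):
    """Coarse per-frame log-energy, a minimal audio fingerprint."""
    n = len(x) // hop
    frames = x[:n * hop].reshape(n, hop)
    return np.log1p((frames ** 2).sum(axis=1))

def align_offset(ref, query, hop=512):
    """Estimate how many samples `query` lags `ref` by cross-correlating
    fingerprint sequences (mean-removed to ignore loudness differences)."""
    a = frame_energies(ref, hop)
    b = frame_energies(query, hop)
    a -= a.mean()
    b -= b.mean()
    corr = np.correlate(a, b, mode="full")
    lag_frames = int(corr.argmax()) - (len(b) - 1)
    return lag_frames * hop

rng = np.random.default_rng(3)
gains = rng.uniform(0.2, 1.0, size=88)             # varying loudness envelope
audio = rng.normal(size=44100) * np.repeat(gains, 512)[:44100]
shifted = audio[4096:]                             # a local copy missing its start
offset = align_offset(audio, shifted)              # how far into `audio` it starts
```

The resolution of this sketch is one hop (512 samples); a real fingerprint alignment would refine the offset and verify the match before trusting a transcription time base.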
Evaluation methods for chord symbol recall and segmentation measures
are discussed in detail and we use our chord comparison techniques
as the basis for a novel dictionary-based chord symbol recall calculation.
At the end of the thesis, we evaluate the performance of fifteen chord
recognition algorithms (three of our own and twelve entrants to the 2009
MIREX chord detection evaluation) on the Beatles collection. Results
are presented for several different evaluation measures using a range of
evaluation parameters. The algorithms are compared with each other in
terms of performance but we also pay special attention to analysing and
discussing the benefits and drawbacks of the different evaluation methods
that are used.
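The idea of a dictionary-based chord symbol recall can be sketched as a duration-weighted overlap score in which a lookup decides which estimated labels count as correct for each reference label. The tiny equivalence dictionary and annotations below are invented for illustration; the thesis's calculation is richer.

```python
def chord_recall(reference, estimate, matches):
    """Duration-weighted chord symbol recall. `reference` and `estimate`
    are lists of (onset, offset, label); `matches(ref, est)` decides
    label equivalence."""
    total = correct = 0.0
    for r_on, r_off, r_lab in reference:
        total += r_off - r_on
        for e_on, e_off, e_lab in estimate:
            overlap = min(r_off, e_off) - max(r_on, e_on)
            if overlap > 0 and matches(r_lab, e_lab):
                correct += overlap
    return correct / total if total else 0.0

# Accept shorthand equivalents via a small dictionary (illustrative only).
EQUIV = {"C:maj": {"C:maj", "C"}, "G:maj": {"G:maj", "G"}, "A:min": {"A:min", "Am"}}
accept = lambda ref, est: est in EQUIV.get(ref, {ref})

ref = [(0.0, 2.0, "C:maj"), (2.0, 4.0, "G:maj"), (4.0, 6.0, "A:min")]
est = [(0.0, 2.5, "C"), (2.5, 4.0, "G:maj"), (4.0, 6.0, "A:min")]
score = chord_recall(ref, est, accept)             # 5.5 of 6.0 seconds correct
```

Swapping the `matches` function changes the evaluation's strictness, which is exactly why the choice of comparison method matters when ranking algorithms.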
Deep Neural Network-based Automatic Music Lead Sheet Transcription and Melody Similarity Assessment
PhD thesis, Seoul National University, Department of Industrial Engineering, February 2023.
Since the composition, arrangement, and distribution of music became convenient thanks to the digitization of the music industry, the number of newly released music recordings keeps increasing. Recently, with platform environments established in which anyone can become a creator, user-created music such as original songs, cover songs, and remixes is distributed through YouTube and TikTok. Given this large volume of recordings, the demand among musicians for transcribing music into sheet music has always existed.
However, it requires musical knowledge and is time-consuming.
This thesis studies automatic lead sheet transcription using deep neural networks. Transcription artificial intelligence (AI) can greatly reduce the time and cost for people in the music industry to find or produce sheet music. Moreover, because audio recordings can be converted into a digital sheet music representation, applications such as music plagiarism detection and music composition AI become possible.
The thesis first proposes a model that recognizes chords from audio signals. Chord recognition is an important task in music information retrieval, since chords are highly abstract and descriptive features of music. We utilize a self-attention mechanism for chord recognition so that the model can focus on the relevant regions of each chord. Through an attention map analysis, we visualize how attention is performed and show that the model divides chord segments by exploiting the adaptive receptive field of the attention mechanism.
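The attention maps analysed here come from scaled dot-product self-attention. A minimal single-head sketch over a sequence of feature frames is shown below; the real model adds multiple heads, layers, and positional encodings, and the dimensions and random weights are placeholders.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over T feature frames.
    Returns the attended features and the (T, T) attention map."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # softmax over frames
    return weights @ V, weights

rng = np.random.default_rng(4)
T, d = 16, 12                        # 16 frames of 12-dim chroma-like features
X = rng.normal(size=(T, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
```

Each row of `attn` shows which frames a given frame draws information from; block-diagonal structure in such maps is what indicates that the model has carved the input into chord segments.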
This thesis then proposes a note-level singing melody transcription model using sequence-to-sequence transformers. Overlapping decoding is introduced to solve the problem of context being broken between decoded segments. Applying pitch augmentation and adding a noisy dataset with data cleansing prove effective in preventing overfitting and generalizing model performance. Ablation studies demonstrate the effects of the proposed techniques in note-level singing melody transcription, both quantitatively and qualitatively. The proposed model outperforms other models on all the metrics considered, and a subjective human evaluation demonstrates that its results are perceived as more accurate than those of a previous study.
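One plausible reading of overlapping decoding can be sketched as follows: decode long audio in overlapping windows and keep each note only from the window in whose interior it falls, so notes near segment boundaries come from a window where they have full context. The window sizes, the trusted-region rule, and the toy decoder are assumptions of this sketch, not the thesis's exact scheme.

```python
def overlapping_decode(duration, seg_len, overlap, decode):
    """Decode in overlapping windows; keep notes from each window's trusted
    interior (the window minus half the overlap at shared boundaries).
    `decode(start, end)` returns (onset, pitch) pairs in absolute time."""
    hop = seg_len - overlap
    starts = list(range(0, max(1, duration - overlap), hop))
    merged = []
    for i, s in enumerate(starts):
        e = min(s + seg_len, duration)
        lo = s + overlap / 2 if i > 0 else 0
        hi = e - overlap / 2 if i < len(starts) - 1 else duration
        merged += [n for n in decode(s, e) if lo <= n[0] < hi]
    return merged

# Toy decoder: the 'true' notes occur every 0.5 s and a window only sees
# the notes inside it.
notes = [(round(t * 0.5, 1), 60 + t % 12) for t in range(40)]   # 0.0..19.5 s
decode = lambda s, e: [n for n in notes if s <= n[0] < e]
merged = overlapping_decode(duration=20, seg_len=8, overlap=2, decode=decode)
```

Because the trusted interiors tile the timeline without gaps or overlaps, every note is emitted exactly once even though adjacent windows both decode the boundary region.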
Utilizing the above research results, we introduce the entire process of automatic music lead sheet transcription. By combining the various types of music information recognized from audio signals, we show that it is possible to transcribe lead sheets that capture the core of popular music. Furthermore, we compare the results with lead sheets transcribed by musicians.
Finally, we propose a melody similarity assessment method based on self-supervised learning by applying the automatic lead sheet transcription. We present convolutional neural networks that express the melody of lead sheet transcription results in embedding space. To apply self-supervised learning, we introduce methods of generating training data by musical data augmentation techniques. Furthermore, a loss function is presented to utilize the training data. Experimental results demonstrate that the proposed model is able to detect similar melodies of popular music from plagiarism and cover song cases.
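The self-supervised recipe (augment a melody to get a positive pair, sample an unrelated melody as a negative, then pull and push in embedding space with a triplet loss) can be sketched below. The interval-histogram embedding is a trivially simple, hand-crafted stand-in for the learned CNN, chosen only because it is transposition-invariant like a good melody embedding should be; the margin and data are placeholders.

```python
import numpy as np

def melody_embedding(pianoroll):
    """Stand-in embedding: unit-normalized histogram of note-to-note pitch
    intervals (invariant to transposition; a real model is a learned CNN)."""
    pitches = pianoroll.argmax(axis=1)
    intervals = np.diff(pitches)
    hist, _ = np.histogram(intervals, bins=np.arange(-12.5, 13.5))
    return hist / (np.linalg.norm(hist) + 1e-12)

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull the augmented copy toward the anchor, push an unrelated melody
    at least `margin` further away (deep metric learning objective)."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def roll(pitches):
    r = np.zeros((len(pitches), 128))
    r[np.arange(len(pitches)), pitches] = 1
    return r

rng = np.random.default_rng(5)
melody = rng.integers(55, 75, size=32)
transposed = melody + 3                    # pitch augmentation: +3 semitones
other = rng.integers(55, 75, size=32)      # unrelated melody as the negative
loss = triplet_loss(melody_embedding(roll(melody)),
                    melody_embedding(roll(transposed)),
                    melody_embedding(roll(other)))
```

Here the transposed copy lands exactly on its anchor while the unrelated melody stays far away, so the loss is already zero; a learned embedding has to be trained toward this behaviour.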
Chapter 1 Introduction 1
1.1 Background and Motivation 1
1.2 Objectives 4
1.3 Thesis Outline 6
Chapter 2 Literature Review 7
2.1 Attention Mechanism and Transformers 7
2.1.1 Attention-based Models 7
2.1.2 Transformers with Musical Event Sequence 8
2.2 Chord Recognition 11
2.3 Note-level Singing Melody Transcription 13
2.4 Musical Key Estimation 15
2.5 Beat Tracking 17
2.6 Music Plagiarism Detection and Cover Song Identification 19
2.7 Deep Metric Learning and Triplet Loss 21
Chapter 3 Problem Definition 23
3.1 Lead Sheet Transcription 23
3.1.1 Chord Recognition 24
3.1.2 Singing Melody Transcription 25
3.1.3 Post-processing for Lead Sheet Representation 26
3.2 Melody Similarity Assessment 28
Chapter 4 A Bi-directional Transformer for Musical Chord Recognition 29
4.1 Methodology 29
4.1.1 Model Architecture 29
4.1.2 Self-attention in Chord Recognition 33
4.2 Experiments 35
4.2.1 Datasets 35
4.2.2 Preprocessing 35
4.2.3 Evaluation Metrics 36
4.2.4 Training 37
4.3 Results 38
4.3.1 Quantitative Evaluation 38
4.3.2 Attention Map Analysis 41
Chapter 5 Note-level Singing Melody Transcription 44
5.1 Methodology 44
5.1.1 Monophonic Note Event Sequence 44
5.1.2 Audio Features 45
5.1.3 Model Architecture 46
5.1.4 Autoregressive Decoding and Monophonic Masking 47
5.1.5 Overlapping Decoding 47
5.1.6 Pitch Augmentation 49
5.1.7 Adding Noisy Dataset with Data Cleansing 50
5.2 Experiments 51
5.2.1 Dataset 51
5.2.2 Experiment Configurations 52
5.2.3 Evaluation Metrics 53
5.2.4 Comparison Models 54
5.2.5 Human Evaluation 55
5.3 Results 56
5.3.1 Ablation Study 56
5.3.2 Note-level Transcription Model Comparison 59
5.3.3 Transcription Performance Distribution Analysis 59
5.3.4 Fundamental Frequency (F0) Metric Evaluation 60
5.4 Qualitative Analysis 62
5.4.1 Visualization of Ablation Study 62
5.4.2 Spectrogram Analysis 65
5.4.3 Human Evaluation 67
Chapter 6 Automatic Music Lead Sheet Transcription 68
6.1 Post-processing for Lead Sheet Representation 68
6.2 Lead Sheet Transcription Results 71
Chapter 7 Melody Similarity Assessment with Self-supervised Convolutional Neural Networks 77
7.1 Methodology 77
7.1.1 Input Data Representation 77
7.1.2 Data Augmentation 78
7.1.3 Model Architecture 82
7.1.4 Loss Function 84
7.1.5 Definition of Distance between Songs 85
7.2 Experiments 87
7.2.1 Dataset 87
7.2.2 Training 88
7.2.3 Evaluation Metrics 88
7.3 Results 89
7.3.1 Quantitative Evaluation 89
7.3.2 Qualitative Evaluation 99
Chapter 8 Conclusion 107
8.1 Summary and Contributions 107
8.2 Limitations and Future Research 110
Bibliography 111
Abstract (in Korean) 126