67 research outputs found

    Drum extraction from polyphonic music based on a spectro-temporal model of percussive sounds


    An Unsupervised Music Source Separation Method Using a Generalized Dirichlet Prior

    Doctoral dissertation, Seoul National University, Graduate School of Convergence Science and Technology, Department of Transdisciplinary Studies, February 2018. Advisor: Kyogu Lee.
    Music source separation aims to extract and reconstruct the individual instrument sounds that constitute a mixture. It has received a great deal of attention recently due to its importance in audio signal processing. In addition to stand-alone applications such as noise reduction and instrument-wise equalization, source separation can directly affect the performance of various music information retrieval algorithms when used as a pre-processing step. However, conventional source separation algorithms have failed to show satisfactory performance, especially without the aid of spatial or musical information about the target source. To deal with this problem, we focus on the spectral and temporal characteristics of sounds that can be observed in the spectrogram. Spectrogram decomposition is a commonly used technique for exploiting such characteristics; however, only a few simple characteristics such as sparsity have been utilizable so far, because most characteristics are difficult to express in the form of an algorithm. The main goal of this thesis is to investigate the use of a generalized Dirichlet prior to constrain the spectral/temporal bases of spectrogram decomposition algorithms. As the generalized Dirichlet prior is not only simple but also flexible in its usage, it enables us to utilize more characteristics within spectrogram decomposition frameworks. From harmonic-percussive sound separation to harmonic instrument sound separation, we apply the generalized Dirichlet prior to various tasks and verify its flexible usage as well as its fine performance.
    Table of contents (chapters): 1. Introduction; 2. Theoretical Background (probabilistic latent component analysis, non-negative matrix factorization, Dirichlet prior); 3. Harmonic-Percussive Source Separation Using Harmonicity and Sparsity Constraints; 4. Exploiting Continuity/Discontinuity of Basis Vectors in Spectrogram Decomposition for Harmonic-Percussive Sound Separation; 5. Informed Approach to Harmonic Instrument Sound Separation; 6. Blind Approach to Harmonic Instrument Sound Separation; 7. Conclusion and Future Work.
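    The core idea of the abstract above can be sketched concretely. The following is a minimal, hypothetical toy of PLCA-style spectrogram decomposition with a symmetric Dirichlet prior on the spectral bases; it is an illustration of the general technique, not the thesis's exact algorithm, and the function name and parameters are invented for the example. The MAP M-step simply adds (alpha - 1) pseudo-counts before renormalization, so alpha < 1 pushes bases toward sparsity and alpha > 1 smooths them:

    ```python
    import numpy as np

    def plca_dirichlet(V, K=2, alpha=1.0, n_iter=200, seed=0):
        """Toy PLCA decomposition of a non-negative spectrogram V (freq x time)
        into spectral bases W (columns sum to 1) and activations H (sums to 1),
        with a symmetric Dirichlet(alpha) prior on each basis vector.
        alpha < 1 favors sparse (peaky) bases; alpha > 1 favors smooth ones."""
        rng = np.random.default_rng(seed)
        F, T = V.shape
        W = rng.random((F, K)); W /= W.sum(axis=0)
        H = rng.random((K, T)); H /= H.sum()
        for _ in range(n_iter):
            R = V / (W @ H + 1e-12)       # ratio of data to current model
            W_new = W * (R @ H.T)         # E-step: expected counts per (f, k)
            H_new = H * (W.T @ R)         # E-step: expected counts per (k, t)
            # MAP M-step: the Dirichlet prior adds (alpha - 1) pseudo-counts,
            # clamped at zero to keep the bases non-negative
            W_new = np.maximum(W_new + (alpha - 1.0), 0.0)
            W = W_new / (W_new.sum(axis=0, keepdims=True) + 1e-12)
            H = H_new / (H_new.sum() + 1e-12)
        return W, H
    ```

    Setting alpha per basis (or per iteration) is what makes the prior "generalized" in spirit: different constraints can be imposed on different bases within the same update rule.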

    Multichannel harmonic and percussive component separation by joint modeling of spatial and spectral continuity

    This paper considers the blind separation of the harmonic and percussive components of multichannel music signals. We model the contribution of each source to all mixture channels in the time-frequency domain via a spatial covariance matrix, which encodes its spatial characteristics, and a scalar spectral variance, which represents its spectral structure. We then exploit the spatial continuity and the different spectral continuity structures of harmonic and percussive components as prior information to derive maximum a posteriori (MAP) estimates of the parameters using the expectation-maximization (EM) algorithm. Experimental results on professional musical mixtures show the effectiveness of the proposed approach.
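    The continuity intuition above (harmonic energy is smooth along time, percussive energy is smooth along frequency) is often illustrated with a much simpler single-channel baseline: median-filtering HPSS (Fitzgerald, 2010). The sketch below is that baseline, not the EM/spatial-covariance method of the paper above; it shows how directional continuity alone already separates the two components:

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def hpss_masks(S, kernel=17, power=2.0, eps=1e-12):
        """Wiener-style harmonic/percussive soft masks for a magnitude
        spectrogram S (frequency bins x time frames)."""
        H = median_filter(S, size=(1, kernel))  # smooth along time -> harmonic
        P = median_filter(S, size=(kernel, 1))  # smooth along frequency -> percussive
        Hp, Pp = H ** power, P ** power
        mask_h = Hp / (Hp + Pp + eps)
        return mask_h, 1.0 - mask_h
    ```

    Applying the masks to the complex spectrogram and inverting yields the two component signals; the MAP/EM approach of the paper can be seen as a probabilistic, multichannel refinement of this directional-continuity prior.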

    Deep Learning Methods for Instrument Separation and Recognition

    This thesis explores deep learning methods for timbral information processing in polyphonic music analysis. It encompasses two primary tasks, Music Source Separation (MSS) and Instrument Recognition, with a focus on applying domain knowledge and utilising dense arrangements of skip-connections in the frameworks in order to reduce the number of trainable parameters and create more efficient models. Musically-motivated Convolutional Neural Network (CNN) architectures are introduced, emphasising kernels with vertical, square, and horizontal shapes. This design choice allows for the extraction of essential harmonic and percussive features, which enhances the discrimination of different instruments. Notably, this methodology proves valuable for Harmonic-Percussive Source Separation (HPSS) and instrument recognition tasks. A significant challenge in MSS is generalising to new instrument types and music styles. To address this, a versatile framework for adversarial unsupervised domain adaptation for source separation is proposed, which is particularly beneficial when labelled data for specific instruments is unavailable. The curation of the Tap & Fiddle dataset is another contribution of the research, offering mixed and isolated stem recordings of traditional Scandinavian fiddle tunes along with foot-tapping accompaniments, fostering research in source separation and metrical expression analysis within these musical styles. Since our perception of timbre is affected in different ways by the transient and stationary parts of sound, the research investigates the potential of Transient Stationary-Noise Decomposition (TSND) as a preprocessing step for frame-level recognition: a method is proposed that performs TSND of spectrograms and feeds the decomposed spectrograms to a neural classifier. Furthermore, this thesis introduces a novel deep learning-based approach to pitch streaming, treating the task as note-level instrument classification. Such an approach is modular: it can stream not only labelled ground-truth note events but also predicted note events to the corresponding instruments. Therefore, the proposed pitch-streaming method enables third-party multi-pitch estimation algorithms to perform multi-instrument automatic music transcription (AMT).
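    The kernel-shape idea mentioned above can be illustrated in isolation: a horizontal (1 x k) kernel responds strongly to time-continuous harmonic partials, while a vertical (k x 1) kernel responds to frequency-spread percussive onsets. The sketch below uses plain correlation rather than a trained CNN, and the kernel sizes are arbitrary choices made for the example:

    ```python
    import numpy as np
    from scipy.signal import convolve2d

    # Matched-shape kernels: horizontal for harmonic lines, vertical for percussive columns.
    k_h = np.ones((1, 7)) / 7.0   # averages along time
    k_v = np.ones((7, 1)) / 7.0   # averages along frequency

    S = np.zeros((32, 32))
    S[8, :] = 1.0    # a steady tone: a horizontal line in the spectrogram
    S[:, 16] = 1.0   # a drum hit: a vertical line

    resp_h = convolve2d(S, k_h, mode="same")  # strong where energy is time-continuous
    resp_v = convolve2d(S, k_v, mode="same")  # strong where energy spans frequency
    ```

    A CNN with learnable kernels of these shapes can discover analogous directional filters from data, which is the motivation behind the vertical/square/horizontal kernel design above.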

    Automatic characterization and generation of music loops and instrument samples for electronic music production

    Repurposing audio material to create new music, also known as sampling, was a foundation of electronic music and remains a fundamental component of the practice. Currently, large-scale audio databases offer vast collections of material for users to work with. Navigation of these databases relies heavily on hierarchical tree directories; consequently, sound retrieval is tiresome and often identified as an undesired interruption of the creative process. We address two fundamental methods for navigating sounds: characterization and generation. Characterizing loops and one-shots in terms of instruments or instrumentation allows unstructured collections to be organized and retrieved faster for music-making. Generating loops and one-shot sounds enables the creation of new sounds not present in an audio collection through interpolation or modification of the existing material. To achieve this, we employ deep-learning-based, data-driven methodologies for classification and generation.

    A Study on Music Signal Processing Based on Spectral Fluctuations of the Singing Voice Using Harmonic/Percussive Source Separation

    Degree type: Doctoral degree (course-based), The University of Tokyo (東京大学).