
    A Study on Deep Learning-Based Techniques for Noise-Robust Voice Activity Detection and Speech Enhancement

    Thesis (Ph.D.) -- Seoul National University Graduate School: Department of Electrical and Computer Engineering, 2017. 2. Nam Soo Kim. Over the past decades, a number of approaches have been proposed to improve the performance of voice activity detection (VAD) and speech enhancement algorithms, which are crucial for speech communication and speech signal processing systems. In particular, the increasing use of machine learning-based techniques has led to more robust algorithms in low-SNR conditions. Among them, the deep neural network (DNN) has been one of the most popular techniques. While DNN-based techniques have been successfully applied to these tasks, the characteristics of the VAD and speech enhancement tasks are not fully incorporated into the DNN structures and objective functions. In this thesis, we propose novel training schemes and a post-filter for DNN-based VAD and speech enhancement. Unlike algorithms built on a basic DNN framework, the proposed algorithms combine knowledge from the signal processing and machine learning communities to develop improved DNN-based VAD and speech enhancement algorithms. In the following chapters, the environmental mismatch problem in VAD is compensated for by applying multi-task learning to the DNN-based VAD. In the speech enhancement scenario, a DNN-based framework is proposed, and a novel objective function and post-filter derived from characteristics of human auditory perception improve the DNN-based speech enhancement algorithm. In the VAD task, a DNN-based algorithm was recently proposed and outperformed traditional and other machine learning-based VAD algorithms. However, the performance of the DNN-based algorithm sometimes deteriorates when the training and test environments do not match. To improve the performance of the DNN-based VAD in unseen environments, we adopt a multi-task learning (MTL) framework consisting of a primary VAD task and a subsidiary feature enhancement task. By employing the MTL framework, the DNN learns a denoising function in the shared hidden layers that helps maintain VAD performance in mismatched noise conditions. Second, a DNN-based framework is applied to speech enhancement by treating it as a regression task. The encoding vector of the conventional nonnegative matrix factorization (NMF)-based algorithm is estimated by the proposed DNN, and the performance of the DNN-based algorithm is compared with that of the conventional NMF-based algorithm. Third, a perceptually motivated objective function is proposed for DNN-based speech enhancement. In the proposed technique, a new objective function consisting of the Mel-scale weighted mean square error and the temporal and spectral variation similarities between the enhanced and clean speech is employed in the DNN training stage. The proposed objective function computes gradients on a perceptually motivated non-linear frequency scale and alleviates the over-smoothing of the estimated speech. Furthermore, a post-filter that adjusts the variance over frequency bins compensates for the lack of contrast between spectral peaks and valleys in the enhanced speech. Conventional global variance (GV) equalization post-filters do not consider the spectral dynamics over frequency bins. To capture the contrast between spectral peaks and valleys in each enhanced speech frame, the proposed algorithm matches the variance over coefficients in the log-power spectral domain.
Finally, in the speech enhancement task, an integrated technique using the proposed perceptually motivated objective function and the post-filter is described. The performance of the conventional and proposed algorithms in matched and mismatched noise conditions is discussed, and subjective preference test results for these algorithms are also provided.

Table of contents:
1 Introduction
2 Conventional Approaches for Speech Enhancement
  2.1 NMF-Based Speech Enhancement
3 Deep Neural Networks
  3.1 Introduction
  3.2 Objective Function
  3.3 Stochastic Gradient Descent
4 DNN-Based Voice Activity Detection with Multi-Task Learning Framework
  4.1 Introduction
  4.2 DNN-Based VAD Algorithm
  4.3 DNN-Based VAD with MTL Framework
  4.4 Experimental Results
    4.4.1 Experiments in Matched Noise Conditions
    4.4.2 Experiments in Mismatched Noise Conditions
  4.5 Summary
5 NMF-Based Speech Enhancement Using Deep Neural Network
  5.1 Introduction
  5.2 Encoding Vector Estimation Using DNN
  5.3 Experiments
  5.4 Summary
6 DNN-Based Monaural Speech Enhancement with Temporal and Spectral Variations Equalization
  6.1 Introduction
  6.2 Conventional DNN-Based Speech Enhancement
    6.2.1 Training Stage
    6.2.2 Test Stage
  6.3 Perceptually Motivated Criteria
    6.3.1 Perceptually Motivated Objective Function
    6.3.2 Mel-Scale Weighted Mean Square Error
    6.3.3 Temporal Variation Similarity
    6.3.4 Spectral Variation Similarity
    6.3.5 DNN Training with the Proposed Objective Function
  6.4 Experiments
    6.4.1 Performance Evaluation with Varying Weight Parameters
    6.4.2 Performance Evaluation in Matched Noise Conditions
    6.4.3 Performance Evaluation in Mismatched Noise Conditions
    6.4.4 Comparison Between Variation Analysis Methods
    6.4.5 Subjective Test Results
  6.5 Summary
7 Spectral Variance Equalization Post-Filter for DNN-Based Speech Enhancement
  7.1 Introduction
  7.2 GV Equalization Post-Filter
  7.3 Spectral Variance (SV) Equalization Post-Filter
  7.4 Experiments
    7.4.1 Objective Test Results
    7.4.2 Subjective Test Results
  7.5 Summary
8 Conclusions
Bibliography
Appendix
Abstract (in Korean)
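
The spectral variance equalization post-filter described near the end of the abstract above is concrete enough to sketch. The Python fragment below is a minimal illustration, not the thesis implementation: the function name, the frame-wise formulation, and the choice of reference variance are assumptions made only for this example.

import numpy as np

def sv_equalization_postfilter(enhanced_logpow, target_var):
    # Per-frame spectral variance (SV) equalization (illustrative sketch).
    # enhanced_logpow: array of shape (frames, bins) holding log-power spectra
    # of the DNN-enhanced speech; target_var: reference variance over bins,
    # e.g. estimated from clean training speech (an assumption, not necessarily
    # the reference used in the thesis).
    out = np.empty_like(enhanced_logpow)
    for t, frame in enumerate(enhanced_logpow):
        mu = frame.mean()                  # frame mean over frequency bins
        var = frame.var() + 1e-12          # current variance over bins
        gain = np.sqrt(target_var / var)   # scale that matches the target variance
        out[t] = mu + gain * (frame - mu)  # stretch deviations around the mean
    return out

Scaling the deviations around the frame mean leaves the overall level untouched while restoring the peak-to-valley contrast that mean-square-error training tends to smooth out.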

    Approaches to better context modeling and categorization


    Music Information Retrieval: An Inspirational Guide to Transfer from Related Disciplines

    The emerging field of Music Information Retrieval (MIR) has been influenced by neighboring domains in signal processing and machine learning, including automatic speech recognition, image processing, and text information retrieval. In this contribution, we start with concrete examples of methodology transfer between speech and music processing, organized around the building blocks of pattern recognition: preprocessing, feature extraction, and classification/decoding. We then adopt a higher-level viewpoint when describing sources of mutual inspiration derived from text and image information retrieval. We conclude that dealing with the peculiarities of music in MIR research has contributed to advancing the state of the art in other fields, and that many future challenges in MIR are strikingly similar to those that other research areas have been facing.

    Single channel audio separation using deep neural networks and matrix factorizations

    PhD thesis. Source separation has become a significant research topic in the signal processing and machine learning communities. Owing to numerous applications, such as automatic speech recognition and speech communication, separating target speech from a mixed signal is of great importance. In many practical settings, speech separation from a single recording is the most desirable option. In this thesis, two novel approaches are proposed to address this single-channel audio separation problem. The thesis first reviews traditional approaches to single-channel source separation and then turns to a more generic family of models better suited to feature learning, namely deep graphical models. In the first part of this thesis, a novel approach based on matrix factorization and a hierarchical model is proposed. In this work, an artificial stereo mixture is formulated to provide extra information. In addition, a hybrid framework that combines the generalized expectation-maximization algorithm with a multiplicative update rule is proposed to optimize the parameters of the matrix factorization-based approach and approximately separate the mixture. Furthermore, a hierarchical model based on an extreme learning machine is developed to check the validity of the approximately separated sources, followed by an energy minimization method that further improves the quality of the separated sources by generating a time-frequency mask. Various experiments have been conducted, and the results show that the proposed approach outperforms conventional approaches not only in computational complexity but also in separation performance. In the second part, a deep neural network-based ensemble system is proposed, in which the complementary properties of different features are fully explored by a ‘wide’ and ‘forward’ ensemble system. In addition, instead of using the features learned from the output layer, the features learned from the penultimate layer are investigated. The final embedded features are classified with an extreme learning machine to generate a binary mask that separates the mixed signal. The experiments focus on speech in the presence of music, and the results demonstrate that the proposed ensemble system thoroughly exploits the complementary properties of various features under a range of conditions, with promising separation performance.
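
As a rough illustration of the final step described in the abstract above, the snippet below applies a binary time-frequency mask to a single-channel mixture. In the thesis the mask comes from the ensemble DNN plus extreme learning machine classifier; here `mask_fn` is a hypothetical stand-in, and the STFT parameters are arbitrary assumptions for the sketch.

import numpy as np
from scipy.signal import stft, istft

def separate_with_binary_mask(mixture, mask_fn, fs=16000, nperseg=512, noverlap=384):
    # Single-channel separation with a binary time-frequency mask (sketch).
    # mask_fn is a placeholder for the learned classifier: it receives the
    # magnitude spectrogram and returns a 0/1 mask of the same shape.
    f, t, X = stft(mixture, fs=fs, nperseg=nperseg, noverlap=noverlap)
    mask = mask_fn(np.abs(X))              # 1 where the target is assumed to dominate
    _, target = istft(X * mask, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return target

def threshold_mask(mag_mix, threshold_db=-30.0):
    # Crude energy-threshold mask, only to make the sketch runnable end to end;
    # the actual mask would be the per-bin decisions of the trained classifier.
    return (20 * np.log10(mag_mix + 1e-12) > threshold_db).astype(float)

# Example: separated = separate_with_binary_mask(mixture, threshold_mask)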

    Proceedings of the Detection and Classification of Acoustic Scenes and Events 2016 Workshop (DCASE2016)


    Single-channel source separation using non-negative matrix factorization
