80 research outputs found

    ๊ฐ•์ธํ•œ ์Œ์„ฑ์ธ์‹์„ ์œ„ํ•œ DNN ๊ธฐ๋ฐ˜ ์Œํ–ฅ ๋ชจ๋ธ๋ง

    Thesis (Ph.D.) -- Graduate School of Seoul National University, College of Engineering, Department of Electrical and Computer Engineering, February 2019. Advisor: Nam Soo Kim. In this thesis, we propose three acoustic modeling techniques based on deep neural networks (DNNs) for robust automatic speech recognition (ASR). First, we propose a DNN-based acoustic modeling technique that makes the best use of the inherent noise robustness of the DNN through auxiliary feature vectors. By applying this technique, the DNN can smoothly learn the complicated relationship among the noisy speech, the clean speech, the noise estimate, and the phonetic targets. The proposed method outperformed noise-aware training (NAT), the conventional auxiliary-feature-based model adaptation technique, on the Aurora-5 DB. The second method is a DNN-based multi-channel feature enhancement technique.
In the general multi-channel speech recognition scenario, an enhanced single-source speech signal is first extracted from the multiple inputs using beamforming, a conventional signal-processing technique, and recognition is then performed by feeding that enhanced signal into the acoustic model. We propose a multi-channel feature enhancement algorithm that properly combines the delay-and-sum (DS) beamformer, one of the most basic conventional beamforming techniques, with a DNN. Through joint training with intermediate feature vectors, the proposed DNN effectively represents the relationship between the distorted multi-channel input speech signals and the clean speech signal. Experiments on the Multichannel Wall Street Journal Audio-Visual (MC-WSJ-AV) corpus showed that the proposed method outperforms the conventional multi-channel feature enhancement techniques. Finally, an uncertainty-aware training (UAT) technique is proposed. Most of the existing DNN-based techniques for robust ASR, including the ones introduced above, optimize point estimates of their targets (e.g., clean features and acoustic model parameters), which compromises the reliability of those estimates. To overcome this issue, UAT employs a modified structure of the variational autoencoder (VAE), a neural network model that learns and performs stochastic variational inference (VIF). UAT models the robust latent variables that mediate the mapping between the noisy observed features and the phonetic targets, using the distributional information of the clean feature estimates. The latent variables are trained according to the maximum likelihood criterion derived from an uncertainty decoding (UD) framework optimized for deep-learning-based acoustic models. The proposed technique outperforms the conventional DNN-based techniques on the Aurora-4 and CHiME-4 databases.

Contents:
1 Introduction
2 Background
  2.1 Deep Neural Networks
  2.2 Experimental Database
    2.2.1 Aurora-4 DB
    2.2.2 Aurora-5 DB
    2.2.3 MC-WSJ-AV DB
    2.2.4 CHiME-4 DB
3 Two-stage Noise-aware Training for Environment-robust Speech Recognition
  3.1 Introduction
  3.2 Noise-aware Training
  3.3 Two-stage NAT
    3.3.1 Lower DNN
    3.3.2 Upper DNN
    3.3.3 Joint Training
  3.4 Experiments
    3.4.1 GMM-HMM System
    3.4.2 Training and Structures of DNN-based Techniques
    3.4.3 Performance Evaluation
  3.5 Summary
4 DNN-based Feature Enhancement for Robust Multichannel Speech Recognition
  4.1 Introduction
  4.2 Observation Model in Multi-Channel Reverberant Noisy Environment
  4.3 Proposed Approach
    4.3.1 Lower DNN
    4.3.2 Upper DNN and Joint Training
  4.4 Experiments
    4.4.1 Recognition System and Feature Extraction
    4.4.2 Training and Structures of DNN-based Techniques
    4.4.3 Dropout
    4.4.4 Performance Evaluation
  4.5 Summary
5 Uncertainty-aware Training for DNN-HMM System using Variational Inference
  5.1 Introduction
  5.2 Uncertainty Decoding for Noise Robustness
  5.3 Variational Autoencoder
  5.4 VIF-based Uncertainty-aware Training
    5.4.1 Clean Uncertainty Network
    5.4.2 Environment Uncertainty Network
    5.4.3 Prediction Network and Joint Training
  5.5 Experiments
    5.5.1 Experimental Setup: Feature Extraction and ASR System
    5.5.2 Network Structures
    5.5.3 Effects of CUN on the Noise Robustness
    5.5.4 Uncertainty Representation in Different SNR Conditions
    5.5.5 Result of Speech Recognition
    5.5.6 Result of Speech Recognition with LSTM-HMM
  5.6 Summary
6 Conclusions
Bibliography
Abstract (in Korean)
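The first technique above builds on noise-aware training, in which a per-utterance noise estimate is appended to the noisy input features as an auxiliary vector before the acoustic model sees them. The sketch below illustrates that input construction together with a small feed-forward acoustic model; it is a minimal illustration under assumed feature dimensions, context window, and layer sizes, not the thesis's implementation (which extends the idea into the two-stage NAT of Chapter 3).

```python
# Minimal sketch of noise-aware training (NAT) style input construction: a per-utterance
# noise estimate is appended to every frame of the noisy features before they are fed to
# a DNN acoustic model. Dimensions and network sizes are illustrative assumptions.
import torch
import torch.nn as nn

NUM_MEL = 40          # log-mel filterbank size (assumed)
CONTEXT = 5           # +/- context frames spliced around the center frame (assumed)
NUM_SENONES = 2000    # number of tied-state (senone) targets (assumed)


def splice(feats: torch.Tensor, context: int) -> torch.Tensor:
    """Concatenate +/- context neighbouring frames to each frame: (T, D) -> (T, (2c+1)D)."""
    padded = torch.cat([feats[:1].repeat(context, 1), feats, feats[-1:].repeat(context, 1)])
    return torch.cat([padded[i:i + feats.shape[0]] for i in range(2 * context + 1)], dim=1)


def add_noise_estimate(noisy: torch.Tensor, num_noise_frames: int = 10) -> torch.Tensor:
    """Append a crude noise estimate (mean of the leading frames, assumed speech-free)
    to every spliced frame, as in noise-aware training."""
    noise_est = noisy[:num_noise_frames].mean(dim=0, keepdim=True)          # (1, D)
    spliced = splice(noisy, CONTEXT)                                        # (T, (2c+1)D)
    return torch.cat([spliced, noise_est.expand(spliced.shape[0], -1)], 1)  # (T, (2c+2)D)


acoustic_model = nn.Sequential(                       # simple DNN-HMM style classifier
    nn.Linear((2 * CONTEXT + 1 + 1) * NUM_MEL, 1024), nn.ReLU(),
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, NUM_SENONES),                     # senone logits; softmax is in the loss
)

if __name__ == "__main__":
    noisy_logmel = torch.randn(300, NUM_MEL)          # stand-in for one noisy utterance
    inputs = add_noise_estimate(noisy_logmel)
    senone_logits = acoustic_model(inputs)            # (300, NUM_SENONES)
    print(inputs.shape, senone_logits.shape)
```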

    AUDIO SCENE SEGMENTATION USING A MICROPHONE ARRAY AND AUDITORY FEATURES

    An auditory stream denotes the abstract effect a source creates in the mind of the listener. An auditory scene consists of many streams, which the listener uses to analyze and understand the environment. Computer analyses that attempt to mimic human analysis of a scene must first perform Audio Scene Segmentation (ASS). ASS finds applications in surveillance, automatic speech recognition, and human-computer interfaces. Microphone arrays can be employed to extract streams corresponding to spatially separated sources. However, when a source moves to a new location during a period of silence, such a system loses track of the source, which results in multiple spatially localized streams for the same source. This thesis proposes to identify local streams associated with the same source using auditory features extracted from the beamformed signal. ASS using spatial cues is performed first; auditory features are then extracted, and segments are linked together based on the similarity of their feature vectors. An experiment was carried out with two simultaneous speakers, using a classifier to assign the localized streams to one speaker or the other. The best performance was achieved when pitch appended with Gammatone Frequency Cepstral Coefficients (GFCC) was used as the feature vector, yielding an accuracy of 96.2%.
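To make the linking step concrete, the sketch below pairs a crude per-segment feature vector (an autocorrelation pitch estimate appended with gammatone-filterbank cepstral coefficients) with a simple classifier that assigns localized segments to speakers. It is only a rough illustration under assumed parameters (16 kHz audio, 20 quasi-ERB-spaced channels, an SVM); the thesis's exact GFCC front-end and classifier are not reproduced here.

```python
# Rough sketch: per-segment pitch + gammatone cepstral (GFCC-like) features, then an SVM
# that links spatially localized segments to speakers. Sampling rate, channel count,
# filter length, and the classifier choice are assumptions.
import numpy as np
from scipy.fftpack import dct
from sklearn.svm import SVC

FS = 16000

def gammatone_fir(fc, fs=FS, n_taps=512, order=4):
    """4th-order gammatone impulse response at centre frequency fc (textbook form)."""
    t = np.arange(n_taps) / fs
    erb = 24.7 * (4.37 * fc / 1000.0 + 1.0)
    g = t ** (order - 1) * np.exp(-2 * np.pi * 1.019 * erb * t) * np.cos(2 * np.pi * fc * t)
    return g / (np.abs(g).sum() + 1e-12)

def gfcc_like(segment, n_channels=20, n_ceps=13):
    """Log energies of a quasi-ERB-spaced gammatone filterbank, decorrelated with a DCT."""
    fcs = np.geomspace(100, 6000, n_channels)   # approximate ERB-like spacing
    energies = np.array([np.mean(np.convolve(segment, gammatone_fir(fc), mode="same") ** 2)
                         for fc in fcs])
    return dct(np.log(energies + 1e-12), norm="ortho")[:n_ceps]

def pitch_autocorr(segment, fmin=60.0, fmax=400.0):
    """Very rough pitch estimate from the autocorrelation peak in the voice range."""
    seg = segment - segment.mean()
    ac = np.correlate(seg, seg, mode="full")[len(seg) - 1:]
    lo, hi = int(FS / fmax), int(FS / fmin)
    return FS / (lo + np.argmax(ac[lo:hi]))

def segment_feature(segment):
    return np.concatenate(([pitch_autocorr(segment)], gfcc_like(segment)))

# Hypothetical usage: `train_segments` are beamformed, spatially localized segments with
# known speaker labels; `new_segments` are localized segments to be linked to a speaker.
def link_segments(train_segments, train_labels, new_segments):
    clf = SVC(kernel="rbf").fit([segment_feature(s) for s in train_segments], train_labels)
    return clf.predict([segment_feature(s) for s in new_segments])
```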

    Robust overlapping speech recognition based on neural networks

    We address issues in improving hands-free speech recognition performance in the presence of multiple simultaneous speakers using multiple distant microphones. In this paper, a log spectral mapping is proposed to estimate the log mel-filterbank outputs of clean speech from multiple noisy speech signals using neural networks. Both the mapping of the far-field speech and the combination of the enhanced speech with the estimated interfering speech are investigated. Our neural-network-based feature enhancement method incorporates the noise information and can be viewed as a non-linear log spectral subtraction. Experimental studies on the MONC corpus showed that the MLP-based mapping techniques yield an improvement in recognition accuracy for overlapping speech.
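A minimal sketch of such a mapping network is given below: an MLP that takes the log mel-filterbank energies of the enhanced (far-field) speech concatenated with those of the estimated interfering speech and regresses the clean log mel-filterbank outputs. The layer sizes, filterbank size, and optimizer are assumptions, not the paper's configuration.

```python
# Sketch of an MLP mapping noisy/interfering log mel-filterbank energies (MFBEs) to clean
# MFBEs, i.e. a learned non-linear log spectral subtraction. All sizes are assumptions.
import torch
import torch.nn as nn

N_MEL = 24  # log mel-filterbank size (assumed)

mapper = nn.Sequential(
    nn.Linear(2 * N_MEL, 512), nn.Sigmoid(),   # input: [enhanced MFBE ; interfering MFBE]
    nn.Linear(512, 512), nn.Sigmoid(),
    nn.Linear(512, N_MEL),                     # output: estimated clean MFBE
)

def train_step(enhanced, interferer, clean, optimizer, loss_fn=nn.MSELoss()):
    """One MSE regression step on a batch of frames, each tensor shaped (batch, N_MEL)."""
    optimizer.zero_grad()
    pred = mapper(torch.cat([enhanced, interferer], dim=1))
    loss = loss_fn(pred, clean)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    opt = torch.optim.Adam(mapper.parameters(), lr=1e-3)
    e, i, c = (torch.randn(32, N_MEL) for _ in range(3))   # stand-ins for aligned frames
    print(train_step(e, i, c, opt))
```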

    Machine Learning and Signal Processing Design for Edge Acoustic Applications

    Neural Network based Regression for Robust Overlapping Speech Recognition using Microphone Arrays

    This paper investigates a neural-network-based acoustic feature mapping to extract robust features for automatic speech recognition (ASR) of overlapping speech. In our preliminary studies, we trained neural networks to learn the mapping from log mel filter bank energies (MFBEs) extracted from distant microphone recordings, which include multiple overlapping speakers, to log MFBEs extracted from the clean speech signal. In this paper, we explore the mapping of higher-order mel-frequency cepstral coefficients (MFCCs) to lower-order coefficients. We also investigate the mapping of features from both the target and the interfering distant sound sources to the clean target features. This is achieved by using the microphone array to extract features from both the direction of the target and that of the interfering sound sources. We demonstrate the effectiveness of the proposed approach through extensive evaluations on the MONC corpus, which includes both non-overlapping single-speaker and overlapping multi-speaker conditions.
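The higher-order-to-lower-order mapping can be sketched as follows: frame-wise MFCCs from beams steered at the target and at the interferer are stacked as the input, and a regressor predicts the lower-order clean-speech MFCCs. The feature orders, the network size, and the use of librosa and scikit-learn here are illustrative assumptions, not the paper's setup.

```python
# Sketch: map higher-order MFCCs from the target- and interferer-direction beams to
# lower-order clean-speech MFCCs. Feature orders and the regressor are assumptions.
import numpy as np
import librosa
from sklearn.neural_network import MLPRegressor

SR = 16000
N_HIGH, N_LOW = 30, 13   # higher-order input MFCCs vs. lower-order target MFCCs (assumed)

def mfcc_frames(signal, n_mfcc):
    """Frame-wise MFCCs, shaped (num_frames, n_mfcc)."""
    return librosa.feature.mfcc(y=signal, sr=SR, n_mfcc=n_mfcc).T

def make_pairs(target_beam, interferer_beam, clean):
    """Input = [target-beam MFCCs ; interferer-beam MFCCs], target = clean low-order MFCCs."""
    x = np.hstack([mfcc_frames(target_beam, N_HIGH), mfcc_frames(interferer_beam, N_HIGH)])
    y = mfcc_frames(clean, N_LOW)
    n = min(len(x), len(y))              # align frame counts defensively
    return x[:n], y[:n]

# Hypothetical usage with parallel recordings of one utterance:
# x, y = make_pairs(target_beam_signal, interferer_beam_signal, clean_signal)
# mapper = MLPRegressor(hidden_layer_sizes=(512, 512), max_iter=50).fit(x, y)
# robust_features = mapper.predict(x)    # fed to the ASR front-end in place of noisy MFCCs
```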

    COMPARISON METRICS AND PERFORMANCE ESTIMATIONS FOR DEEP BEAMFORMING DEEP NEURAL NETWORK BASED AUTOMATIC SPEECH RECOGNITION SYSTEMS USING MICROPHONE-ARRAYS

    Automatic Speech Recognition (ASR) functionality, the automatic transcription of speech into text, is on the rise today and is required for various use-cases, scenarios, and applications. An ASR engine by itself faces difficulties when encountering live audio input, regardless of how sophisticated and advanced it may be. That is especially true under circumstances such as a noisy ambient environment, multiple speakers, or faulty microphones. These kinds of challenges characterize a realistic scenario for an ASR system. ASR functionality continues to evolve toward more comprehensive End-to-End (E2E) solutions. E2E solution development focuses on three significant characteristics. The solution has to be robust enough to endure external interferences. It also has to maintain flexibility, so that it can easily be extended to adapt to new scenarios or to achieve better performance. Lastly, the solution should be modular enough to fit conveniently into new applications. Such an E2E ASR solution may include several speech-enhancement micro-modules besides the ASR engine, which is very complicated by itself. Adding these micro-modules can enhance the robustness and improve the overall system performance. Examples of such micro-modules include noise cancellation, speech separation, multi-microphone arrays, and adaptive beamformers. A comprehensive solution built of numerous micro-modules is technologically challenging to implement and to integrate into resource-limited mobile systems. By offloading the complex computations to a server in the cloud, the system can fit more easily into less capable computing devices. Nevertheless, compute offloading comes at the cost of giving up real-time analysis and increasing the overall system bandwidth, and it requires connectivity to the cloud over the internet. To find the optimal trade-offs between performance, Hardware (HW) and Software (SW) requirements or limitations, the maximal computation time allowed for real-time analysis, and the detection accuracy, one should first define the metrics used for the evaluation of such an E2E ASR system, and then determine the extent of correlation between those metrics and the ability to forecast the impact a variation in one has on the others. This research presents novel progress in optimally designing a robust E2E ASR system targeted at mobile, resource-limited devices. First, we describe evaluation metrics for each domain of interest, spread over vast engineering subjects, emphasizing the bindings between metrics across domains and the degree of impact derived from a change in the system's specifications or constraints. Second, we present the effectiveness of applying machine learning techniques that generalize well and provide improved overall performance and robustness. Third, we present an approach of substituting architectures, changing algorithms, and approximating complex computations with custom dedicated hardware acceleration in order to replace traditional state-of-the-art SW-based solutions, thus providing real-time analysis capabilities to resource-limited systems.
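As a concrete example of the comparison metrics discussed above, the sketch below computes word error rate (WER) with a standard edit-distance alignment and a real-time factor (RTF); both are common ASR evaluation metrics, and their use here as stand-ins for the thesis's full metric set is an assumption.

```python
# Two common ASR comparison metrics: word error rate (WER) via Levenshtein alignment,
# and real-time factor (RTF). Illustrative stand-ins for the metric set discussed above.
import numpy as np

def wer(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    d = np.zeros((len(ref) + 1, len(hyp) + 1), dtype=int)
    d[:, 0] = np.arange(len(ref) + 1)          # deletions
    d[0, :] = np.arange(len(hyp) + 1)          # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + cost)
    return d[len(ref), len(hyp)] / max(len(ref), 1)

def real_time_factor(processing_seconds: float, audio_seconds: float) -> float:
    """RTF < 1 means the system keeps up with real-time input."""
    return processing_seconds / audio_seconds

print(wer("the cat sat on the mat", "the cat sat the mad"))          # 2 errors / 6 words
print(real_time_factor(processing_seconds=1.2, audio_seconds=4.0))   # 0.3
```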

    Recurrent neural networks for multi-microphone speech separation

    This thesis takes the classical signal processing problem of separating the speech of a target speaker from a real-world audio recording containing noise, background interference (from competing speech or other non-speech sources), and reverberation, and seeks data-driven solutions based on supervised learning methods, particularly recurrent neural networks (RNNs). Such speech separation methods can inject robustness into automatic speech recognition (ASR) systems and have been an active area of research for the past two decades. We particularly focus on applications where multi-channel recordings are available. Stand-alone beamformers cannot simultaneously suppress diffuse noise and protect the desired signal from distortion. Post-filters complement the beamformers in obtaining the minimum mean squared error (MMSE) estimate of the desired signal. Time-frequency (TF) masking, a method with roots in computational auditory scene analysis (CASA), is a suitable candidate for post-filtering, but the challenge lies in estimating the TF masks. The use of RNNs, in particular the bi-directional long short-term memory (BLSTM) architecture, as a post-filter that estimates TF masks for a delay-and-sum beamformer (DSB) using magnitude spectral and phase-based features is proposed. The data from the CHiME-3 challenge, recorded in four challenging realistic environments, is used. Two different TF masks, the Wiener filter and the log-ratio mask, are identified as suitable targets for learning. The separated speech is evaluated with objective speech intelligibility measures: short-term objective intelligibility (STOI) and frequency-weighted segmental SNR (fwSNR). The word error rates (WERs) reported by the previous state-of-the-art ASR back-end, when fed with the CHiME-3 test data, are interpreted against the objective scores to understand the relationship between the two. Overall, the RNNs bring a consistent improvement in the objective scores compared with feed-forward neural networks and a baseline MVDR beamformer.
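The core idea, a BLSTM that predicts a TF mask which is then applied to the beamformed spectrogram, can be sketched as follows; the input features (log-magnitude spectra only), layer sizes, and mask target are simplified assumptions rather than the thesis's exact setup.

```python
# Minimal sketch of a BLSTM post-filter that estimates a time-frequency mask for a
# delay-and-sum beamformer output. Features, layer sizes, and the mask target are assumed.
import torch
import torch.nn as nn

N_FREQ = 257  # STFT bins for a 512-point FFT (assumed)

class MaskEstimator(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.blstm = nn.LSTM(N_FREQ, hidden, num_layers=2, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, N_FREQ)

    def forward(self, log_mag):                 # log_mag: (batch, frames, N_FREQ)
        h, _ = self.blstm(log_mag)
        return torch.sigmoid(self.out(h))       # mask in [0, 1], same shape as input

def apply_postfilter(dsb_stft: torch.Tensor, model: MaskEstimator) -> torch.Tensor:
    """Mask the complex STFT of the delay-and-sum beamformer output."""
    mask = model(torch.log(dsb_stft.abs() + 1e-8))
    return mask * dsb_stft                      # separated-speech STFT estimate

if __name__ == "__main__":
    model = MaskEstimator()
    fake_stft = torch.randn(1, 100, N_FREQ, dtype=torch.complex64)  # stand-in DSB output
    enhanced = apply_postfilter(fake_stft, model)
    print(enhanced.shape)                       # torch.Size([1, 100, 257])
```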