1,264 research outputs found

    HiFi-GAN: High-Fidelity Denoising and Dereverberation Based on Speech Deep Features in Adversarial Networks

    Real-world audio recordings are often degraded by factors such as noise, reverberation, and equalization distortion. This paper introduces HiFi-GAN, a deep learning method that transforms recorded speech to sound as though it had been recorded in a studio. We use an end-to-end feed-forward WaveNet architecture, trained with multi-scale adversarial discriminators in both the time domain and the time-frequency domain, and rely on the discriminators' deep feature matching losses to improve the perceptual quality of the enhanced speech. The proposed model generalizes well to new speakers, new speech content, and new environments, and significantly outperforms state-of-the-art baselines in both objective and subjective experiments.
    Comment: Accepted by INTERSPEECH 202
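The deep feature matching loss the abstract mentions compares discriminator activations for real and generated audio rather than raw waveforms. A minimal NumPy sketch of such a loss (the layer count, shapes, and uniform layer weighting here are hypothetical, not the paper's exact configuration):

```python
import numpy as np

def feature_matching_loss(real_feats, fake_feats):
    """L1 distance between discriminator feature maps of real and generated
    audio, averaged over layers (sketch of the deep feature matching idea)."""
    per_layer = [np.mean(np.abs(r - f)) for r, f in zip(real_feats, fake_feats)]
    return float(np.mean(per_layer))

# toy activations from a 3-layer discriminator
rng = np.random.default_rng(0)
real = [rng.standard_normal((4, 16)) for _ in range(3)]
fake = [r + 0.1 for r in real]          # generated features offset by 0.1
loss = feature_matching_loss(real, fake)
```

In training, this term is added to the adversarial loss so the generator is pushed to match intermediate representations, not just fool the final discriminator output.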

    A Study on Deep Learning-Based Techniques for Noise-Robust Voice Activity Detection and Speech Enhancement

    Doctoral dissertation, Graduate School of Seoul National University, Department of Electrical and Computer Engineering, February 2017. Advisor: Nam Soo Kim.

Over the past decades, a number of approaches have been proposed to improve the performance of voice activity detection (VAD) and speech enhancement algorithms, which are crucial for speech communication and speech signal processing systems. In particular, the increasing use of machine learning-based techniques has led to more robust algorithms in low-SNR conditions; among them, the deep neural network (DNN) has been one of the most popular. While DNN-based techniques have been successfully applied to these tasks, the characteristics of the VAD and speech enhancement tasks are not fully incorporated into the DNN structures and objective functions. In this thesis, we propose novel training schemes and a post-filter for DNN-based VAD and speech enhancement. Unlike algorithms built on the basic DNN framework, the proposed algorithms combine knowledge from the signal processing and machine learning communities to develop improved DNN-based VAD and speech enhancement. In the following chapters, the environmental mismatch problem in VAD is compensated by applying multi-task learning to the DNN-based VAD. A DNN-based framework is also proposed for the speech enhancement scenario, where a novel objective function and post-filter, derived from the characteristics of human auditory perception, improve the DNN-based speech enhancement algorithm.

In the VAD task, a DNN-based algorithm was recently proposed and outperformed traditional and other machine learning-based VAD algorithms. However, its performance sometimes deteriorates when the training and test environments do not match. To improve the performance of DNN-based VAD in unseen environments, we adopt a multi-task learning (MTL) framework consisting of the primary VAD task and a subsidiary feature enhancement task. By employing the MTL framework, the DNN learns a denoising function in the shared hidden layers that helps maintain VAD performance in mismatched noise conditions.

Second, the DNN-based framework is applied to speech enhancement by treating it as a regression task: the encoding vector of the conventional nonnegative matrix factorization (NMF)-based algorithm is estimated by the proposed DNN, and its performance is compared with that of the conventional NMF-based algorithm. Third, a perceptually motivated objective function is proposed for DNN-based speech enhancement. The new objective function, employed in the DNN training stage, combines a Mel-scale weighted mean square error with temporal and spectral variation similarities between the enhanced and clean speech. It computes gradients on a perceptually motivated non-linear frequency scale and alleviates the over-smoothness of the estimated speech. Furthermore, a post-filter that adjusts the variance over frequency bins compensates for the lack of contrast between spectral peaks and valleys in the enhanced speech. Conventional global variance (GV) equalization post-filters do not consider the spectral dynamics over frequency bins; to restore the contrast between spectral peaks and valleys in each enhanced speech frame, the proposed algorithm matches the variance over coefficients in the log-power spectral domain. Finally, an integrated technique using the proposed perceptually motivated objective function and post-filter is described, and performance results of the conventional and proposed algorithms are discussed in matched and mismatched noise conditions.
Subjective preference test results for these algorithms are also provided.

Contents:
1 Introduction
2 Conventional Approaches for Speech Enhancement (2.1 NMF-Based Speech Enhancement)
3 Deep Neural Networks (3.1 Introduction; 3.2 Objective Function; 3.3 Stochastic Gradient Descent)
4 DNN-Based Voice Activity Detection with a Multi-Task Learning Framework (4.1 Introduction; 4.2 DNN-Based VAD Algorithm; 4.3 DNN-Based VAD with the MTL Framework; 4.4 Experimental Results in Matched and Mismatched Noise Conditions; 4.5 Summary)
5 NMF-Based Speech Enhancement Using a Deep Neural Network (5.1 Introduction; 5.2 Encoding Vector Estimation Using a DNN; 5.3 Experiments; 5.4 Summary)
6 DNN-Based Monaural Speech Enhancement with Temporal and Spectral Variation Equalization (6.1 Introduction; 6.2 Conventional DNN-Based Speech Enhancement: Training and Test Stages; 6.3 Perceptually Motivated Criteria: Objective Function, Mel-Scale Weighted Mean Square Error, Temporal and Spectral Variation Similarities, DNN Training; 6.4 Experiments: Weight Parameters, Matched and Mismatched Noise Conditions, Variation Analysis Methods, Subjective Tests; 6.5 Summary)
7 Spectral Variance Equalization Post-Filter for DNN-Based Speech Enhancement (7.1 Introduction; 7.2 GV Equalization Post-Filter; 7.3 Spectral Variance (SV) Equalization Post-Filter; 7.4 Experiments: Objective and Subjective Test Results; 7.5 Summary)
8 Conclusions
Bibliography; Appendix; Abstract (in Korean)
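The spectral variance (SV) equalization post-filter described above matches the variance of each enhanced frame's log-power spectrum to a target value, restoring the contrast between spectral peaks and valleys. A minimal NumPy sketch of that idea (in the thesis the target variance would be derived from clean-speech statistics; here it is a free parameter):

```python
import numpy as np

def sv_postfilter(log_power, target_var):
    """Rescale each frame of a log-power spectrogram so its variance over
    frequency bins matches target_var (sketch of SV equalization)."""
    out = np.empty_like(log_power, dtype=float)
    for t, frame in enumerate(log_power):
        mu, var = frame.mean(), frame.var()
        gain = np.sqrt(target_var / var) if var > 0 else 1.0
        out[t] = mu + gain * (frame - mu)   # stretch peaks/valleys around the mean
    return out

frames = np.array([[1.0, 2.0, 3.0], [0.0, 0.5, 1.0]])   # 2 frames x 3 bins
eq = sv_postfilter(frames, target_var=1.0)
```

The per-frame mean is preserved, so only the dynamic range across frequency bins changes, which is what distinguishes this from a simple gain.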

    Environmentally robust ASR front-end for deep neural network acoustic models

    This paper examines the individual and combined impacts of various front-end approaches on the performance of deep neural network (DNN) based speech recognition systems in distant-talking situations, where acoustic environmental distortion degrades recognition performance. Training of a DNN-based acoustic model consists of generating state alignments followed by learning the network parameters. The paper first shows that the network parameters are more sensitive to speech quality than the alignments are, and that this stage therefore requires improvement. Various front-end robustness approaches to this problem are then categorised by functionality, and the degree to which each class of approaches affects DNN-based acoustic models is examined experimentally. Based on the results, a front-end processing pipeline is proposed for efficiently combining different classes of approaches. Using this front-end, the combined effects are further evaluated in a single-distant-microphone meeting transcription task with both speaker-independent (SI) and speaker-adaptive-training (SAT) set-ups. By combining multiple speech enhancement results, multiple types of features, and feature transformation, the front-end yields relative performance gains of 7.24% and 9.83% in the SI and SAT scenarios, respectively, over competitive DNN-based systems using log mel-filterbank features.
    This is the final version of the article. It first appeared from Elsevier via http://dx.doi.org/10.1016/j.csl.2014.11.00
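The combination step in such a pipeline can be pictured as frame-wise concatenation of parallel feature streams (e.g. features computed from several enhanced versions of the same utterance) followed by a linear feature transform. A hedged NumPy sketch; the dimensions and the projection matrix are purely illustrative, and the paper's actual pipeline includes enhancement and normalization stages not shown here:

```python
import numpy as np

def combine_features(streams, transform=None):
    """Frame-wise concatenation of parallel feature streams, optionally
    followed by a linear projection (e.g. an LDA-style transform)."""
    feats = np.concatenate(streams, axis=1)      # (frames, sum of stream dims)
    if transform is not None:
        feats = feats @ transform
    return feats

# two toy 40-dim streams over 100 frames, projected down to 60 dims
rng = np.random.default_rng(0)
a, b = rng.standard_normal((100, 40)), rng.standard_normal((100, 40))
proj = rng.standard_normal((80, 60))             # illustrative projection matrix
out = combine_features([a, b], proj)
```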

    Microphone Array Speech Enhancement Via Beamforming Based Deep Learning Network

    In general, in-car speech enhancement is an application of microphone array speech enhancement in a particular acoustic environment. Speech enhancement inside moving cars remains an active topic, with researchers building modules to increase speech quality and intelligibility in the car. Passenger dialogue, the sound of other equipment, and a wide range of interference effects are major challenges for speech separation in the in-car environment. To address this, a novel Beamforming-based Deep Learning Network (Bf-DLN) is proposed for speech enhancement. First, the captured microphone array signals are pre-processed with an adaptive beamforming technique, Linearly Constrained Minimum Variance (LCMV). The proposed method then uses a time-frequency representation to transform the pre-processed data into an image: the smoothed pseudo-Wigner-Ville distribution (SPWVD) converts the time-domain speech inputs into images. A convolutional deep belief network (CDBN) extracts the most pertinent features from these transformed images, and an Enhanced Elephant Herding Algorithm (EEHA) selects the desired source by eliminating the interfering source. Experimental results demonstrate the effectiveness of the proposed strategy in removing background noise from the original speech signal; it outperforms existing methods in terms of PESQ, STOI, SSNRI, and SNR. The proposed Bf-DLN achieves a maximum PESQ of 1.98, whereas existing models such as the two-stage Bi-LSTM, DNN-C, and GCN reach 1.82, 1.75, and 1.68, respectively; the proposed method's PESQ is 1.75%, 3.15%, and 4.22% better than the existing GCN, DNN-C, and Bi-LSTM techniques. The efficacy of the proposed method is then validated by experiments.
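The LCMV beamformer used in the pre-processing stage minimizes output power subject to linear constraints and has the closed form w = R⁻¹C(CᴴR⁻¹C)⁻¹f. A small NumPy sketch under toy assumptions (the noise covariance and steering vector here are illustrative, not taken from the paper):

```python
import numpy as np

def lcmv_weights(R, C, f):
    """Closed-form LCMV beamformer: minimize w^H R w subject to C^H w = f,
    giving w = R^-1 C (C^H R^-1 C)^-1 f."""
    Ri_C = np.linalg.solve(R, C)                       # R^-1 C
    return Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, f)

# toy setup: 4 microphones, one distortionless constraint toward
# a hypothetical steering vector d
M = 4
d = np.ones((M, 1), dtype=complex)                     # illustrative steering vector
R = np.eye(M) + 0.1 * np.ones((M, M))                  # illustrative noise covariance
w = lcmv_weights(R, d, np.array([1.0 + 0j]))           # unit gain toward d
```

With a single distortionless constraint this reduces to the familiar MVDR beamformer; additional columns of C would null known interference directions.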

    An Exercise and Sports Equipment Recognition System

    Most mobile health management applications today require manual input or use sensors such as the accelerometer or GPS to record user data; the onboard camera remains underused. We propose an Exercise and Sports Equipment Recognition System (ESRS) that can recognize physical-activity equipment from raw image data. The system can be integrated with mobile phones, allowing the camera to become a primary input device for recording physical activity. We employ a deep convolutional neural network to train models capable of recognizing 14 different equipment categories, and we propose a preprocessing scheme that uses color normalization and denoising techniques to improve recognition accuracy. Our best model achieves a top-3 accuracy of 83.3% on the test dataset and improves upon GoogLeNet, the state-of-the-art network that won the ILSVRC 2014 challenge, on this dataset. Our work is extendable: improving the quality and size of the training dataset can further boost predictive accuracy.
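Color normalization, one of the preprocessing steps mentioned, is commonly done per channel. A minimal NumPy sketch of one such choice, zero-mean and unit-variance per channel (the paper's exact scheme is not specified, so this is an assumption):

```python
import numpy as np

def normalize_colors(img):
    """Per-channel zero-mean / unit-variance normalization of an H x W x C
    image (one common color-normalization choice)."""
    img = img.astype(np.float64)
    mean = img.mean(axis=(0, 1), keepdims=True)   # per-channel mean
    std = img.std(axis=(0, 1), keepdims=True)     # per-channel std
    return (img - mean) / np.maximum(std, 1e-8)   # guard against flat channels

rng = np.random.default_rng(0)
photo = rng.integers(0, 256, size=(64, 64, 3))    # toy RGB image
norm = normalize_colors(photo)
```

Normalizing per channel removes global lighting and color-cast differences between photos, which is why it tends to help recognition on uncontrolled mobile-camera input.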

    Music Source Separation Using Deep Neural Networks

    In recent years, Sound Source Separation (SSS) has been one of the most active fields within signal processing. Such algorithms seek to recreate the human ability to identify individual sound sources. In the music field, efforts are being made to isolate the main instruments from a single stereo audio mixture; the goal is to extract multiple audio files containing specific instruments, such as bass, voice, or drums. This project analyzes existing neural-network-based systems and their performance, examines the structure of the Open-Unmix algorithm in depth, and tries to improve its results.
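Open-Unmix-style systems separate sources by estimating per-source magnitude spectrograms with a network and applying soft (ratio) masks to the mixture spectrogram. A minimal NumPy sketch of the masking step (the magnitude estimates here are toy values, not network outputs):

```python
import numpy as np

def ratio_masks(est_mags):
    """Soft (ratio) masks from per-source magnitude estimates; the masks sum
    to one per bin, so the masked sources sum back to the mixture."""
    est = np.stack(est_mags)                     # (n_sources, freq, time)
    return est / np.maximum(est.sum(axis=0, keepdims=True), 1e-12)

# toy magnitude estimates for two sources over 1 frequency bin x 2 frames
vocals = np.array([[3.0, 1.0]])
bass = np.array([[1.0, 1.0]])
masks = ratio_masks([vocals, bass])              # vocals dominate the first bin
```

Each separated source is then recovered by multiplying its mask with the complex mixture spectrogram and taking an inverse STFT; Open-Unmix additionally refines these masks with a multichannel Wiener filter.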