
    Multichannel Speech Separation and Enhancement Using the Convolutive Transfer Function

    Get PDF
    This paper addresses the problem of speech separation and enhancement from multichannel convolutive and noisy mixtures, assuming known mixing filters. We propose to perform the speech separation and enhancement task in the short-time Fourier transform domain, using the convolutive transfer function (CTF) approximation. Compared to time-domain filters, CTFs have far fewer taps, and consequently fewer near-common zeros among channels and lower computational complexity. The work proposes three speech-source recovery methods: i) multichannel inverse filtering, i.e. the multiple-input/output inverse theorem (MINT), applied in the CTF domain; ii) for the multi-source case, a beamforming-like multichannel inverse filtering method that applies single-source MINT together with power minimization, suitable whenever the source CTFs are not all known; and iii) a constrained Lasso method, in which the sources are recovered by minimizing their $\ell_1$-norm to impose spectral sparsity, under the constraint that the $\ell_2$-norm fitting cost between the microphone signals and the mixing model involving the unknown source signals stays below a tolerance. Noise can be reduced by setting this tolerance according to the noise power. Experiments under various acoustic conditions are carried out to evaluate the three proposed methods, and they are compared with each other as well as with baseline methods. Comment: Submitted to IEEE/ACM Transactions on Audio, Speech and Language Processing.
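
    For concreteness, the constrained Lasso of method iii) can be written per frequency bin roughly as below (a sketch in notation introduced here, not reproduced from the paper; the $\ell_1$ term imposes spectral sparsity and the tolerance $\epsilon$ is tied to the noise power):

```latex
% Illustrative notation: s_j(p,k) are the unknown source STFT coefficients, x_i(p,k)
% the microphone coefficients, c_{ij}(q,k) the CTF from source j to microphone i.
\hat{s} = \arg\min_{\{s_j\}} \sum_{j,p} \bigl| s_j(p,k) \bigr|
\quad \text{s.t.} \quad
\sum_{i,p} \Bigl| x_i(p,k) - \sum_{j} \sum_{q} c_{ij}(q,k)\, s_j(p-q,k) \Bigr|^2 \le \epsilon
```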

    Blind MultiChannel Identification and Equalization for Dereverberation and Noise Reduction based on Convolutive Transfer Function

    Get PDF
    This paper addresses the problems of blind channel identification and multichannel equalization for speech dereverberation and noise reduction. The time-domain cross-relation method is not suitable for blind room impulse response identification, due to the near-common zeros of the long impulse responses. We extend the cross-relation method to the short-time Fourier transform (STFT) domain, in which the time-domain impulse responses are approximately represented by convolutive transfer functions (CTFs) with far fewer coefficients. The CTFs suffer from common zeros caused by the oversampled STFT. We propose to identify the CTFs based on the STFT of the oversampled signals together with critically sampled CTFs, which is a good compromise between the frequency aliasing of the signals and the common-zeros problem of the CTFs. In addition, a normalization of the CTFs is proposed to remove the gain ambiguity across sub-bands. In the STFT domain, the identified CTFs are used for multichannel equalization, in which the sparsity of speech signals is exploited. We propose to perform inverse filtering by minimizing the $\ell_1$-norm of the source signal, with the relaxed $\ell_2$-norm fitting error between the microphone signals and the convolution of the estimated source signal with the CTFs used as a constraint. This method is advantageous in that noise can be reduced by relaxing the $\ell_2$-norm to a tolerance corresponding to the noise power, and this tolerance can be set automatically. The experiments confirm the efficiency of the proposed method even under conditions with high reverberation levels and intense noise. Comment: 13 pages, 5 figures, 5 tables.
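
    The CTF-domain cross-relation underlying the identification step can be sketched as follows (a simplified two-channel, noise-free statement in notation introduced here; the paper's normalization that excludes the trivial zero solution and fixes the inter-band gains is omitted):

```latex
% Per frequency bin k: x_i(p,k) = sum_q c_i(q,k) s(p-q,k), so convolving channel 1
% with the CTF of channel 2 equals the converse,
\sum_{q} x_1(p-q,k)\, c_2(q,k) \;=\; \sum_{q} x_2(p-q,k)\, c_1(q,k),
% and the CTFs are identified (up to a scale) by minimizing the cross-relation error
J(c_1, c_2, k) = \sum_{p} \Bigl| \sum_{q} x_1(p-q,k)\, c_2(q,k) - \sum_{q} x_2(p-q,k)\, c_1(q,k) \Bigr|^2 .
```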

    Multichannel Online Dereverberation based on Spectral Magnitude Inverse Filtering

    Full text link
    This paper addresses the problem of multichannel online dereverberation. The proposed method is carried out in the short-time Fourier transform (STFT) domain, for each frequency band independently. In the STFT domain, the time-domain room impulse response is approximately represented by the convolutive transfer function (CTF). The multichannel CTFs are adaptively identified based on the cross-relation method, using a recursive least-squares criterion. Instead of the complex-valued CTF convolution model, we use a nonnegative convolution model between the STFT magnitude of the source signal and the CTF magnitude; this is only a coarse approximation of the former model, but it is shown to be more robust against CTF perturbations. Based on this nonnegative model, we propose an online STFT-magnitude inverse filtering method. The inverse filters of the CTF magnitude are formulated based on the multiple-input/output inverse theorem (MINT) and adaptively estimated by gradient descent. Finally, the inverse filtering is applied to the STFT magnitude of the microphone signals, yielding an estimate of the STFT magnitude of the source signal. Experiments on both speech enhancement and automatic speech recognition demonstrate that the proposed method can effectively suppress reverberation, even for the difficult case of a moving speaker. Comment: Paper submitted to IEEE/ACM Transactions on Audio, Speech and Language Processing. IEEE Signal Processing Letters, 201
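
    A rough offline illustration of the magnitude-domain inverse filtering for a single frequency band is given below (NumPy, batch gradient descent rather than the paper's online/adaptive processing; names such as ctf_mag, mic_mag, filt_len and the step size are placeholders):

```python
import numpy as np

def convmtx(h, n_cols):
    """Convolution (Toeplitz) matrix such that convmtx(h, n) @ g == np.convolve(h, g)."""
    H = np.zeros((len(h) + n_cols - 1, n_cols))
    for j in range(n_cols):
        H[j:j + len(h), j] = h
    return H

def magnitude_mint(ctf_mag, filt_len, lr=1e-3, n_iter=5000):
    """Estimate inverse filters g_m so that sum_m g_m * |c_m| approximates a unit
    impulse (MINT target), via gradient descent on the squared error.
    ctf_mag: list of per-channel CTF magnitude sequences (assumed equal length)."""
    n_ch = len(ctf_mag)
    C = np.hstack([convmtx(np.asarray(c, float), filt_len) for c in ctf_mag])
    d = np.zeros(C.shape[0]); d[0] = 1.0          # target: unit impulse (no modeling delay)
    g = np.zeros(C.shape[1])
    for _ in range(n_iter):
        g -= lr * C.T @ (C @ g - d)               # gradient of 0.5 * ||C g - d||^2
    return g.reshape(n_ch, filt_len)

def apply_inverse(mic_mag, g):
    """Filter each channel's STFT-magnitude sequence with its inverse filter and sum."""
    n_frames = len(mic_mag[0])
    est = sum(np.convolve(x, gm)[:n_frames] for x, gm in zip(mic_mag, g))
    return np.maximum(est, 0.0)                   # keep the magnitude estimate nonnegative
```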

    A Structurally Regularized CNN Architecture via Adaptive Subband Decomposition

    Full text link
    We propose a generalized convolutional neural network (CNN) architecture that first decomposes the input signal into subbands with an adaptive filter bank structure, and then uses convolutional layers to extract features from each subband independently. Fully connected layers finally combine the extracted features to perform classification. The proposed architecture restrains each subband CNN from learning from the entire input signal spectrum, resulting in structural regularization. The proposed CNN architecture is fully compatible with the end-to-end learning mechanism of typical CNN architectures and learns the subband decomposition from the input dataset. We show that the proposed architecture has attractive properties, such as robustness to input and weight-and-bias quantization noise, compared to regular full-band CNN architectures. Importantly, the proposed architecture significantly reduces computational costs while maintaining state-of-the-art classification accuracy. Experiments on image classification tasks using the MNIST, CIFAR-10/100, Caltech-101, and ImageNet-2012 datasets show that the proposed architecture achieves accuracy surpassing state-of-the-art results. On the ImageNet-2012 dataset, we achieved top-5 and top-1 validation set accuracies of 86.91% and 69.73%, respectively. Notably, the proposed architecture offers over 90% reduction in computational cost in the inference path and approximately 75% reduction in back-propagation cost (per iteration) with just a single-layer subband decomposition. With a two-layer subband decomposition, the computational gains are even more significant, with accuracy comparable to the single-layer decomposition.
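
    A minimal PyTorch sketch of this structure is given below (a simplification under assumptions made here, not the authors' implementation: the learned filter bank is reduced to one convolution plus decimation per subband, and all layer sizes are arbitrary):

```python
import torch
import torch.nn as nn

class SubbandCNN(nn.Module):
    """Structurally regularized CNN: learned subband split -> independent per-subband
    CNNs -> fully connected classifier. Layer sizes are arbitrary placeholders."""
    def __init__(self, in_ch=3, n_subbands=4, n_classes=10):
        super().__init__()
        # "Adaptive filter bank": one learnable convolution plus decimation per subband,
        # so each branch receives a band-limited, downsampled view of the input.
        self.filter_bank = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, in_ch, 5, padding=2), nn.AvgPool2d(2))
            for _ in range(n_subbands)
        ])
        # Independent subband CNNs: no branch ever sees the full-band spectrum.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            )
            for _ in range(n_subbands)
        ])
        self.classifier = nn.Linear(n_subbands * 32 * 4 * 4, n_classes)

    def forward(self, x):
        feats = [branch(fb(x)) for fb, branch in zip(self.filter_bank, self.branches)]
        return self.classifier(torch.cat(feats, dim=1))

logits = SubbandCNN()(torch.randn(8, 3, 32, 32))   # e.g. a CIFAR-10-sized batch
```

    The structural regularization comes from each branch being trained only on its own subband, while the whole model, including the decomposition itself, remains end-to-end trainable.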

    An investigation of the utility of monaural sound source separation via nonnegative matrix factorization applied to acoustic echo and reverberation mitigation for hands-free telephony

    Get PDF
    In this thesis we investigate the applicability and utility of Monaural Sound Source Separation (MSSS) via Nonnegative Matrix Factorization (NMF) for various problems related to audio for hands-free telephony. We first investigate MSSS via NMF as an alternative acoustic echo reduction approach to existing approaches such as Acoustic Echo Cancellation (AEC). To this end, we present the single-channel acoustic echo problem as an MSSS problem, in which the objective is to extract the user's signal from a mixture also containing acoustic echo and noise. To perform separation, NMF is used to decompose the near-end microphone signal onto the union of two nonnegative bases in the magnitude Short-Time Fourier Transform domain. One of these bases is for the spectral energy of the acoustic echo signal, and is formed from the incoming far-end user's speech, while the other basis is for the spectral energy of the near-end speaker, and is trained with speech data a priori. In comparison to AEC, the speaker extraction approach obviates Double-Talk Detection (DTD), and is demonstrated to attain its maximal echo mitigation performance immediately upon initiation and to maintain that performance during and after room changes, for similar computational requirements. Speaker extraction is also shown to introduce distortion of the near-end speech signal during double-talk, which is quantified by means of a speech distortion measure and compared to that of AEC. Subsequently, we address DTD for block-based AEC algorithms. We propose a novel block-based DTD algorithm that uses the available signals and the estimate of the echo signal produced by NMF-based speaker extraction to compute a suitably normalized correlation-based decision variable, which is compared to a fixed threshold to decide on double-talk. Using a standard evaluation technique, the proposed algorithm is shown to have detection performance comparable to an existing conventional block-based DTD algorithm. It is also demonstrated to inherit the room-change insensitivity of speaker extraction, with the proposed DTD algorithm generating minimal false double-talk indications upon initiation and in response to room changes in comparison to the existing conventional DTD. We also show that this property allows its paired AEC to converge at a rate close to the optimum. Another focus of this thesis is the problem of inverting a single measurement of a non-minimum phase Room Impulse Response (RIR). We describe the process by which perceptually detrimental all-pass phase distortion arises in reverberant speech filtered by the inverse of the minimum phase component of the RIR; in short, such distortion arises from inverting the magnitude response of the high-Q maximum phase zeros of the RIR. We then propose two novel partial inversion schemes that precisely mitigate this distortion. One of these schemes employs NMF-based MSSS to separate the all-pass phase distortion from the target speech in the magnitude STFT domain, while the other modifies the inverse minimum phase filter such that the magnitude response of the maximum phase zeros of the RIR is not fully compensated. Subjective listening tests reveal that the proposed schemes generally produce better quality output speech than a comparable inversion technique.
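
    The speaker-extraction step can be illustrated with the toy NumPy sketch below (assumptions made here, not the thesis code: Euclidean-cost multiplicative updates, both magnitude-spectrogram bases held fixed, and only the activations adapted):

```python
import numpy as np

def extract_near_end(V, W_echo, W_near, n_iter=100, eps=1e-12):
    """V: near-end microphone magnitude spectrogram (freq x frames).
    W_echo: basis built from far-end speech magnitudes; W_near: pre-trained speech basis."""
    W = np.hstack([W_echo, W_near])                  # union of the two fixed bases
    H = np.random.rand(W.shape[1], V.shape[1])       # nonnegative activations
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ (W @ H) + eps)       # multiplicative update, W held fixed
    k = W_echo.shape[1]
    V_echo, V_near = W_echo @ H[:k], W_near @ H[k:]  # per-component magnitude estimates
    mask = V_near / (V_near + V_echo + eps)          # Wiener-like mask for the near-end talker
    return mask * V, V_echo                          # near-end estimate and echo estimate
```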

    Adaptive Filtered-x Algorithms for Room Equalization Based on Block-Based Combination Schemes

    Full text link
    Room equalization has become essential for sound reproduction systems to provide the listener with the desired acoustical sensation. Recently, adaptive filters have been proposed as an effective tool at the core of these systems. In this context, this paper introduces several novel schemes based on the idea of combining adaptive filters: a versatile and flexible approach that yields adaptive schemes combining the capabilities of several independent adaptive filters. In particular, we investigate the advantages of a scheme called combination of block-based adaptive filters, which combines the filters blockwise by splitting them into nonoverlapping blocks. This idea was previously applied to the plant identification problem, but it has to be properly modified to obtain suitable behavior in the equalization application. Moreover, we propose a scheme aimed at further improving equalization performance using a priori knowledge of the energy distribution of the optimal inverse filter, where the block filters are chosen to match the energy distribution of the coefficients. Furthermore, the biased block-based filter is introduced as a particular case of the combination scheme, especially suited for low signal-to-noise ratios (SNRs) or sparse scenarios. Although the combined schemes can be employed with any kind of adaptive filter, we use the filtered-x improved proportionate normalized least mean square algorithm as the basis of the proposed algorithms, which allows us to introduce a novel combination scheme based on partitioned block schemes where different blocks of the adaptive filter use different parameter settings. Several experiments evaluate the proposed algorithms in terms of convergence speed and steady-state behavior for different degrees of sparseness and SNRs. The work of L. A. Azpicueta-Ruiz was supported in part by the Comunidad de Madrid through CASI-CAM-CM under Grant S2013/ICE-2845, in part by the Spanish Ministry of Economy and Competitiveness through DAMA under Grant TIN2015-70308-REDT and Grant TEC2014-52289-R, and in part by the European Union. The work of L. Fuster, M. Ferrer, and M. de Diego was supported in part by the EU together with the Spanish Government under Grant TEC2015-67387-C4-1-R (MINECO/FEDER), and in part by the Generalitat Valenciana under Grant PROMETEOII/2014/003. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Simon Doclo. Fuster Criado, L.; Diego Antón, M. D.; Azpicueta-Ruiz, L. A.; Ferrer Contreras, M. (2016). Adaptive Filtered-x Algorithms for Room Equalization Based on Block-Based Combination Schemes. IEEE/ACM Transactions on Audio, Speech and Language Processing, 24(10), 1732-1745. https://doi.org/10.1109/TASLP.2016.2583065
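
    The block-wise combination idea can be illustrated with the toy NumPy sketch below (a deliberate simplification: plain LMS component filters in a system-identification setting, without the secondary-path/filtered-x processing or the proportionate updates used in the paper; step sizes are arbitrary):

```python
import numpy as np

def block_combination(x, d, L=64, B=4, mu1=0.01, mu2=0.001, mu_a=1.0):
    """Toy sketch: two LMS filters of length L identify the same plant from input x and
    desired signal d, and are combined block-by-block with one learned mixing parameter
    per block."""
    w1, w2 = np.zeros(L), np.zeros(L)         # fast and slow component filters
    w_comb = np.zeros(L)
    a = np.zeros(B)                           # one mixing parameter a_b per block
    blocks = np.array_split(np.arange(L), B)  # nonoverlapping blocks of coefficients
    for n in range(L, len(x)):
        u = x[n - L:n][::-1]                  # regressor, most recent sample first
        e1, e2 = d[n] - w1 @ u, d[n] - w2 @ u
        w1 += mu1 * e1 * u                    # LMS updates of the component filters
        w2 += mu2 * e2 * u
        lam = 1.0 / (1.0 + np.exp(-a))        # lambda_b = sigmoid(a_b) in [0, 1]
        for b, idx in enumerate(blocks):
            w_comb[idx] = lam[b] * w1[idx] + (1.0 - lam[b]) * w2[idx]
        e = d[n] - w_comb @ u                 # error of the combined filter
        for b, idx in enumerate(blocks):      # stochastic-gradient update of each a_b
            a[b] += mu_a * e * ((w1[idx] - w2[idx]) @ u[idx]) * lam[b] * (1.0 - lam[b])
        a = np.clip(a, -4.0, 4.0)             # keep the mixing parameters in a usable range
    return w_comb
```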

    Robust speaker recognition using both vocal source and vocal tract features estimated from noisy input utterances.

    Get PDF
    Wang, Ning. Thesis (M.Phil.)--Chinese University of Hong Kong, 2007. Includes bibliographical references (leaves 106-115). Abstracts in English and Chinese.
    Contents:
    Chapter 1 Introduction: Introduction to Speech and Speaker Recognition; Difficulties and Challenges of Speaker Authentication; Objectives and Thesis Outline
    Chapter 2 Speaker Recognition System: Baseline Speaker Recognition System Overview (Feature Extraction; Pattern Generation and Classification); Performance Evaluation Metric for Different Speaker Recognition Tasks; Robustness of Speaker Recognition System (Speech Corpus: CU2C; Noise Database: NOISEX-92; Mismatched Training and Testing Conditions); Summary
    Chapter 3 Speaker Recognition System using both Vocal Tract and Vocal Source Features: Speech Production Mechanism (Speech Production: An Overview; Acoustic Properties of Human Speech); Source-filter Model and Linear Predictive Analysis (Source-filter Speech Model; Linear Predictive Analysis for Speech Signal); Vocal Tract Features; Vocal Source Features (Source Related Features: An Overview; Source Related Features: Technical Viewpoints); Effects of Noises on Speech Properties; Summary
    Chapter 4 Estimation of Robust Acoustic Features for Speaker Discrimination: Robust Speech Techniques (Noise Resilience; Speech Enhancement); Spectral Subtractive-Type Preprocessing (Noise Estimation; Spectral Subtraction Algorithm); LP Analysis of Noisy Speech (LP Inverse Filtering: Whitening Process; Magnitude Response of All-pole Filter in Noisy Condition; Noise Spectral Reshaping); Distinctive Vocal Tract and Vocal Source Feature Extraction (Vocal Tract Feature Extraction; Source Feature Generation Procedure; Subband-specific Parameterization Method); Summary
    Chapter 5 Speaker Recognition Tasks & Performance Evaluation: Speaker Recognition Experimental Setup (Task Description; Baseline Experiments; Identification and Verification Results); Speaker Recognition using Source-tract Features (Source Feature Selection; Source-tract Feature Fusion; Identification and Verification Results); Performance Analysis
    Chapter 6 Conclusion: Discussion and Conclusion; Suggestion of Future Work
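
    The two feature streams of Chapters 3 and 4 rest on a source-filter split of each speech frame; the NumPy sketch below illustrates it (assumptions made here, not the thesis code: autocorrelation-method LPC via the normal equations, with framing, windowing and the subband-specific parameterization omitted):

```python
import numpy as np

def lpc(frame, order=12):
    """LPC coefficients by the autocorrelation method (normal equations; Levinson-Durbin
    omitted for brevity). The all-pole model describes the vocal tract."""
    r = np.correlate(frame, frame, mode='full')[len(frame) - 1:len(frame) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R + 1e-9 * np.eye(order), r[1:order + 1])
    return np.concatenate(([1.0], -a))          # A(z) = 1 - sum_k a_k z^{-k}

def lp_residual(frame, order=12):
    """Whiten the frame with the LP inverse filter A(z); the residual approximates the
    vocal-source excitation, from which source features are derived."""
    A = lpc(frame, order)
    return np.convolve(frame, A)[:len(frame)]
```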

    Sparsity Based Formulations For Dereverberation

    Get PDF
    Thesis (M.Sc.) -- İstanbul Technical University, Institute of Science and Technology, 2016. Acoustic signals recorded in settings such as concerts, conferences and meetings are affected by reverberation and noise caused by the recording environment. Estimating the clean source signal from the available observations is referred to as the dereverberation problem. The reverberation effects evident in such recordings can be modelled in the time domain by a filter, called the room impulse response. When the room impulse responses are known, the problem is the non-blind dereverberation problem, which is the case considered throughout this thesis. The room impulse response depends strongly on the source and observation positions, so estimating it for every point in space is very difficult; in the experiments of this thesis, the room impulse responses are therefore either applied synthetically or obtained from the observations at recording time in the observation environment. Chapter 5 looks at this situation from a different angle, considering the case where the room impulse response is only partially known and a single filter can be defined for the observation environment. In this thesis, the non-blind dereverberation problem is posed using convex penalty functions and solved by a convex minimization procedure with iterative methods. Throughout the thesis, the sparse nature of the time-frequency spectrum is exploited; the Short-Time Fourier Transform is used to transform the time-domain signal into a time-frequency spectrum.
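
    One way to write the non-blind formulation described above is sketched below (notation introduced here, not taken verbatim from the thesis): with known room impulse response h, observation y = h * s + noise, and STFT analysis operator \Phi, the source estimate solves a convex program that promotes time-frequency sparsity and is handled by iterative (proximal/splitting) methods.

```latex
\hat{s} \;=\; \arg\min_{s} \; \bigl\| \Phi s \bigr\|_1
\quad \text{subject to} \quad
\bigl\| y - h \ast s \bigr\|_2 \le \epsilon
% with the tolerance \epsilon chosen according to the noise level.
```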

    Theory and Design of Spatial Active Noise Control Systems

    No full text
    The concept of spatial active noise control (ANC) is to use a number of loudspeakers to generate anti-noise sound waves that cancel the undesired acoustic noise over a spatial region. The acoustic noise hazards that exist in a variety of situations provide many potential applications for spatial ANC. However, using existing ANC techniques, it is difficult to achieve satisfying noise reduction over a spatial area, especially with a practical hardware setup. Therefore, this thesis explores various aspects of spatial ANC and seeks to develop algorithms and techniques that improve the performance and feasibility of spatial ANC in real-life applications. We use the spherical harmonic analysis technique as the basis for our research in this work. This technique provides an accurate representation of the spatial noise field and enables in-depth analysis of its characteristics. Incorporating this technique into the design of spatial ANC systems, we develop a series of algorithms and methods that optimize spatial ANC systems, both improving noise reduction performance and reducing system complexity. The main contributions of this work are: (i) the design of compact planar microphone array structures capable of recording 3D spatial sound fields, so that the noise field can be monitored with minimum physical intrusion into the quiet zone; (ii) the derivation of a Direct-to-Reverberant Energy Ratio (DRR) estimation algorithm which can be used to evaluate the reverberant characteristics of a noisy environment; (iii) several methods to estimate and optimize the spatial noise reduction of an ANC system, including a new metric for measuring the spatial noise energy level; and (iv) the design of an adaptive spatial ANC algorithm incorporating the spherical harmonic analysis technique. Together, these contributions enable the design of compact, high-performing spatial ANC systems for various applications.
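
    For reference, the spherical harmonic representation on which this analysis rests is the standard interior expansion of a sound field over a source-free region (a textbook background formula, not a result specific to this thesis):

```latex
P(r, \theta, \phi, k) \;=\; \sum_{n=0}^{\infty} \sum_{m=-n}^{n} \alpha_{nm}(k)\, j_n(kr)\, Y_{nm}(\theta, \phi)
% k: wavenumber; j_n: spherical Bessel function of the first kind; Y_{nm}: spherical
% harmonic of order n and degree m; \alpha_{nm}(k): coefficients characterizing the
% noise field to be controlled.
```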