
    An investigation of the utility of monaural sound source separation via nonnegative matrix factorization applied to acoustic echo and reverberation mitigation for hands-free telephony

    In this thesis we investigate the applicability and utility of Monaural Sound Source Separation (MSSS) via Nonnegative Matrix Factorization (NMF) for various problems related to audio for hands-free telephony. We first investigate MSSS via NMF as an alternative acoustic echo reduction approach to existing approaches such as Acoustic Echo Cancellation (AEC). To this end, we present the single-channel acoustic echo problem as an MSSS problem, in which the objective is to extract the user's signal from a mixture also containing acoustic echo and noise. To perform separation, NMF is used to decompose the near-end microphone signal onto the union of two nonnegative bases in the magnitude Short-Time Fourier Transform (STFT) domain. One of these bases is for the spectral energy of the acoustic echo signal, and is formed from the incoming far-end user's speech, while the other basis is for the spectral energy of the near-end speaker, and is trained with speech data a priori. In comparison to AEC, the speaker extraction approach obviates Double-Talk Detection (DTD), and is demonstrated to attain its maximal echo mitigation performance immediately upon initiation and to maintain that performance during and after room changes, at a similar computational cost. Speaker extraction is also shown to introduce distortion of the near-end speech signal during double-talk, which is quantified by means of a speech distortion measure and compared to that of AEC. Subsequently, we address DTD for block-based AEC algorithms. We propose a novel block-based DTD algorithm that uses the available signals and the echo signal estimate produced by NMF-based speaker extraction to compute a suitably normalized correlation-based decision variable, which is compared to a fixed threshold to decide on double-talk.
Using a standard evaluation technique, the proposed algorithm is shown to have detection performance comparable to an existing conventional block-based DTD algorithm. It is also demonstrated to inherit the room-change insensitivity of speaker extraction, generating minimal false double-talk indications upon initiation and in response to room changes, in contrast to the existing conventional DTD. We also show that this property allows its paired AEC to converge at a rate close to the optimum. Another focus of this thesis is the problem of inverting a single measurement of a non-minimum phase Room Impulse Response (RIR). We describe the process by which perceptually detrimental all-pass phase distortion arises in reverberant speech filtered by the inverse of the minimum phase component of the RIR; in short, such distortion arises from inverting the magnitude response of the high-Q maximum phase zeros of the RIR. We then propose two novel partial inversion schemes that precisely mitigate this distortion. One of these schemes employs NMF-based MSSS to separate the all-pass phase distortion from the target speech in the magnitude STFT domain, while the other approach modifies the inverse minimum phase filter such that the magnitude response of the maximum phase zeros of the RIR is not fully compensated. Subjective listening tests reveal that the proposed schemes generally produce better quality output speech than a comparable inversion technique.
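As a rough illustration of the decomposition described above, the sketch below factors a toy magnitude spectrogram onto the union of an echo basis and a near-end basis and extracts the near-end component with a Wiener-style mask. The basis sizes, the Euclidean multiplicative update, and the masking step are illustrative assumptions, not the exact configuration used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy magnitude-spectrogram dimensions (hypothetical sizes).
F, T, K = 64, 100, 8                     # freq bins, frames, atoms per basis

# W_echo would be built from the far-end speech; W_near trained a priori.
# Both are random nonnegative stand-ins here.
W_echo = np.abs(rng.standard_normal((F, K)))
W_near = np.abs(rng.standard_normal((F, K)))
W = np.hstack([W_echo, W_near])          # union of the two bases

V = np.abs(rng.standard_normal((F, T)))  # near-end microphone magnitude STFT

# With W fixed, estimate activations H by multiplicative updates
# minimizing the Euclidean divergence ||V - W H||_F^2.
H = np.abs(rng.standard_normal((2 * K, T)))
for _ in range(100):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)

# Wiener-style mask keeps the energy explained by the near-end basis.
V_echo = W_echo @ H[:K, :]
V_near = W_near @ H[K:, :]
mask = V_near / (V_near + V_echo + 1e-12)
V_hat = mask * V                         # extracted near-end magnitude
```

In the thesis's setting the extracted magnitude would be recombined with the microphone phase and inverted back to the time domain; the mask here simply shows how the two bases partition the mixture energy.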

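The block-based double-talk decision described above might be sketched as a normalized correlation between the microphone block and the echo estimate, compared against a fixed threshold. The specific statistic and threshold value here are illustrative assumptions, not the thesis's exact algorithm.

```python
import numpy as np

def doubletalk_decision(mic_block, echo_est_block, threshold=0.85):
    """Declare double-talk when the normalized correlation between the
    microphone block and the echo estimate falls below a fixed threshold.
    The statistic and the threshold value are illustrative stand-ins."""
    num = np.dot(mic_block, echo_est_block)
    den = np.linalg.norm(mic_block) * np.linalg.norm(echo_est_block) + 1e-12
    xi = num / den                     # normalized correlation in [-1, 1]
    return xi < threshold              # True -> double-talk declared

rng = np.random.default_rng(1)
echo = rng.standard_normal(256)        # echo estimate for one block
near = rng.standard_normal(256)        # independent near-end speech

# Echo-only block: high correlation, so no double-talk is declared.
print(doubletalk_decision(echo + 0.01 * rng.standard_normal(256), echo))

# Block dominated by near-end speech: low correlation, double-talk.
print(doubletalk_decision(echo + 3.0 * near, echo))
```

When double-talk is declared, the paired AEC would freeze its filter adaptation for that block.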

    Linear and nonlinear room compensation of audio rendering systems

    [EN] Common audio systems are designed to create realistic and immersive scenarios that allow the user to experience a particular acoustic sensation independently of the room in which the sound is perceived. However, acoustic devices and multichannel rendering systems operating inside a room can impair the global audio effect and thus the 3D spatial sound. In order to preserve the spatial sound characteristics of multichannel rendering techniques, this dissertation presents adaptive filtering schemes that compensate for these electroacoustic effects and achieve the immersive sensation of the desired acoustic system. Adaptive filtering offers a solution to the room equalization problem that is doubly interesting. First, it iteratively solves the room inversion problem, which can be computationally expensive to solve with direct methods. Second, adaptive filters can track time-varying room conditions. In this regard, adaptive equalization (AE) filters try to cancel the echoes due to the room. In this work, we consider this problem and propose effective and robust linear schemes that solve it with adaptive filters. To do this, different adaptive filtering schemes are introduced in the AE context, based on three strategies previously introduced in the literature: the convex combination of filters, the biasing of the filter weights, and block-based filtering. More specifically, motivated by the sparse nature of the acoustic impulse response and its corresponding optimal inverse filter, we introduce several specific adaptive equalization algorithms.
In addition, since immersive audio systems usually require multiple transducers, the multichannel adaptive equalization problem should also be taken into account when new single-channel approaches are presented, so that they can be straightforwardly extended to the multichannel case. Moreover, when dealing with audio devices, the nonlinearities of the system must be considered in order to properly equalize the electroacoustic chain. For that purpose, we propose a novel nonlinear filtered-x approach that compensates both room reverberation and nonlinear distortion with memory caused by the amplifier and loudspeaker. Finally, it is important to validate the proposed algorithms in a real-time implementation; initial experimental results demonstrate that an adaptive equalizer can be used to compensate for room distortions. Fuster Criado, L. (2015). Linear and nonlinear room compensation of audio rendering systems [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/5945
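A minimal sketch of adaptive room equalization in the linear special case of the filtered-x framework mentioned above: an equalizer placed before a toy minimum-phase room response is adapted with filtered-x LMS so that the listener receives the undistorted signal. The room response, filter lengths, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical short, minimum-phase room impulse response ("secondary path").
room = np.array([1.0, 0.5, 0.25, 0.125])
L = 16                                          # equalizer length
w = np.zeros(L)                                 # adaptive equalizer weights
mu = 0.01

x = rng.standard_normal(20000)                  # source audio (white here)
d = x                                           # target: the undistorted signal

# Filtered reference: the input passed through (a model of) the room.
x_f = np.convolve(x, room)[: len(x)]

fbuf = np.zeros(L)                              # buffer of filtered input
err = np.zeros(len(x))
for n in range(len(x)):
    fbuf = np.roll(fbuf, 1)
    fbuf[0] = x_f[n]
    # Listener hears the equalizer output convolved with the room;
    # since both filters are linear, this equals w applied to x_f.
    y = w @ fbuf
    err[n] = d[n] - y
    w += mu * err[n] * fbuf                     # filtered-x LMS update

# The error should shrink as w * room approaches a unit impulse.
print(np.mean(err[:1000] ** 2), np.mean(err[-1000:] ** 2))
```

A non-minimum-phase room would not admit a stable causal inverse without delay, which is one motivation for the partial-inversion and nonlinear extensions discussed in these theses.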

    Sparse Nonlinear MIMO Filtering and Identification

    In this chapter, system identification algorithms for sparse nonlinear multi-input multi-output (MIMO) systems are developed. These algorithms are potentially useful in a variety of application areas, including digital transmission systems incorporating power amplifiers along with multiple antennas, cognitive processing, adaptive control of nonlinear multivariable systems, and multivariable biological systems. Sparsity is a key constraint imposed on the model. The presence of sparsity is often dictated by physical considerations, as in wireless fading channel estimation. In other cases it appears as a pragmatic modelling approach that seeks to cope with the curse of dimensionality, which is particularly acute in nonlinear systems such as Volterra-type series. Three identification approaches are discussed: conventional identification based on both input and output samples, semi-blind identification placing emphasis on minimal input resources, and blind identification whereby only output samples are available plus a priori information on input characteristics. Based on this taxonomy, a variety of algorithms, existing and new, are studied and evaluated by simulation.
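As an illustration of sparsity-constrained identification in the Volterra setting, the sketch below identifies a sparse second-order Volterra kernel with a zero-attracting (l1-regularized) LMS. The feature construction, step sizes, and the choice of ZA-LMS are assumptions for illustration, not the chapter's specific algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 8                                   # memory length (illustrative)

def volterra_features(xbuf):
    """Linear taps plus upper-triangular second-order products."""
    quad = np.outer(xbuf, xbuf)[np.triu_indices(M)]
    return np.concatenate([xbuf, quad])

D = M + M * (M + 1) // 2                # total feature dimension

# Ground-truth sparse kernel: only a few active coefficients.
h_true = np.zeros(D)
h_true[[0, 2, M + 1, M + 10]] = [1.0, -0.7, 0.5, 0.3]

w = np.zeros(D)
mu, rho = 0.005, 1e-4                   # step size, zero-attractor gain
xbuf = np.zeros(M)
for n in range(30000):
    xbuf = np.roll(xbuf, 1)
    xbuf[0] = rng.standard_normal()
    phi = volterra_features(xbuf)
    d = h_true @ phi                    # noiseless system output
    e = d - w @ phi
    # ZA-LMS: standard LMS step plus an l1 shrinkage term that
    # attracts inactive coefficients toward zero.
    w += mu * e * phi - rho * np.sign(w)

print(np.linalg.norm(w - h_true))       # small after adaptation
```

The quadratic feature expansion makes the curse of dimensionality concrete: even with memory 8, the second-order kernel already contributes 36 coefficients, which is why sparsity-aware updates pay off.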

    System approach to robust acoustic echo cancellation through semi-blind source separation based on independent component analysis

    We live in a dynamic world full of noise and interference. The conventional acoustic echo cancellation (AEC) framework based on the least mean square (LMS) algorithm by itself lacks the ability to handle many secondary signals that interfere with the adaptive filtering process, e.g., local speech and background noise. In this dissertation, we build a foundation for what we refer to as the system approach to signal enhancement as we focus on the AEC problem. We first propose the residual echo enhancement (REE) technique, which utilizes the error recovery nonlinearity (ERN) to "enhance" the filter estimation error prior to the filter adaptation. The single-channel AEC problem can be viewed as a special case of semi-blind source separation (SBSS) where one of the source signals is partially known, i.e., the far-end microphone signal that generates the near-end acoustic echo. SBSS optimized via independent component analysis (ICA) leads to the system combination of the LMS algorithm with the ERN, which allows for continuous and stable adaptation even during double-talk. Second, we extend the system perspective to the decorrelation problem for AEC, where we show that the REE procedure can be applied effectively in a multi-channel AEC (MCAEC) setting to indirectly assist the recovery of AEC performance lost to inter-channel correlation, known generally as the "non-uniqueness" problem. We develop a novel, computationally efficient technique of frequency-domain resampling (FDR) that effectively alleviates the non-uniqueness problem directly while introducing minimal distortion to signal quality and statistics. We also apply the system approach to the multi-delay filter (MDF), which suffers from the inter-block correlation problem. Finally, we generalize the MCAEC problem in the SBSS framework and discuss many issues related to the implementation of an SBSS system.
We propose a constrained batch-online implementation of SBSS that stabilizes the convergence behavior even in the worst-case scenario of a single far-end talker along with the non-uniqueness condition on the far-end mixing system. The proposed techniques are developed from a pragmatic standpoint, motivated by real-world problems in acoustic and audio signal processing. Generalizing the orthogonality principle to the system level of an AEC problem allows us to relate AEC to source separation, which seeks to maximize the independence, and hence implicitly the orthogonality, not only between the error signal and the far-end signal but among all signals involved. The system approach, of which the REE paradigm is just one realization, makes it possible to encompass many traditional signal enhancement techniques in an analytically consistent yet practically effective manner for solving the enhancement problem in a very noisy and disruptive acoustic mixing environment.
Ph.D. Committee Chair: Biing-Hwang Juang; Committee Members: Brani Vidakovic, David V. Anderson, Jeff S. Shamma, Xiaoli M
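The idea of "enhancing" the estimation error before adaptation can be sketched with an LMS echo canceller whose update passes the error through a robust nonlinearity. The soft limiter scaled by a running error-power estimate used here is an illustrative stand-in for the ERN, not the dissertation's exact choice.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 32
h_room = rng.standard_normal(N) * 0.5 ** np.arange(N)   # toy echo path
w = np.zeros(N)                   # adaptive echo-path estimate
mu = 0.1
sigma2 = 1.0                      # running error-power estimate

x = rng.standard_normal(50000)    # far-end signal
xbuf = np.zeros(N)
for n in range(len(x)):
    xbuf = np.roll(xbuf, 1)
    xbuf[0] = x[n]
    d = h_room @ xbuf             # microphone: echo only in this sketch
    e = d - w @ xbuf              # residual echo (estimation error)
    sigma2 = 0.99 * sigma2 + 0.01 * e * e
    # Error nonlinearity: limits the influence of large errors
    # (e.g. double-talk bursts) on the adaptation, while behaving
    # like the plain error when the error is small.
    phi = np.sqrt(sigma2) * np.tanh(e / np.sqrt(sigma2))
    w += mu * phi * xbuf / (xbuf @ xbuf + 1e-6)   # NLMS-style step

print(np.linalg.norm(w - h_room))  # small -> echo path identified
```

Near convergence the nonlinearity is approximately the identity, so the update reduces to standard NLMS; its effect is felt mainly when large disturbances would otherwise corrupt the adaptation.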