67 research outputs found

    Adaptive Filtered-x Algorithms for Room Equalization Based on Block-Based Combination Schemes

    Full text link
    [EN] Room equalization has become essential for sound reproduction systems to provide the listener with the desired acoustical sensation. Recently, adaptive filters have been proposed as an effective tool at the core of these systems. In this context, this paper introduces different novel schemes based on the combination-of-adaptive-filters idea: a versatile and flexible approach that makes it possible to obtain adaptive schemes combining the capabilities of several independent adaptive filters. In this way, we have investigated the advantages of a scheme called combination of block-based adaptive filters, which allows a blockwise combination by splitting the adaptive filters into nonoverlapping blocks. This idea was previously applied to the plant identification problem, but it has to be properly modified to obtain a suitable behavior in the equalization application. Moreover, we propose a scheme that aims to further improve the equalization performance using a priori knowledge of the energy distribution of the optimal inverse filter, where the block filters are chosen to fit the coefficient energy distribution. Furthermore, the biased block-based filter is also introduced as a particular case of the combination scheme, especially suited for low signal-to-noise ratios (SNRs) or sparse scenarios. Although the combined schemes can be employed with any kind of adaptive filter, we employ the filtered-x improved proportionate normalized least mean square algorithm as the basis of the proposed algorithms, which allows us to introduce a novel combination scheme based on partitioned blocks where different blocks of the adaptive filter use different parameter settings. Several experiments are included to evaluate the proposed algorithms in terms of convergence speed and steady-state behavior for different degrees of sparseness and SNRs.

    The work of L. A. Azpicueta-Ruiz was supported in part by the Comunidad de Madrid through CASI-CAM-CM under Grant S2013/ICE-2845, in part by the Spanish Ministry of Economy and Competitiveness through DAMA under Grant TIN2015-70308-REDT and Grant TEC2014-52289-R, and in part by the European Union. The work of L. Fuster, M. Ferrer, and M. de Diego was supported in part by the EU together with the Spanish Government under Grant TEC2015-67387-C4-1-R (MINECO/FEDER), and in part by the Generalitat Valenciana under Grant PROMETEOII/2014/003. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Simon Doclo.

    Fuster Criado, L.; Diego Antón, M. D.; Azpicueta-Ruiz, L. A.; Ferrer Contreras, M. (2016). Adaptive Filtered-x Algorithms for Room Equalization Based on Block-Based Combination Schemes. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 24(10), 1732-1745. https://doi.org/10.1109/TASLP.2016.2583065
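    To make the combination idea above concrete, the following is a minimal single-channel sketch, not the authors' implementation: two NLMS component filters with different step sizes are combined block by block through per-block mixing parameters adapted via a sigmoid, in the spirit of the block-based combination scheme described in the abstract. The function name `combined_block_nlms`, the step sizes, and the block partitioning are illustrative assumptions; the paper's actual algorithms additionally use the filtered-x structure and the improved proportionate NLMS update.

```python
# Minimal sketch (assumed, not the authors' code): block-wise convex combination
# of two NLMS filters with different step sizes. Each block b of the combined
# filter mixes the corresponding coefficient blocks of the two component filters
# through its own mixing parameter lambda_b = sigmoid(a_b).
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def combined_block_nlms(x, d, L=64, n_blocks=4, mu=(0.5, 0.05), mu_a=10.0, eps=1e-6):
    w1 = np.zeros(L)          # fast component filter
    w2 = np.zeros(L)          # slow component filter
    a = np.zeros(n_blocks)    # auxiliary variables, one per block
    blocks = np.array_split(np.arange(L), n_blocks)
    y = np.zeros(len(x))
    for n in range(L, len(x)):
        u = x[n-L+1:n+1][::-1]            # regressor (most recent sample first)
        y1, y2 = w1 @ u, w2 @ u
        e1, e2 = d[n] - y1, d[n] - y2
        # block-wise convex combination of the two component outputs
        lam = sigmoid(a)
        yb = np.array([lam[b] * (w1[idx] @ u[idx]) + (1 - lam[b]) * (w2[idx] @ u[idx])
                       for b, idx in enumerate(blocks)])
        y[n] = yb.sum()
        e = d[n] - y[n]
        # NLMS updates of the component filters with their own errors
        norm = u @ u + eps
        w1 += mu[0] * e1 * u / norm
        w2 += mu[1] * e2 * u / norm
        # stochastic-gradient update of each block's mixing parameter
        for b, idx in enumerate(blocks):
            diff = (w1[idx] - w2[idx]) @ u[idx]
            a[b] += mu_a * e * diff * lam[b] * (1 - lam[b])
            a[b] = np.clip(a[b], -4, 4)   # keep lambda away from 0/1 saturation
    return y, w1, w2, sigmoid(a)
```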

    Sparseness-controlled adaptive algorithms for supervised and unsupervised system identification

    No full text
    In single-channel hands-free telephony, the acoustic coupling between the loudspeaker and the microphone can be strong, generating echoes that degrade the user experience. Effective acoustic echo cancellation (AEC) is therefore necessary to maintain a stable system and hence improve the perceived voice quality of a call. Traditionally, adaptive filters have been deployed in acoustic echo cancellers to estimate the acoustic impulse responses (AIRs) using adaptive algorithms. The performance of a range of well-known algorithms is studied in the context of both AEC and network echo cancellation (NEC), with insights into their tracking behaviour under both time-invariant and time-varying system conditions.

    In the context of AEC, the level of sparseness in AIRs can vary greatly in a mobile environment, and when the response is strongly sparse, convergence of conventional approaches is poor. Drawing on techniques originally developed for NEC, a class of time-domain and frequency-domain AEC algorithms is proposed that not only works well in both sparse and dispersive circumstances, but also adapts dynamically to the level of sparseness using a new sparseness-controlled approach. Since, as will be shown, the early part of the acoustic echo path is sparse while the late reverberant part is dispersive, a novel adaptive filter structure consisting of two time-domain partition blocks is proposed, such that a different adaptive algorithm can be used for each part. By properly controlling the mixing parameter for the partitioned blocks separately, with the block lengths controlled adaptively, the proposed partitioned-block algorithm works well in both sparse and dispersive time-varying circumstances.

    A new insight into the tracking performance of the improved proportionate NLMS (IPNLMS) algorithm is presented by deriving an expression for its mean-square error. Using this framework for both sparse and dispersive time-varying echo paths, the analytic results are validated in practical AEC simulations. Finally, time-domain second-order-statistics-based blind SIMO identification algorithms that exploit the cross-relation method are investigated, and a technique with proportionate step-size control for both sparse and dispersive system identification is also developed.
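    As a rough illustration of the sparseness-controlled idea, the sketch below maps Hoyer's sparseness measure of the current filter estimate onto the IPNLMS proportionality parameter, so the update behaves more like NLMS for dispersive responses and more like PNLMS for sparse ones. The mapping `alpha = 2*xi - 1` and all parameter values are assumptions for illustration, not the exact rules derived in the thesis.

```python
# Minimal sketch (assumed): IPNLMS whose proportionality parameter alpha is
# driven by a sparseness measure of the current filter estimate.
import numpy as np

def sparseness(w, eps=1e-10):
    L = len(w)
    return (L / (L - np.sqrt(L))) * (1.0 - np.linalg.norm(w, 1) /
                                     (np.sqrt(L) * np.linalg.norm(w, 2) + eps))

def sc_ipnlms(x, d, L=512, mu=0.3, delta=1e-4, eps=1e-8):
    w = np.zeros(L)
    e = np.zeros(len(x))
    for n in range(L, len(x)):
        u = x[n-L+1:n+1][::-1]                          # regressor
        e[n] = d[n] - w @ u
        xi = sparseness(w) if np.any(w) else 0.0        # sparseness in [0, 1]
        alpha = 2.0 * xi - 1.0                          # assumed mapping to [-1, 1]
        # IPNLMS step-size distribution over the taps
        k = (1 - alpha) / (2 * L) + (1 + alpha) * np.abs(w) / (2 * np.linalg.norm(w, 1) + eps)
        w += mu * e[n] * k * u / (u @ (k * u) + delta)
    return w, e
```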

    Linear and nonlinear room compensation of audio rendering systems

    Full text link
    [EN] Common audio systems are designed with the intent of creating real and immersive scenarios that allow the user to experience a particular acoustic sensation that does not depend on the room in which the sound is perceived. However, acoustic devices and multichannel rendering systems working inside a room can impair the global audio effect and thus the 3D spatial sound. In order to preserve the spatial sound characteristics of multichannel rendering techniques, adaptive filtering schemes are presented in this dissertation to compensate for these electroacoustic effects and to achieve the immersive sensation of the desired acoustic system. Adaptive filtering offers a solution to the room equalization problem that is doubly interesting. First, it iteratively solves the room inversion problem, which can be computationally complex to obtain with direct methods. Second, the use of adaptive filters makes it possible to track time-varying room conditions. In this regard, adaptive equalization (AE) filters try to cancel the echoes due to the room effects. In this work, we consider this problem and propose effective and robust linear schemes to solve it using adaptive filters. To do this, different adaptive filtering schemes are introduced in the AE context. These filtering schemes are based on three strategies previously introduced in the literature: the convex combination of filters, the biasing of the filter weights, and block-based filtering. More specifically, and motivated by the sparse nature of the acoustic impulse response and its corresponding optimal inverse filter, we introduce different adaptive equalization algorithms. In addition, since immersive audio systems usually require the use of multiple transducers, the multichannel adaptive equalization problem should also be taken into account when new single-channel approaches are presented, in the sense that they can be straightforwardly extended to the multichannel case. On the other hand, when dealing with audio devices, consideration must be given to the nonlinearities of the system in order to properly equalize the electroacoustic system. For that purpose, we propose a novel nonlinear filtered-x approach to compensate for both room reverberation and nonlinear distortion with memory caused by the amplifier and loudspeaker devices. Finally, it is important to validate the proposed algorithms in a real-time implementation; some initial research results demonstrate that an adaptive equalizer can be used to compensate for room distortions.

    Fuster Criado, L. (2015). Linear and nonlinear room compensation of audio rendering systems [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/5945
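    One of the three strategies mentioned above, biasing the filter weights, can be sketched as a scalar shrinkage of an NLMS equalizer's output. This is an illustrative sketch under assumed parameter choices, not the implementation in the dissertation: the shrinkage factor `beta` is adapted to minimize the overall error and pulls the noisy estimate towards zero at low SNR, trading a small bias for a reduction in variance.

```python
# Minimal sketch (assumed, illustrative): biasing the weights of an NLMS filter
# by a scalar shrinkage factor beta in [0, 1], adapted to minimize the error.
import numpy as np

def biased_nlms(x, d, L=256, mu=0.5, mu_b=0.01, delta=1e-6):
    w = np.zeros(L)
    beta = 0.0
    y = np.zeros(len(x))
    for n in range(L, len(x)):
        u = x[n-L+1:n+1][::-1]
        y_w = w @ u                  # unbiased filter output
        y[n] = beta * y_w            # biased (shrunk) output actually used
        e_w = d[n] - y_w             # error driving the NLMS update
        e_b = d[n] - y[n]            # error driving the shrinkage factor
        w += mu * e_w * u / (u @ u + delta)
        beta += mu_b * e_b * y_w     # LMS-type update of the scalar bias
        beta = np.clip(beta, 0.0, 1.0)
    return y, w, beta
```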

    System Identification with Applications in Speech Enhancement

    No full text
    With the increasing popularity of hands-free telephony on portable mobile devices and the rapid development of voice over internet protocol, identification of acoustic systems has become desirable for compensating for distortions introduced to speech signals during transmission, and hence for enhancing speech quality. The objective of this research is to develop system identification algorithms for speech enhancement applications, including network echo cancellation and speech dereverberation.

    A supervised adaptive algorithm for sparse system identification is developed for network echo cancellation. Based on the framework of the selective-tap updating scheme for the normalized least mean squares algorithm, the MMax and sparse partial-update tap-selection strategies are exploited in the frequency domain to achieve fast convergence with low computational complexity. By demonstrating how the sparseness of the network impulse response varies in the transformed domain, the multidelay filtering structure is incorporated to reduce the algorithmic delay.

    Blind identification of SIMO acoustic systems for speech dereverberation in the presence of common zeros is then investigated. First, the problem of common zeros is defined and extended to include the presence of near-common zeros. Two clustering algorithms are developed to quantify the number of these zeros so as to facilitate the study of their effect on blind system identification and speech dereverberation. To mitigate this effect, two algorithms are developed: a two-stage algorithm based on channel decomposition identifies common and non-common zeros sequentially, and a forced spectral diversity approach combines spectral shaping filters and channel undermodelling to derive a modified system with improved dereverberation performance. Additionally, a solution to the scale-factor ambiguity problem in subband-based blind system identification is developed, which motivates further research on subband-based dereverberation techniques. Comprehensive simulations and discussions demonstrate the effectiveness of the aforementioned algorithms. A discussion of possible directions for prospective research on system identification techniques concludes this thesis.
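    A minimal sketch of the MMax tap-selection strategy mentioned above, shown here in the time domain for simplicity (the thesis exploits it in the frequency domain with a multidelay structure): at each iteration only the M taps whose regressor entries have the largest magnitudes are updated. Parameter values are illustrative assumptions.

```python
# Minimal sketch (assumed): MMax tap selection applied to NLMS. Only the M taps
# with the largest-magnitude regressor entries are updated per iteration.
import numpy as np

def mmax_nlms(x, d, L=512, M=128, mu=0.5, delta=1e-6):
    w = np.zeros(L)
    e = np.zeros(len(x))
    for n in range(L, len(x)):
        u = x[n-L+1:n+1][::-1]
        e[n] = d[n] - w @ u
        sel = np.argpartition(np.abs(u), L - M)[L - M:]   # indices of the M largest |u|
        w[sel] += mu * e[n] * u[sel] / (u @ u + delta)
    return w, e
```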

    An investigation of the utility of monaural sound source separation via nonnegative matrix factorization applied to acoustic echo and reverberation mitigation for hands-free telephony

    Get PDF
    In this thesis we investigate the applicability and utility of Monaural Sound Source Separation (MSSS) via Nonnegative Matrix Factorization (NMF) for various problems related to audio for hands-free telephony. We first investigate MSSS via NMF as an alternative acoustic echo reduction approach to existing approaches such as Acoustic Echo Cancellation (AEC). To this end, we present the single-channel acoustic echo problem as an MSSS problem, in which the objective is to extract the user's signal from a mixture also containing acoustic echo and noise. To perform separation, NMF is used to decompose the near-end microphone signal onto the union of two nonnegative bases in the magnitude Short Time Fourier Transform domain. One of these bases is for the spectral energy of the acoustic echo signal, and is formed from the incoming far-end user's speech, while the other basis is for the spectral energy of the near-end speaker, and is trained with speech data a priori. In comparison to AEC, the speaker extraction approach obviates Double-Talk Detection (DTD), and is demonstrated to attain its maximal echo mitigation performance immediately upon initiation and to maintain that performance during and after room changes, for similar computational requirements. Speaker extraction is also shown to introduce distortion of the near-end speech signal during double-talk, which is quantified by means of a speech distortion measure and compared to that of AEC.

    Subsequently, we address DTD for block-based AEC algorithms. We propose a novel block-based DTD algorithm that uses the available signals and the estimate of the echo signal produced by NMF-based speaker extraction to compute a suitably normalized correlation-based decision variable, which is compared to a fixed threshold to decide on double-talk. Using a standard evaluation technique, the proposed algorithm is shown to have detection performance comparable to an existing conventional block-based DTD algorithm. It is also demonstrated to inherit the room-change insensitivity of speaker extraction, with the proposed DTD algorithm generating minimal false double-talk indications upon initiation and in response to room changes in comparison to the existing conventional DTD. We also show that this property allows its paired AEC to converge at a rate close to the optimum.

    Another focus of this thesis is the problem of inverting a single measurement of a non-minimum phase Room Impulse Response (RIR). We describe the process by which perceptually detrimental all-pass phase distortion arises in reverberant speech filtered by the inverse of the minimum phase component of the RIR; in short, such distortion arises from inverting the magnitude response of the high-Q maximum phase zeros of the RIR. We then propose two novel partial inversion schemes that precisely mitigate this distortion. One of these schemes employs NMF-based MSSS to separate the all-pass phase distortion from the target speech in the magnitude STFT domain, while the other approach modifies the inverse minimum phase filter such that the magnitude response of the maximum phase zeros of the RIR is not fully compensated. Subjective listening tests reveal that the proposed schemes generally produce better quality output speech than a comparable inversion technique.
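    The NMF-based speaker extraction step can be sketched as follows, assuming fixed bases and KL-divergence multiplicative updates of the activations only; the function name, mask construction, and iteration count are assumptions for illustration rather than the thesis' exact formulation.

```python
# Minimal sketch (assumed): the magnitude STFT of the near-end microphone
# signal V is decomposed onto the union of two fixed nonnegative bases,
# W = [W_echo, W_near]; only the activations H are learned. The near-end
# speech is then recovered with a soft mask built from its part of the model.
import numpy as np

def extract_near_end(V, W_echo, W_near, n_iter=50, eps=1e-10):
    W = np.hstack([W_echo, W_near])            # (freq, K_echo + K_near)
    K_echo = W_echo.shape[1]
    H = np.random.rand(W.shape[1], V.shape[1]) + eps
    for _ in range(n_iter):                    # KL-divergence multiplicative updates
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.T @ np.ones_like(V) + eps)
    V_echo = W_echo @ H[:K_echo]
    V_near = W_near @ H[K_echo:]
    mask = V_near / (V_near + V_echo + eps)    # Wiener-like soft mask
    return mask * V                            # estimated near-end magnitude STFT
```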

    Convolutive Blind Source Separation Methods

    Get PDF
    In this chapter, we provide an overview of existing algorithms for blind source separation of convolutive audio mixtures. We provide a taxonomy within which many of the existing algorithms can be organized, and we present published results from those algorithms that have been applied to real-world audio separation tasks.

    Adaptive Algorithms for Intelligent Acoustic Interfaces

    Get PDF
    Modern speech communications are evolving in a new direction that involves users in a more perceptive way: the immersive experience, which may be considered the “last-mile” problem of telecommunications. One of the main features of immersive communications is distant talking, i.e. hands-free (in the broad sense) speech communication without body-worn or tethered microphones, which takes place in a multisource environment where interfering signals may degrade the communication quality and the intelligibility of the desired speech source. In order to preserve speech quality, intelligent acoustic interfaces may be used. An intelligent acoustic interface may comprise multiple microphones and loudspeakers, and its distinguishing feature is that it models the acoustic channel in order to adapt to user requirements and environmental conditions. This is why intelligent acoustic interfaces are based on adaptive filtering algorithms. Acoustic path modelling entails a set of problems that have to be taken into account when designing an adaptive filtering algorithm. Such problems may be generated by either a linear or a nonlinear process, and they can be tackled by linear or nonlinear adaptive algorithms, respectively. In this work we consider such modelling problems and propose novel, effective adaptive algorithms that allow acoustic interfaces to be robust against interfering signals, thus preserving the perceived quality of the desired speech signals.

    As regards linear adaptive algorithms, a class of adaptive filters based on the sparse nature of the acoustic impulse response has recently been proposed. We adopt this class of adaptive filters, named proportionate adaptive filters, and derive a general framework from which it is possible to derive any linear adaptive algorithm. Using this framework, we also propose some efficient proportionate adaptive algorithms expressly designed to tackle problems of a linear nature. On the other hand, in order to address problems deriving from a nonlinear process, we propose a novel filtering model that performs nonlinear transformations by means of functional links. Using this nonlinear model, we propose functional link adaptive filters, which provide an efficient solution to the modelling of a nonlinear acoustic channel. Finally, we introduce robust filtering architectures based on adaptive combinations of filters that allow acoustic interfaces to adapt more effectively to environmental conditions, thus providing a powerful means for immersive speech communications.
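    As a rough sketch of the functional-link idea described above (assumed and illustrative only, not the thesis' algorithms): each input sample is expanded through a trigonometric functional link before a linear filter, adapted here with plain NLMS, operates on the expanded buffer.

```python
# Minimal sketch (assumed): a trigonometric functional-link expansion of the
# input feeds a linear filter adapted with NLMS, giving a simple nonlinear
# adaptive model in the spirit of functional link adaptive filters.
import numpy as np

def functional_links(x_n, order=2):
    # expand one sample into [x, sin(pi x), cos(pi x), sin(2 pi x), cos(2 pi x), ...]
    feats = [x_n]
    for p in range(1, order + 1):
        feats += [np.sin(np.pi * p * x_n), np.cos(np.pi * p * x_n)]
    return np.array(feats)

def flaf(x, d, L=64, order=2, mu=0.3, delta=1e-6):
    D = 1 + 2 * order                      # expanded features per sample
    w = np.zeros(L * D)
    e = np.zeros(len(x))
    for n in range(L, len(x)):
        u = np.concatenate([functional_links(x[m], order)
                            for m in range(n, n - L, -1)])   # expanded buffer
        e[n] = d[n] - w @ u
        w += mu * e[n] * u / (u @ u + delta)
    return w, e
```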
