
    Blind MultiChannel Identification and Equalization for Dereverberation and Noise Reduction based on Convolutive Transfer Function

    This paper addresses the problems of blind channel identification and multichannel equalization for speech dereverberation and noise reduction. The time-domain cross-relation method is not suitable for blind room impulse response identification, due to the near-common zeros of the long impulse responses. We extend the cross-relation method to the short-time Fourier transform (STFT) domain, in which the time-domain impulse responses are approximately represented by convolutive transfer functions (CTFs) with far fewer coefficients. The CTFs suffer from common zeros caused by the oversampled STFT. We propose to identify the CTFs based on the STFT with oversampled signals and critically sampled CTFs, which is a good compromise between the frequency aliasing of the signals and the common-zeros problem of the CTFs. In addition, a normalization of the CTFs is proposed to remove the gain ambiguity across sub-bands. In the STFT domain, the identified CTFs are used for multichannel equalization, in which the sparsity of speech signals is exploited. We propose to perform inverse filtering by minimizing the $\ell_1$-norm of the source signal, with the relaxed $\ell_2$-norm fitting error between the microphone signals and the convolution of the estimated source signal with the CTFs used as a constraint. This method is advantageous in that the noise can be reduced by relaxing the $\ell_2$-norm to a tolerance corresponding to the noise power, and the tolerance can be set automatically. The experiments confirm the efficiency of the proposed method even under conditions with high reverberation levels and intense noise. Comment: 13 pages, 5 figures, 5 tables
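    The inverse-filtering step described above amounts to a constrained $\ell_1$ minimization (a basis-pursuit-denoising form). Below is a minimal sketch of that optimization for a single sub-band, assuming a generic CTF convolution matrix `A`, stacked microphone observations `b`, and a noise-derived tolerance `eps`; these names and shapes are illustrative placeholders, not the paper's code.

```python
import numpy as np
import cvxpy as cp

def l1_inverse_filtering(A, b, eps):
    """Recover sparse source STFT coefficients x from b ~= A @ x.

    A   : (n_obs_frames, n_src_frames) CTF convolution matrix for one sub-band
    b   : observed microphone STFT coefficients (stacked over channels)
    eps : l2 tolerance chosen from the estimated noise power
    """
    x = cp.Variable(A.shape[1], complex=True)
    objective = cp.Minimize(cp.norm(x, 1))        # promote spectral sparsity
    constraints = [cp.norm(A @ x - b, 2) <= eps]  # relaxed data-fitting term
    cp.Problem(objective, constraints).solve()
    return x.value
```

    In the paper the tolerance is tied to the estimated noise power per sub-band; here `eps` is simply passed in by the caller.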

    Multichannel Speech Separation and Enhancement Using the Convolutive Transfer Function

    This paper addresses the problem of speech separation and enhancement from multichannel convolutive and noisy mixtures, assuming known mixing filters. We propose to perform the speech separation and enhancement task in the short-time Fourier transform domain, using the convolutive transfer function (CTF) approximation. Compared to time-domain filters, the CTF has far fewer taps; consequently, it has fewer near-common zeros among channels and a lower computational complexity. The work proposes three speech-source recovery methods: i) a multichannel inverse filtering method in which the multiple-input/output inverse theorem (MINT) is exploited in the CTF domain; ii) for the multi-source case, a beamforming-like multichannel inverse filtering method applying single-source MINT and using power minimization, which is suitable whenever the source CTFs are not all known; and iii) a constrained Lasso method, where the sources are recovered by minimizing the $\ell_1$-norm to impose their spectral sparsity, with the constraint that the $\ell_2$-norm fitting cost, between the microphone signals and the mixing model involving the unknown source signals, is less than a tolerance. The noise can be reduced by setting the tolerance according to the noise power. Experiments under various acoustic conditions are carried out to evaluate the three proposed methods. The comparison between them, as well as with the baseline methods, is presented. Comment: Submitted to IEEE/ACM Transactions on Audio, Speech and Language Processing
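    Method i) inverts the known CTFs by solving a MINT system: the per-channel CTF convolution matrices are stacked and inverse filters are found whose combined output is a (delayed) unit impulse in the frame index. A rough least-squares sketch for one sub-band, with hypothetical array shapes and no claim to match the paper's exact formulation:

```python
import numpy as np

def ctf_mint_inverse(ctfs, inv_len, delay=0):
    """Compute CTF-domain MINT inverse filters via least squares.

    ctfs    : list of per-channel CTF coefficient vectors (one sub-band)
    inv_len : length of each inverse filter, in frames
    delay   : desired overall delay of the equalized system
    """
    out_len = len(ctfs[0]) + inv_len - 1
    blocks = []
    for h in ctfs:
        # Convolution (Toeplitz) matrix of this channel's CTF.
        H = np.zeros((out_len, inv_len), dtype=complex)
        for k in range(inv_len):
            H[k:k + len(h), k] = h
        blocks.append(H)
    H_all = np.hstack(blocks)          # (out_len, n_channels * inv_len)
    d = np.zeros(out_len, dtype=complex)
    d[delay] = 1.0                     # target: delayed unit impulse
    g, *_ = np.linalg.lstsq(H_all, d, rcond=None)
    return np.split(g, len(ctfs))      # one inverse filter per channel
```

    Exact inversion requires enough channels and long enough inverse filters so that the stacked matrix has full row rank (the MINT condition); otherwise the least-squares solution only approximates the delayed impulse.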

    An adaptive stereo basis method for convolutive blind audio source separation

    NOTICE: this is the author's version of a work that was accepted for publication in Neurocomputing. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Neurocomputing, 71(10-12), June 2008, DOI:neucom.2007.08.02

    Room Equalization Based on Measurements of Moving Microphones

    Room impulse response equalization methods are used to enhance the perceived quality of sound played back in a closed room. For better control of late echoes, new approaches, such as temporal masking, exploit the properties of the human auditory system. In order to allow the listener to move freely, even within a small area, a whole volume has to be equalized. Traditional methods require the measurement of a huge number of impulse responses in this volume. In this work we propose a method which greatly reduces the measurement effort. We employ a dynamic method for measuring impulse responses using just one moving microphone. The reconstructed impulse responses are used to perform equalization, and a simple interpolation technique allows for equalization at the position of the listener, who can move freely inside the measured volume.
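    As an illustration of the last step, interpolating reconstructed impulse responses to the listener position and inverting the result can be sketched as below. The linear interpolation between two neighbouring measurement positions and the frequency-domain regularized inversion are generic placeholders, not the specific reconstruction or equalization scheme of the paper.

```python
import numpy as np

def interpolate_rir(h_a, h_b, alpha):
    """Linearly interpolate two reconstructed RIRs (alpha in [0, 1])."""
    return (1.0 - alpha) * h_a + alpha * h_b

def regularized_inverse_filter(h, n_fft=8192, beta=1e-3):
    """Regularized frequency-domain inversion of an RIR.

    beta damps the gain where |H(f)| is small, avoiding excessive boost.
    """
    H = np.fft.rfft(h, n_fft)
    G = np.conj(H) / (np.abs(H) ** 2 + beta)
    return np.fft.irfft(G, n_fft)

# Hypothetical usage: equalize at a listener position 30% of the way
# between two measurement points, then pre-filter the playback signal.
# h_eq = regularized_inverse_filter(interpolate_rir(h1, h2, 0.3))
# y = np.convolve(audio, h_eq)
```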

    Advanced Channel Estimation Techniques for Multiple-Input Multiple-Output Multi-Carrier Systems in Doubly-Dispersive Channels

    Flexible numerology of the physical layer has been introduced in the latest release of 5G new radio (NR), and the baseline waveform is cyclic-prefix-based orthogonal frequency division multiplexing (CP-OFDM). Thanks to its narrow subcarrier spacing and low-complexity one-tap equalization (EQ), OFDM is well suited to time-dispersive channels. For the upcoming 5G and beyond use-case scenarios, it is foreseen that users might experience high-mobility conditions. While the frame structure of 5G NR is designed for long coherence times, the synchronization and channel estimation (CE) procedures are not fully and reliably covered for diverse applications. Research on alternative multi-carrier waveforms has produced valuable results in terms of spectral efficiency, coexistence of applications, and flexibility. Nevertheless, the receiver design becomes more challenging for multiple-input multiple-output (MIMO) non-orthogonal multi-carriers, because the receiver must deal with multiple dimensions of interference. This thesis aims to deliver accurate pilot-aided estimates of the wireless channel for coherent detection. Considering a MIMO non-orthogonal multi-carrier, e.g. generalized frequency division multiplexing (GFDM), we initially derive the classical and Bayesian estimators for rich multi-path fading channels, where we theoretically assess the choice of pilot design. Moreover, the good time- and frequency-localization of the pilots in non-orthogonal multi-carriers allows their energy in the cyclic prefix (CP) to be reused. Taking advantage of this feature, we derive an iterative approach for joint CE and EQ of MIMO systems. Furthermore, exploiting the block-circularity of GFDM, we comprehensively analyze the complexity aspects and propose a solution for low-complexity implementation. In very high mobility use-cases where the channel varies within the symbol duration, further considerations, particularly the channel coherence time, must be taken into account. A promising candidate that is fully independent of the multi-carrier choice is unique word (UW) transmission, where the random CP is replaced by a deterministic sequence. This feature allows per-block synchronization and channel estimation for robust transmission over extremely doubly-dispersive channels. In this thesis, we propose a novel approach to extend the UW-based physical layer design to MIMO systems, and we provide an in-depth study of the out-of-band emission, synchronization, CE and EQ procedures.
Via theoretical derivations, simulation results, and comparisons with state-of-the-art CP-OFDM systems, we show that the proposed UW-based frame design facilitates robust transmission over extremely doubly-dispersive channels.
    Contents:
    1 Introduction
      1.1 Multi-Carrier Waveforms
      1.2 MIMO Systems
      1.3 Contributions and Thesis Structure
      1.4 Notations
    2 State-of-the-art and Fundamentals
      2.1 Linear Systems and Problem Statement
      2.2 GFDM Modulation
      2.3 MIMO Wireless Channel
      2.4 Classical and Bayesian Channel Estimation in MIMO OFDM Systems
      2.5 UW-Based Transmission in SISO Systems
      2.6 Summary
    3 Channel Estimation for MIMO Non-Orthogonal Waveforms
      3.1 Classical and Bayesian Channel Estimation in MIMO GFDM Systems
        3.1.1 MIMO LS Channel Estimation
        3.1.2 MIMO LMMSE Channel Estimation
        3.1.3 Simulation Results
      3.2 Basic Pilot Designs for GFDM Channel Estimation
        3.2.1 LS/HM Channel Estimation
        3.2.2 LMMSE Channel Estimation for GFDM
        3.2.3 Error Characterization
        3.2.4 Simulation Results
      3.3 Interference-Free Pilot Insertion for MIMO GFDM Channel Estimation
        3.3.1 Interference-Free Pilot Insertion
        3.3.2 Pilot Observation
        3.3.3 Complexity
        3.3.4 Simulation Results
      3.4 Bayesian Pilot- and CP-aided Channel Estimation in MIMO Non-Orthogonal Multi-Carriers
        3.4.1 Review on System Model
        3.4.2 Single-Input-Single-Output Systems
        3.4.3 Extension to MIMO
        3.4.4 Application to GFDM
        3.4.5 Joint Channel Estimation and Equalization via LMMSE Parallel Interference Cancellation
        3.4.6 Complexity Analysis
        3.4.7 Simulation Results
      3.5 Pilot- and CP-aided Channel Estimation in Time-Varying Scenarios
        3.5.1 Adaptive Filtering based on Wiener-Hopf Approach
        3.5.2 Simulation Results
      3.6 Summary
    4 Design of UW-Based Transmission for MIMO Multi-Carriers
      4.1 Frame Design, Efficiency and Overhead Analysis
        4.1.1 Illustrative Scenario
        4.1.2 CP vs. UW Efficiency Analysis
        4.1.3 Numerical Results
      4.2 Sequences for UW and OOB Radiation
        4.2.1 Orthogonal Polyphase Sequences
        4.2.2 Waveform Engineering for UW Sequences combined with GFDM
        4.2.3 Simulation Results for OOB Emission of UW-GFDM
      4.3 Synchronization
        4.3.1 Transmission over a Centralized MIMO Wireless Channel
        4.3.2 Coarse Time Acquisition
        4.3.3 CFO Estimation and Removal
        4.3.4 Fine Time Acquisition
        4.3.5 Simulation Results
      4.4 Channel Estimation
        4.4.1 MIMO UW-based LMMSE CE
        4.4.2 Adaptive Filtering
        4.4.3 Circular UW Transmission
        4.4.4 Simulation Results
      4.5 Equalization with Imperfect Channel Knowledge
        4.5.1 UW-Free Equalization
        4.5.2 Simulation Results
      4.6 Summary
    5 Conclusions and Perspectives
      5.1 Main Outcomes in Short
      5.2 Open Challenges
    A Complementary Materials
      A.1 Linear Algebra Identities
      A.2 Proof of lower triangular Toeplitz channel matrix being defective
      A.3 Calculation of noise-plus-interference covariance matrix for Pilot- and CP-aided CE
      A.4 Block diagonalization of the effective channel for GFDM
      A.5 Detailed complexity analysis of Sec. 3.4
      A.6 CRLB derivations for the pdf (4.24)
      A.7 Proof that (4.45) emulates a circular CIR at the receiver
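    As a rough illustration of the classical (LS) versus Bayesian (LMMSE) pilot-based estimators discussed in the abstract and in Section 3.1 of the outline, the following generic sketch is not the thesis's GFDM-specific derivation; the pilot matrix `X`, channel covariance `R_h`, and noise variance are hypothetical inputs.

```python
import numpy as np

def ls_estimate(X, y):
    """Classical least-squares channel estimate from pilot observations y = X h + n."""
    h_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
    return h_ls

def lmmse_estimate(X, y, R_h, noise_var):
    """Bayesian LMMSE estimate using the channel covariance R_h as prior knowledge."""
    S = X @ R_h @ X.conj().T + noise_var * np.eye(X.shape[0])
    return R_h @ X.conj().T @ np.linalg.solve(S, y)
```

    With an accurate channel covariance and noise variance, the LMMSE estimate achieves a lower mean-square error than LS, at the cost of requiring that prior knowledge and an extra matrix inversion.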

    An investigation of the utility of monaural sound source separation via nonnegative matrix factorization applied to acoustic echo and reverberation mitigation for hands-free telephony

    In this thesis we investigate the applicability and utility of Monaural Sound Source Separation (MSSS) via Nonnegative Matrix Factorization (NMF) for various problems related to audio for hands-free telephony. We first investigate MSSS via NMF as an alternative acoustic echo reduction approach to existing approaches such as Acoustic Echo Cancellation (AEC). To this end, we present the single-channel acoustic echo problem as an MSSS problem, in which the objective is to extract the user's signal from a mixture also containing acoustic echo and noise. To perform separation, NMF is used to decompose the near-end microphone signal onto the union of two nonnegative bases in the magnitude Short Time Fourier Transform domain. One of these bases is for the spectral energy of the acoustic echo signal, and is formed from the incoming far-end user's speech, while the other basis is for the spectral energy of the near-end speaker, and is trained with speech data a priori. In comparison to AEC, the speaker extraction approach obviates Double-Talk Detection (DTD), and is demonstrated to attain its maximal echo mitigation performance immediately upon initiation and to maintain that performance during and after room changes, for similar computational requirements. Speaker extraction is also shown to introduce distortion of the near-end speech signal during double-talk, which is quantified by means of a speech distortion measure and compared to that of AEC. Subsequently, we address DTD for block-based AEC algorithms. We propose a novel block-based DTD algorithm that uses the available signals and the estimate of the echo signal produced by NMF-based speaker extraction to compute a suitably normalized correlation-based decision variable, which is compared to a fixed threshold to decide on double-talk. Using a standard evaluation technique, the proposed algorithm is shown to have detection performance comparable to an existing conventional block-based DTD algorithm. It is also demonstrated to inherit the room change insensitivity of speaker extraction, generating minimal false double-talk indications upon initiation and in response to room changes in comparison to the existing conventional DTD. We also show that this property allows its paired AEC to converge at a rate close to the optimum. Another focus of this thesis is the problem of inverting a single measurement of a non-minimum phase Room Impulse Response (RIR). We describe the process by which perceptually detrimental all-pass phase distortion arises in reverberant speech filtered by the inverse of the minimum phase component of the RIR; in short, such distortion arises from inverting the magnitude response of the high-Q maximum phase zeros of the RIR. We then propose two novel partial inversion schemes that precisely mitigate this distortion. One of these schemes employs NMF-based MSSS to separate the all-pass phase distortion from the target speech in the magnitude STFT domain, while the other approach modifies the inverse minimum phase filter such that the magnitude response of the maximum phase zeros of the RIR is not fully compensated. Subjective listening tests reveal that the proposed schemes generally produce better quality output speech than a comparable inversion technique.
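    A minimal sketch of the NMF decomposition described above: fixed echo and near-end dictionaries, multiplicative updates for the activations, and a soft mask that keeps the near-end component. The Euclidean-cost update rule and the mask are generic placeholders rather than the thesis's exact formulation.

```python
import numpy as np

def extract_near_end(V, W_echo, W_speech, n_iter=100, eps=1e-12):
    """Separate the near-end speaker from a magnitude STFT V = |STFT(mic)|.

    W_echo   : basis for the echo spectra (built from the far-end signal)
    W_speech : basis for near-end speech (trained a priori)
    Returns a magnitude estimate of the near-end speech component.
    """
    W = np.hstack([W_echo, W_speech])            # union of the two bases
    H = np.random.rand(W.shape[1], V.shape[1])   # nonnegative activations
    for _ in range(n_iter):                      # multiplicative updates, Euclidean cost
        H *= (W.T @ V) / (W.T @ W @ H + eps)
    V_hat = W @ H
    V_speech = W_speech @ H[W_echo.shape[1]:]    # near-end contribution only
    return V * V_speech / (V_hat + eps)          # soft mask applied to the mixture
```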

    Smart Sound Control in Acoustic Sensor Networks: a Perceptual Perspective

    Audio systems have been extensively developed in recent years thanks to the increase of devices with high-performance processors able to perform more efficient processing. In addition, wireless communications allow devices in a network to be located in different places without physical limitations. The combination of these technologies has led to the emergence of Acoustic Sensor Networks (ASN). An ASN is composed of nodes equipped with audio transducers, such as microphones or speakers. In the case of acoustic field monitoring, only acoustic sensors need to be incorporated into the ASN nodes. However, in the case of control applications, the nodes must interact with the acoustic field through loudspeakers. An ASN can be implemented with low-cost devices, such as Raspberry Pi or mobile devices, capable of managing multiple microphones and loudspeakers and offering good computational capacity. In addition, these devices can communicate through wireless connections, such as Wi-Fi or Bluetooth. Therefore, in this dissertation, an ASN composed of mobile devices connected to wireless speakers through a Bluetooth link is proposed. Additionally, the problem of synchronization between the devices in an ASN is one of the main challenges to be addressed, since the audio processing performance is very sensitive to the lack of synchronism. Therefore, an analysis of the synchronization problem between devices connected to wireless speakers in an ASN is also carried out. In this regard, one of the main contributions is the analysis of the audio latency when the acoustic nodes in the ASN are comprised of mobile devices communicating with the corresponding loudspeakers through Bluetooth links. A second significant contribution of this dissertation is the implementation of a method to synchronize the different devices of an ASN, together with a study of its limitations. Finally, the proposed method has been used to implement personal sound zones (PSZ) applications. Therefore, the implementation and analysis of the performance of different audio applications over an ASN composed of mobile devices and wireless speakers is also a significant contribution in the area of ASN.
In cases where the acoustic environment negatively affects the perception of the audio signal emitted by the ASN loudspeakers, equalization techniques are used with the objective of enhancing the perception of the audio signal. For this purpose, a smart equalization system is implemented in this dissertation. In this regard, psychoacoustic algorithms are employed to implement smart processing, based on the human hearing system, that is capable of adapting to changes in the environment. Therefore, another important contribution of this thesis focuses on the analysis of the spectral masking between two complex sounds. This analysis allows the masking threshold of one sound over the other to be calculated more accurately than with currently used methods. This method is used to implement a perceptual equalization application that aims to improve the perception of the audio signal in the presence of ambient noise. To this end, this thesis proposes two different equalization algorithms: 1) pre-equalizing the audio signal so that it is perceived above the masking threshold of the ambient noise, and 2) designing a perceptual control of ambient noise in active noise equalization (ANE) systems, so that the perceived ambient noise level is below the masking threshold of the audio signal. Therefore, the last contribution of this dissertation is the implementation of a perceptual equalization application with the two different embedded equalization algorithms and the analysis of their performance on the testbed built in the GTAC-iTEAM laboratory.
    This work has received financial support from the following projects:
    • SSPRESING: Smart Sound Processing for the Digital Living (Reference: TEC2015-67387-C4-1-R. Entity: Ministerio de Economia y Empresa. Spain).
    • FPI: Ayudas para contratos predoctorales para la formación de doctores (Reference: BES-2016-077899. Entity: Agencia Estatal de Investigación. Spain).
    • DANCE: Dynamic Acoustic Networks for Changing Environments (Reference: RTI2018-098085-B-C41-AR. Entity: Agencia Estatal de Investigación. Spain).
    • DNOISE: Distributed Network of Active Noise Equalizers for Multi-User Sound Control (Reference: H2020-FETOPEN-4-2016-2017. Entity: I+D Colaborativa competitiva. Comisión de las comunidades europea).
    Estreder Campos, J. (2022). Smart Sound Control in Acoustic Sensor Networks: a Perceptual Perspective [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/181597
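    As a toy illustration of the first equalization strategy (pre-equalizing the signal so that it stays above the noise masking threshold), the sketch below raises per-band signal energy above a crude threshold derived from the noise spectrum. The fixed 10 dB offset and the simple banding are placeholder simplifications, not the psychoacoustic masking model developed in the thesis.

```python
import numpy as np

def band_energies(spectrum, band_edges):
    """Sum |X(f)|^2 in each band defined by FFT-bin index edges."""
    return np.array([np.sum(np.abs(spectrum[lo:hi]) ** 2)
                     for lo, hi in zip(band_edges[:-1], band_edges[1:])])

def pre_eq_gains(signal_spec, noise_spec, band_edges, offset_db=10.0):
    """Per-band amplitude gains so the signal exceeds a crude noise masking threshold."""
    sig_e = band_energies(signal_spec, band_edges)
    noise_e = band_energies(noise_spec, band_edges)
    threshold = noise_e * 10.0 ** (offset_db / 10.0)   # simplistic masking threshold
    # Boost only the bands where the signal would otherwise be masked by the noise.
    return np.sqrt(np.maximum(threshold / np.maximum(sig_e, 1e-12), 1.0))
```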