182 research outputs found

    Localization of Sound Sources in a Room with One Microphone

    Estimation of the location of sound sources is usually done using microphone arrays. Such settings provide an environment where the differences between the signals received at different microphones, in terms of phase or attenuation, are known, which enables localization of the sound sources. In our solution we exploit the properties of the room transfer function in order to localize a sound source inside a room with only one microphone. The shape of the room and the position of the microphone are assumed to be known. The design guidelines and limitations of the sensing matrix are given. Implementation is based on sparsity in terms of the voxels in a room that are occupied by a source. What is especially interesting about our solution is that we localize the sound sources not only in the horizontal plane, but in terms of full 3D coordinates inside the room.
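The sparse-recovery step behind voxel-based localization can be sketched with a generic greedy solver. This is a minimal illustration, not the authors' exact method: the random matrix A merely stands in for a dictionary of room transfer functions (rows = microphone measurements, columns = voxels), and all names are illustrative.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x with y ≈ A x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the dictionary atom most correlated with the residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Re-fit the selected atoms by least squares, update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 200))   # stand-in RTF dictionary (measurements x voxels)
A /= np.linalg.norm(A, axis=0)       # unit-norm columns
x_true = np.zeros(200)
x_true[[17, 123]] = [1.0, 0.8]       # two "occupied voxels"
y = A @ x_true                       # noiseless measurements
x_hat = omp(A, y, k=2)               # recovers the occupied voxels
```

With enough incoherent measurements, the greedy solver identifies exactly the occupied voxels; the real problem additionally constrains the sensing matrix through the room geometry.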

    Nearfield Acoustic Holography using sparsity and compressive sampling principles

    Regularization of the inverse problem is a complex issue when using Near-field Acoustic Holography (NAH) techniques to identify the vibrating sources. This paper shows that, for convex homogeneous plates with arbitrary boundary conditions, new regularization schemes can be developed, based on the sparsity of the normal velocity of the plate in a well-designed basis, i.e. the possibility to approximate it as a weighted sum of few elementary basis functions. In particular, these new techniques can handle discontinuities of the velocity field at the boundaries, which can be problematic with standard techniques. This comes at the cost of a higher computational complexity to solve the associated optimization problem, though it remains easily tractable with out-of-the-box software. Furthermore, this sparsity framework allows us to take advantage of the concept of Compressive Sampling: under some conditions on the sampling process (here, the design of a random array, which can be numerically and experimentally validated), it is possible to reconstruct the sparse signals with significantly fewer measurements (i.e., microphones) than classically required. After introducing the different concepts, this paper presents numerical and experimental results of NAH with two plate geometries, and compares the advantages and limitations of these sparsity-based techniques over standard Tikhonov regularization. (Journal of the Acoustical Society of America, 2012.)
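The contrast between Tikhonov regularization and a sparsity-promoting scheme can be shown on a toy underdetermined inverse problem. This is a generic sketch, not the paper's plate model: the matrix A stands in for the acoustic transfer operator, and ISTA is one standard l1 solver among many.

```python
import numpy as np

def tikhonov(A, y, lam):
    """Standard Tikhonov (ridge) solution of y ≈ A x."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

def ista(A, y, lam, n_iter=2000):
    """Iterative soft-thresholding for the l1-regularised (sparse) solution."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - (A.T @ (A @ x - y)) / L    # gradient step on the data fit
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 80)) / np.sqrt(30)   # toy transfer operator
x_true = np.zeros(80)
x_true[[5, 40, 60]] = [2.0, -1.5, 1.0]            # sparse "velocity" coefficients
y = A @ x_true
x_ridge = tikhonov(A, y, lam=1e-2)
x_sparse = ista(A, y, lam=1e-3)                    # much closer to x_true
```

On such a sparse ground truth, the l1 estimate beats the ridge estimate at the cost of an iterative solve, mirroring the trade-off the abstract describes.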

    Rake, Peel, Sketch: The Signal Processing Pipeline Revisited

    The prototypical signal processing pipeline can be divided into four blocks: representation of the signal in a basis suitable for processing; enhancement of the meaningful part of the signal and noise reduction; estimation of important statistical properties of the signal; and adaptive processing to track and adapt to changes in the signal statistics. This thesis revisits each of these blocks and proposes new algorithms, borrowing ideas from information theory, theoretical computer science, or communications. First, we revisit the Walsh-Hadamard transform (WHT) for the case of a signal sparse in the transformed domain, namely one that has only K ≪ N non-zero coefficients. We show that an efficient algorithm exists that can compute these coefficients in O(K log2(K) log2(N/K)) operations using only O(K log2(N/K)) samples. This algorithm relies on a fast hashing procedure that computes small linear combinations of transformed domain coefficients. A bipartite graph is formed with linear combinations on one side, and non-zero coefficients on the other. A peeling decoder is then used to recover the non-zero coefficients one by one. A detailed analysis of the algorithm based on error correcting codes over the binary erasure channel is given. The second chapter is about beamforming. Inspired by the rake receiver from wireless communications, we recognize that echoes in a room are an important source of extra signal diversity. We extend several classic beamforming algorithms to take advantage of echoes and also propose new optimal formulations. We explore formulations both in time and frequency domains. We show theoretically and in numerical simulations that the signal-to-interference-and-noise ratio increases proportionally to the number of echoes used. Finally, beyond objective measures, we show that echoes also directly improve speech intelligibility as measured by the perceptual evaluation of speech quality (PESQ) metric.
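For context, the dense O(N log N) fast Walsh-Hadamard transform that the sparse O(K log2(K) log2(N/K)) algorithm improves upon can be written in a few lines; the hashing/peeling scheme works by subsampling exactly this kind of butterfly structure. A minimal sketch of the dense baseline (unnormalized convention):

```python
import numpy as np

def fwht(a):
    """Dense fast Walsh-Hadamard transform in O(N log N), N a power of two.
    Unnormalized: applying it twice returns N times the input."""
    a = a.astype(float).copy()
    n = len(a)
    h = 1
    while h < n:
        # Butterfly stage: combine pairs h apart with +/- signs.
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

x = np.arange(8.0)
X = fwht(x)                 # transform-domain coefficients
x_back = fwht(X) / 8        # involution up to the factor N
```

The sparse algorithm avoids touching all N samples by hashing the K non-zero transform coefficients into buckets and peeling them off one at a time.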
Next, we attack the problem of direction of arrival estimation for acoustic sources, to which we apply a robust finite rate of innovation reconstruction framework. FRIDA, the resulting algorithm, exploits wideband information coherently, works at very low signal-to-noise ratio, and can resolve very close sources. The algorithm can use either raw microphone signals or their cross-correlations. While the former lets us work with correlated sources, the latter creates a quadratic number of measurements that allows us to locate many sources with few microphones. Thorough experiments on simulated and recorded data show that FRIDA compares favorably with the state-of-the-art. We continue by revisiting the classic recursive least squares (RLS) adaptive filter with ideas borrowed from recent results on sketching least squares problems. The exact update of RLS is replaced by a few steps of conjugate gradient descent. We then propose two different preconditioners, obtained by sketching the data, to accelerate the convergence of the gradient descent. Experiments on artificial as well as natural signals show that the proposed algorithm performs very close to RLS at a lower computational burden. The fifth and final chapter is dedicated to the software and hardware tools developed for this thesis. We describe the pyroomacoustics Python package, which contains routines for the evaluation of audio processing algorithms and reference implementations of popular algorithms. We then give an overview of the microphone arrays developed.
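The idea of replacing an exact least-squares update with a few conjugate-gradient steps can be sketched on a static problem. This is a hedged illustration of the principle only: it runs plain CG on the normal equations, without the data sketching, preconditioning, or the recursive structure of the thesis' algorithm.

```python
import numpy as np

def cg_normal_eqs(A, b, n_steps, x0=None):
    """A few conjugate-gradient steps on A^T A x = A^T b, the cheap
    stand-in for an exact least-squares (RLS-style) update."""
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    r = A.T @ (b - A @ x)          # residual of the normal equations
    p = r.copy()
    for _ in range(n_steps):
        Ap = A.T @ (A @ p)
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((100, 10))
b = A @ rng.standard_normal(10) + 0.01 * rng.standard_normal(100)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
x_cg = cg_normal_eqs(A, b, n_steps=10)   # 10 steps on a 10-dim problem: exact
```

In practice far fewer steps than the dimension suffice when a good (e.g. sketched) preconditioner keeps the effective condition number small, which is exactly the lever the thesis pulls.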

    AN EFFICIENT AND ROBUST MULTI-STREAM FRAMEWORK FOR END-TO-END SPEECH RECOGNITION

    In voice-enabled domestic or meeting environments, distributed microphone arrays aim to transcribe distant-speech interaction into text with high accuracy. However, with dynamic corruption from noise and reverberation or human movement, there is no guarantee that any one microphone array (stream) is constantly informative. In these cases, an appropriate strategy to dynamically fuse streams is necessary. The multi-stream paradigm in Automatic Speech Recognition (ASR) considers scenarios where parallel streams carry diverse or complementary task-related knowledge. Such streams could be microphone arrays, frequency bands, or various modalities. Hence, a robust stream fusion is crucial to emphasize more informative streams over corrupted ones, especially under unseen conditions. This thesis focuses on improving the performance and robustness of speech recognition in multi-stream scenarios. With increasing use of Deep Neural Networks (DNNs) in ASR, End-to-End (E2E) approaches, which directly transcribe human speech into text, have received greater attention. In this thesis, a multi-stream framework is presented based on the joint Connectionist Temporal Classification/ATTention (CTC/ATT) E2E model, where parallel streams are represented by separate encoders. On top of regular attention networks, a secondary stream-fusion network is used to steer the decoder toward the most informative streams. The MEM-Array model aims at improving far-field ASR robustness using microphone arrays, each handled by a separate encoder. With an increasing number of streams (encoders) requiring substantial memory and massive amounts of parallel data, a practical two-stage training strategy is designed to address these issues. Furthermore, a two-stage augmentation scheme is presented to improve the robustness of the multi-stream model.
In MEM-Res, two heterogeneous encoders with different architectures, temporal resolutions and separate CTC networks work in parallel to extract complementary information from the same acoustics. Compared with the best single-stream performance, both models have achieved substantial improvement, outperforming alternative fusion strategies. While the proposed framework optimizes information in multi-stream scenarios, this thesis also studies Performance Monitoring (PM) measures to predict whether recognition results of an E2E model are reliable without ground-truth knowledge. Four PM techniques are investigated, suggesting that PM measures on attention distributions and decoder posteriors are well-correlated with true performance.
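The stream-fusion step, i.e. weighting each encoder's context vector by a learned reliability score before decoding, can be sketched with plain softmax weighting. This is a minimal illustration of the mechanism, not the actual network: the scores here are given by hand, whereas in the framework a secondary attention network produces them.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse_streams(stream_feats, stream_scores):
    """Combine per-stream context vectors with softmax'd reliability
    scores, so the decoder leans on the most informative stream."""
    w = softmax(np.asarray(stream_scores, dtype=float))
    return w @ np.stack(stream_feats), w

clean = np.array([1.0, 0.0, 0.0])   # context vector from a clean array
noisy = np.array([0.2, 0.7, 0.1])   # context vector from a corrupted array
fused, weights = fuse_streams([clean, noisy], stream_scores=[2.0, -1.0])
```

The fused vector is a convex combination dominated by the higher-scoring stream, which is the behaviour the stream-fusion network learns end-to-end.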

    Assessing HRTF preprocessing methods for Ambisonics rendering through perceptual models

    Binaural rendering of Ambisonics signals is a common way to reproduce spatial audio content. Processing Ambisonics signals at low spatial orders is desirable in order to reduce complexity, although it may degrade the perceived quality, in part due to the mismatch that occurs when a low-order Ambisonics signal is paired with a spatially dense head-related transfer function (HRTF). In order to alleviate this issue, the HRTF may be preprocessed so its spatial order is reduced. Several preprocessing methods have been proposed, but they have not been thoroughly compared yet. In this study, nine HRTF preprocessing methods were used to render anechoic binaural signals from Ambisonics representations of orders 1 to 44, and these were compared through perceptual hearing models in terms of localisation performance, externalisation and speech reception. This assessment was supported by numerical analyses of HRTF interpolation errors, interaural differences, perceptually-relevant spectral differences, and loudness stability. Models predicted that the binaural renderings’ accuracy increased with spatial order, as expected. A notable effect of the preprocessing method was observed: whereas all methods performed similarly at the highest spatial orders, some were considerably better at lower orders. A newly proposed method, BiMagLS, displayed the best performance overall and is recommended for the rendering of bilateral Ambisonics signals. The results, which were in line with previous literature, indirectly validate the perceptual models’ ability to predict listeners’ responses in a consistent and explicable manner.
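One of the supporting numerical analyses, a spectral difference between a reference and a preprocessed HRTF magnitude response, can be sketched as a mean absolute log-spectral error in dB. This is a generic measure of that kind, not necessarily the study's exact formulation, and the toy "order limiting" below is just a smoothing stand-in.

```python
import numpy as np

def spectral_difference_db(ref_mag, test_mag, eps=1e-12):
    """Mean absolute log-spectral difference in dB between two
    magnitude responses (a common, simple HRTF error measure)."""
    diff = 20.0 * np.log10((np.abs(test_mag) + eps) / (np.abs(ref_mag) + eps))
    return float(np.mean(np.abs(diff)))

f = np.linspace(0, 1, 256)
ref = 1.0 + 0.5 * np.cos(4 * np.pi * f)                    # toy reference magnitude
smoothed = np.convolve(ref, np.ones(8) / 8, mode="same")   # crude stand-in for
                                                           # order reduction
err_db = spectral_difference_db(ref, smoothed)             # > 0 dB
identity_err = spectral_difference_db(ref, ref)            # exactly 0 dB
```

A preprocessing method that keeps this error small at low orders is the kind of candidate the perceptual models then rank on localisation, externalisation and speech reception.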

    Adaptive frequency domain identification for ANC systems using non-stationary signals

    The problem of identifying the secondary path in active noise control (ANC) applications is traditionally addressed using time-domain adaptive filters. The use of adaptive frequency-domain subband identification as an alternative has some significant advantages which are overlooked in such applications. In this paper two different delayless subband adaptive algorithms for identification of an unknown secondary path in an ANC framework are utilized and compared. Besides reduced computational complexity and an increased convergence rate, this approach allows us to use non-stationary audio signals as the excitation input and so avoid the injection of annoying white noise. For this purpose two non-stationary music and speech signals are used for identification. The performance of the algorithms is measured in terms of minimum mean square error and convergence speed. The results are also compared to a fullband algorithm in the same scenario. The proposed delayless algorithms have a closed-loop structure with DFT filterbanks as the analysis filters. To eliminate the delay in the signal path, two different weight-transformation schemes are compared.

    Acoustic Source Localisation in constrained environments

    Acoustic Source Localisation (ASL) is a problem with real-world applications across multiple domains, from smart assistants to acoustic detection and tracking. And yet, despite the level of attention in recent years, a technique for rapid and robust ASL remains elusive – not least in the constrained environments in which such techniques are most likely to be deployed. In this work, we seek to address some of these current limitations by presenting improvements to ASL methods under three commonly encountered constraints: the number and configuration of sensors; the limited signal sampling potentially available; and the nature and volume of training data required to accurately estimate Direction of Arrival (DOA) when deploying a particular supervised machine learning technique. In regard to the number and configuration of sensors, we find that accuracy can be maintained at the level of the state-of-the-art Steered Response Power (SRP) method while reducing computation sixfold, based on direct optimisation of well-known ASL formulations. Moreover, we find that the circular microphone configuration is the least desirable, as it yields the highest localisation error. In regard to signal sampling, we demonstrate that the computer-vision-inspired algorithm presented in this work, which extracts selected keypoints from the signal spectrogram and uses them to select signal samples, outperforms an audio fingerprinting baseline while maintaining a compression ratio of 40:1.
In regard to the training data employed in machine learning ASL techniques, we show that the use of music training data yields an improvement of 19% over a noise-data baseline while maintaining accuracy using only 25% of the training data, and that training with speech as opposed to noise improves DOA estimation by an average of 17%, outperforming the Generalised Cross-Correlation technique by 125% in scenarios in which the test and training acoustic environments are matched. This work was supported by a Heriot-Watt University James Watt Scholarship (JSW) in the School of Engineering & Physical Sciences.
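The Generalised Cross-Correlation technique mentioned above, in its PHAT-weighted form that also underlies SRP, can be sketched for a single microphone pair: estimate the inter-microphone delay from the phase of the cross-spectrum. A minimal sketch with a simulated integer-sample delay:

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """GCC-PHAT time-delay estimate between two microphone signals
    (the per-pair building block of SRP-style localisation)."""
    n = len(sig) + len(ref)
    S = np.fft.rfft(sig, n=n)
    R = np.fft.rfft(ref, n=n)
    cross = S * np.conj(R)
    cross /= np.abs(cross) + 1e-12           # phase transform weighting
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2 if max_tau is None else int(fs * max_tau)
    cc = np.concatenate((cc[-max_shift:], cc[: max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs                        # delay in seconds

fs = 16000
rng = np.random.default_rng(4)
ref = rng.standard_normal(4096)
delay_samples = 25                           # simulate a 25-sample lag
sig = np.concatenate((np.zeros(delay_samples), ref[:-delay_samples]))
tau = gcc_phat(sig, ref, fs)                 # ≈ 25 / 16000 s
```

SRP sums such correlations over many pairs and candidate directions, which is what makes its direct optimisation, as pursued in this work, computationally worthwhile.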

    Cough Monitoring Through Audio Analysis

    The detection of cough events in audio recordings requires the analysis of a significant amount of data, as cough is typically monitored continuously over several hours to capture naturally occurring cough events. The recorded data is mostly composed of undesired sound events such as silence, background noise, and speech. To reduce computational costs and to address the ethical concerns raised by the collection of audio data in public environments, the data requires pre-processing prior to any further analysis. Current cough detection algorithms typically use pre-processing methods to remove undesired audio segments from the collected data but do not preserve the privacy of the individuals being recorded while monitoring respiratory events. This study reveals the need for an automatic pre-processing method that removes sensitive data from the recording prior to any further analysis to ensure the privacy of individuals. Specific characteristics of cough sounds can be used to discard sensitive data from audio recordings at the pre-processing stage, improving privacy preservation and decreasing ethical concerns when dealing with cough monitoring through audio analysis. We propose a pre-processing algorithm that increases privacy preservation and significantly decreases the amount of data to be analysed by separating cough segments from other, non-cough segments, including speech, in audio recordings. Our method verifies the presence of signal energy in both lower and higher frequency regions and discards segments whose energy is concentrated in only one of them. The method is applied iteratively to the same data to increase the percentage of data reduction and privacy preservation. We evaluated the performance of our algorithm using several hours of audio recordings with manually pre-annotated cough and speech events.
Our results showed that 5 iterations of the proposed method can discard up to 88.94% of the speech content present in the recordings, allowing for strong privacy preservation while considerably reducing the amount of data to be further analysed, by 91.79%. The data reduction and privacy preservation achieved by the proposed pre-processing algorithm offer the possibility to use larger datasets captured in public environments and would benefit all cough detection algorithms by preserving the privacy of subjects and bystander conversations recorded during cough monitoring.
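The energy check described above can be sketched per frame: keep a segment only if both a low and a high frequency band carry energy (cough-like), and discard it otherwise. This is a minimal illustration of the principle; the band edges, threshold, and test tones below are illustrative assumptions, not the paper's parameters (the tone frequencies fall on exact FFT bins to avoid leakage in this toy).

```python
import numpy as np

def keep_frame(frame, fs, low_band=(0, 1000), high_band=(2000, 8000),
               threshold=1e-3):
    """Keep a frame only if it has energy in BOTH a low and a high
    frequency region; discard (privacy-preserve) otherwise."""
    spec = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    def band_energy(lo, hi):
        return spec[(freqs >= lo) & (freqs < hi)].sum() / len(frame)
    return bool(band_energy(*low_band) > threshold
                and band_energy(*high_band) > threshold)

fs = 16000
t = np.arange(1024) / fs
broadband = np.sin(2 * np.pi * 312.5 * t) + np.sin(2 * np.pi * 4000 * t)
low_only = np.sin(2 * np.pi * 312.5 * t)     # narrow, speech-like toy frame
keep_broadband = keep_frame(broadband, fs)   # kept: energy in both bands
keep_low_only = keep_frame(low_only, fs)     # discarded: low band only
```

Iterating this filter over the surviving segments, as the paper does, progressively raises both the data-reduction and the privacy-preservation percentages.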