522 research outputs found

    Design exploration and performance strategies towards power-efficient FPGA-based architectures for sound source localization

    Get PDF
    Many applications rely on MEMS microphone arrays to locate sound sources before executing their main task. These applications not only run under real-time constraints but are also often embedded on low-power devices. Such environments become challenging when the number of microphones increases or dynamic responses are required. Field-Programmable Gate Arrays (FPGAs) are usually chosen for their flexibility and computational power. This work aims to guide the design of reconfigurable acoustic beamforming architectures that are not only able to accurately determine the sound Direction-Of-Arrival (DoA) but also capable of satisfying the most demanding applications in terms of power efficiency. Design considerations for the operations performing the sound localization are discussed and analysed in order to facilitate the elaboration of reconfigurable acoustic beamforming architectures. Performance strategies are proposed and evaluated based on the characteristics of the presented architecture. Finally, this power-efficient architecture is compared to an architecture prioritizing performance in order to reveal the unavoidable design trade-offs.
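
    The FPGA design-space details are not reproduced here, but the core operation such architectures accelerate is delay-and-sum steering over candidate directions. The following is a minimal floating-point steered-response-power sketch in Python/numpy for a linear array; the function name, the integer-sample delays and the one-degree grid are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def estimate_doa(signals, mic_x, fs, c=343.0):
    """Steered-response-power DoA estimate for a linear array (far field).

    signals: (M, N) microphone samples; mic_x: (M,) positions in metres.
    """
    angles = np.linspace(-90.0, 90.0, 181)            # 1-degree candidate grid
    powers = np.empty(len(angles))
    for i, ang in enumerate(angles):
        # Far-field delay of each microphone for a plane wave from `ang`
        delays = mic_x * np.sin(np.deg2rad(ang)) / c  # seconds
        shifts = np.round(delays * fs).astype(int)    # whole samples
        aligned = [np.roll(ch, -s) for ch, s in zip(signals, shifts)]
        powers[i] = np.mean(np.sum(aligned, axis=0) ** 2)
    return angles[np.argmax(powers)]                  # estimated DoA in degrees
```

    An FPGA implementation would replace the floating-point alignment with fixed-point delay lines, which is where the power/performance trade-offs discussed in the paper arise.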

    CABE : a cloud-based acoustic beamforming emulator for FPGA-based sound source localization

    Get PDF
    Microphone arrays are gaining in popularity thanks to the availability of low-cost microphones. Applications including sonar, binaural hearing aids, acoustic indoor localization and speech recognition have been proposed by several research groups and companies. In most of the available implementations, the microphones are assumed to offer an ideal response in a given frequency range. Several toolboxes and software packages can be used to obtain the theoretical response of a microphone array with a given beamforming algorithm. However, a tool facilitating the design of a microphone array that takes the non-ideal characteristics into account could not be found. Moreover, generating packages facilitating the implementation on Field Programmable Gate Arrays has, to our knowledge, not been carried out yet. Visualizing the responses in 2D and 3D also poses an engineering challenge. To alleviate these shortcomings, a scalable Cloud-based Acoustic Beamforming Emulator (CABE) is proposed. The non-ideal characteristics of the microphones are considered during the computations, and the results are validated with acoustic data captured from microphones. It is also possible to generate hardware description language packages containing delay tables that facilitate the implementation of Delay-and-Sum beamformers in embedded hardware. Truncation error analysis can also be carried out for fixed-point signal processing. The effects of disabling a given group of microphones within the array can also be calculated. Results and packages can be visualized with a dedicated client application. Users can create and configure several parameters of an emulation, including sound source placement, the shape of the microphone array and the required signal processing flow. Depending on the user configuration, 2D and 3D graphs showing the beamforming results, waterfall diagrams and performance metrics can be generated by the client application. The emulations are also validated with captured data from existing microphone arrays.
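
    As a rough illustration of what such a delay table contains, the sketch below computes integer sample delays for a set of far-field steering azimuths; the function name and the conventions (horizontal-plane steering, delays shifted to be non-negative per direction) are assumptions for this example and not CABE's actual package format.

```python
import numpy as np

def delay_table(mic_xyz, azimuths_deg, fs, c=343.0):
    """Integer sample delays per microphone for far-field steering azimuths.

    mic_xyz: (M, 3) microphone coordinates in metres.
    Returns an (A, M) table, one row per steering direction.
    """
    az = np.deg2rad(np.asarray(azimuths_deg))
    # Unit propagation vectors in the horizontal plane
    dirs = np.stack([np.cos(az), np.sin(az), np.zeros_like(az)], axis=1)
    tau = mic_xyz @ dirs.T / c                  # (M, A) delays in seconds
    tau -= tau.min(axis=0, keepdims=True)       # shift so all delays >= 0
    return np.round(tau.T * fs).astype(int)     # (A, M) sample delays
```

    A generated HDL package would embed such a table as a ROM indexed by steering direction, so the embedded Delay-and-Sum beamformer needs no run-time trigonometry.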

    A robust sequential hypothesis testing method for brake squeal localisation

    Get PDF
    This contribution deals with the in situ detection and localisation of brake squeal in an automobile. As brake squeal is emitted from regions known a priori, i.e., near the wheels, the localisation is treated as a hypothesis testing problem. Distributed microphone arrays, situated under the automobile, are used to capture the directional properties of the sound field generated by a squealing brake. The spatial characteristics of the sampled sound field are then used to formulate the hypothesis tests. However, in contrast to standard hypothesis testing approaches of this kind, the propagation environment is complex and time-varying. Coupled with inaccuracies in the knowledge of the sensor and source positions as well as sensor gain mismatches, modelling the sound field is difficult, and standard approaches fail in this case. A previously proposed approach implicitly tried to account for such incomplete system knowledge and was based on ad hoc likelihood formulations. The current paper builds upon this approach and proposes a second approach, based on more solid theoretical foundations, that can systematically account for the model uncertainties. Results from tests in a real setting show that the proposed approach is more consistent than the prior state of the art. In both approaches, the tasks of detection and localisation are decoupled for complexity reasons. The localisation (hypothesis testing) is subject to a prior detection of brake squeal and identification of the squeal frequencies. The approaches used for the detection and identification of squeal frequencies are also presented. The paper further briefly addresses some practical issues related to array design and placement.
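
    For background, the classical sequential test behind the title is Wald's sequential probability ratio test, sketched below in Python/numpy. The paper's actual formulation is considerably more robust, as it accounts for the model uncertainties in the likelihoods; this is only the textbook skeleton, applied per candidate region once the squeal frequencies are known.

```python
import numpy as np

def sprt(llr_stream, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test over a stream of per-frame
    log-likelihood ratios log p(x|H1) - log p(x|H0).

    alpha and beta are the target false-alarm and miss probabilities.
    """
    upper = np.log((1.0 - beta) / alpha)    # accept H1 at or above this
    lower = np.log(beta / (1.0 - alpha))    # accept H0 at or below this
    s, n = 0.0, 0
    for n, llr in enumerate(llr_stream, start=1):
        s += llr
        if s >= upper:
            return "H1", n   # squeal attributed to this candidate region
        if s <= lower:
            return "H0", n   # no squeal at this region; stop sampling
    return "undecided", n
```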

    MICROPHONE ARRAY OPTIMIZATION IN IMMERSIVE ENVIRONMENTS

    Get PDF
    The complex relationship between array gain patterns and microphone distributions limits the application of traditional optimization algorithms to irregular arrays, which show enhanced beamforming performance for human speech capture in immersive environments. This work analyzes the relationship between irregular microphone geometries and spatial filtering performance using statistical methods. Novel geometry descriptors are developed to capture the properties of irregular microphone distributions and show their impact on array performance. General guidelines and optimization methods for regular and irregular array design in immersive (near-field) environments are proposed to obtain superior beamforming ability for speech applications. Optimization times are greatly reduced through objective functions built on performance-based geometric descriptions of microphone distributions, which circumvent direct array gain computations over the space of interest. In addition, probabilistic descriptions of acoustic scenes are introduced to incorporate various levels of prior knowledge about the source distribution. To verify the effectiveness of the proposed optimization methods, simulated gain patterns and real SNR results of the optimized arrays are compared to corresponding traditional regular arrays and to arrays obtained by direct exhaustive search. Results show large SNR enhancements for the optimized arrays over arbitrary randomly generated arrays and regular arrays, especially at low microphone densities. The rapid convergence and acceptable processing times observed during the experiments establish the feasibility of the proposed optimization methods for array geometry design in immersive environments where rapid deployment is required with limited knowledge of the acoustic scene, such as in mobile platforms and audio surveillance applications.
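
    To make "array gain" concrete for an arbitrary geometry, the sketch below evaluates the narrowband delay-and-sum gain of a given microphone layout against a sampled noise field. It assumes far-field steering for brevity, whereas the thesis targets near-field immersive environments, so treat it as a baseline metric rather than the optimization objective used in the work.

```python
import numpy as np

def array_gain_db(mic_xyz, look_dir, noise_dirs, freq, c=343.0):
    """Narrowband delay-and-sum array gain for an arbitrary geometry:
    power from `look_dir` over mean power from the `noise_dirs` field.
    Direction arguments are unit vectors; mic_xyz is (M, 3) in metres.
    """
    k = 2.0 * np.pi * freq / c
    w = np.exp(-1j * k * mic_xyz @ look_dir) / len(mic_xyz)  # matched weights

    def power(d):
        # Beamformer output power for a plane wave arriving from direction d
        return np.abs(np.vdot(w, np.exp(-1j * k * mic_xyz @ d))) ** 2

    noise = np.mean([power(d) for d in noise_dirs])
    return 10.0 * np.log10(power(look_dir) / noise)
```

    Evaluating this directly over a dense grid of look directions is exactly the costly step that the proposed geometry descriptors are designed to avoid.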

    Speech Enhancement using Multiple Transducers

    No full text
    In this thesis, three speech enhancement techniques are investigated, with applications in extreme noise environments. Various beamforming techniques are evaluated for their performance in terms of signal to (distant) noise ratio and tolerance to design imperfections. Two suitable designs with contrasting performance characteristics are identified: the second-order differential array, with excellent noise rejection but poor robustness, and a least-squares design, with adequate noise rejection and good robustness. Adaptive filters are introduced in the context of a simple noise canceller and later as a post-processor for a dual-beamformer system. Modifications to the least mean squares (LMS) filter are introduced to tolerate cross-talk between microphones or beamformer outputs. An adaptive-filter-based post-processor beamforming system is designed and evaluated in a simulation involving speech in noisy environments. The beamforming methods developed are combined with the modified LMS adaptive filter to further reduce noise (where possible) based on correlations between the noise signals in a beamformer directed towards the talker and a complementary beamformer (nullformer) directed away from the talker. This system shows small but not insignificant improvements in noise reduction over purely beamforming-based methods. Blind source separation is introduced briefly as a potential future method for enhancing speech in noisy environments. The FastICA algorithm is evaluated on existing data sets and found to perform similarly to the post-processing system developed in this thesis. Future avenues of research in this field are highlighted.
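
    The simple noise canceller mentioned above is conventionally built around a (normalized) LMS filter; a minimal sketch follows. The thesis' cross-talk-tolerant modifications are not reproduced here, so this is the textbook single-reference canceller only.

```python
import numpy as np

def nlms_cancel(primary, reference, order=64, mu=0.5, eps=1e-8):
    """Normalized LMS noise canceller: adapts an FIR filter so that the
    filtered noise `reference` matches the noise in `primary`; the error
    signal that remains is the enhanced speech estimate."""
    w = np.zeros(order)              # adaptive filter taps
    x = np.zeros(order)              # sliding reference buffer
    out = np.zeros(len(primary))
    for n in range(len(primary)):
        x = np.roll(x, 1)
        x[0] = reference[n]
        e = primary[n] - w @ x                   # enhanced output sample
        w += mu * e * x / (x @ x + eps)          # normalized LMS update
        out[n] = e
    return out
```

    In the dual-beamformer arrangement described above, `primary` would be the talker-directed beamformer output and `reference` the nullformer output.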

    Towards Unified All-Neural Beamforming for Time and Frequency Domain Speech Separation

    Full text link
    Recently, frequency domain all-neural beamforming methods have achieved remarkable progress for multichannel speech separation. In parallel, the integration of time domain network structures and beamforming has also gained significant attention. This study proposes a novel all-neural beamforming method in the time domain and attempts to unify the all-neural beamforming pipelines for time domain and frequency domain multichannel speech separation. The proposed model consists of two modules, separation and beamforming, both of which perform temporal-spectral-spatial modeling and are trained end-to-end with a joint loss function. The novelty of this study is twofold. First, a time domain directional feature conditioned on the direction of the target speaker is proposed, which can be jointly optimized within the time domain architecture to enhance target signal estimation. Second, an all-neural beamforming network in the time domain is designed to refine the pre-separated results. This module features parametric time-variant beamforming coefficient estimation, without explicitly following the derivation of optimal filters that may impose an upper bound on performance. The proposed method is evaluated on simulated reverberant overlapped speech data derived from the AISHELL-1 corpus. Experimental results demonstrate significant performance improvements over frequency domain state-of-the-art methods, ideal magnitude masks and existing time domain neural beamforming methods.
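
    The network architecture itself is beyond a short sketch, but the application step implied by "parametric time-variant beamforming coefficient estimation" is a per-frame filter-and-sum. The numpy sketch below shows only that step, under the simplifying assumption that the frame hop equals the tap length; in the paper, the coefficient tensor would be produced by the beamforming module.

```python
import numpy as np

def filter_and_sum(x, w):
    """Apply per-frame multichannel FIR beamforming taps and sum channels.

    x: (C, T) multichannel waveform.
    w: (F, C, L) taps, one (C, L) filter set per frame; frame hop == L here.
    """
    C, T = x.shape
    F, _, L = w.shape
    y = np.zeros(T)
    for f in range(F):
        start = f * L
        seg = x[:, start:start + L]
        if seg.shape[1] < L:
            break                                # drop incomplete tail frame
        for c in range(C):
            y[start:start + L] += np.convolve(seg[c], w[f, c], mode="same")
    return y
```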

    Spatial dissection of a soundfield using spherical harmonic decomposition

    Get PDF
    A real-world soundfield is often a mixture of multiple desired and undesired sound sources. The performance of many acoustic systems, such as automatic speech recognition, audio surveillance and teleconferencing, relies on the ability to extract the desired sound components in such a mixed environment. The existing solutions to this problem are constrained by various fundamental limitations and require enforcing different priors depending on acoustic conditions such as reverberation and the spatial distribution of sound sources. With the growing emphasis on and integration of audio applications in diverse technologies such as smart home and virtual reality appliances, it is imperative to advance source separation technology in order to overcome the limitations of the traditional approaches. To that end, we exploit the harmonic decomposition model to dissect a mixed soundfield into its underlying desired and undesired components based on source and signal characteristics. By analysing the spatial projection of a soundfield, we achieve multiple outcomes: (i) soundfield separation with respect to distinct source regions, (ii) source separation in a mixed soundfield using a modal coherence model, and (iii) direction of arrival (DOA) estimation of multiple overlapping sound sources through pattern recognition of the modal coherence of a soundfield. We first employ an array of higher order microphones for soundfield separation in order to reduce hardware requirements and implementation complexity. Subsequently, we develop novel mathematical models for the modal coherence of noisy and reverberant soundfields that facilitate convenient ways of estimating DOA and power spectral densities, leading to robust source separation algorithms. The modal domain approach to soundfield/source separation allows us to circumvent several practical limitations of the existing techniques and enhances the performance and robustness of the system. The proposed methods are presented with several practical applications and performance evaluations using simulated and real-life datasets.
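
    To make the decomposition step concrete: pressure sampled on a sphere can be projected onto the spherical harmonic basis by least squares, as in the scipy sketch below. The plain least-squares fit and the function name are illustrative assumptions; a practical higher order microphone array would instead apply its own quadrature weights.

```python
import numpy as np
from scipy.special import sph_harm

def sh_decompose(pressure, azimuth, colatitude, order):
    """Least-squares spherical harmonic coefficients of complex pressure
    samples taken at directions (azimuth, colatitude) on a sphere."""
    Y = np.column_stack([
        sph_harm(m, n, azimuth, colatitude)   # scipy order: (m, n, theta, phi)
        for n in range(order + 1)
        for m in range(-n, n + 1)
    ])
    coeffs, *_ = np.linalg.lstsq(Y, pressure, rcond=None)
    return coeffs                             # (order + 1)**2 modal weights
```

    The resulting modal coefficients are the quantities whose coherence the proposed DOA estimation and source separation methods operate on.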

    Sensory Communication

    Get PDF
    Contains table of contents for Section 2, an introduction, reports on nine research projects and a list of publications.
    National Institutes of Health Grant 5 R01 DC00117
    National Institutes of Health Grant 2 R01 DC00270
    National Institutes of Health Grant 1 P01 DC00361
    National Institutes of Health Grant 2 R01 DC00100
    National Institutes of Health Grant FV00428
    National Institutes of Health Grant 5 R01 DC00126
    U.S. Air Force - Office of Scientific Research Grant AFOSR 90-200
    U.S. Navy - Office of Naval Research Grant N00014-90-J-1935
    National Institutes of Health Grant 5 R29 DC0062