
    SoundCompass: a distributed MEMS microphone array-based sensor for sound source localization

    Sound source localization is a well-researched subject, with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to a growing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources because they rely on interpolation between a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of an array of 52 microelectromechanical-system (MEMS) microphones, an inertial measurement unit and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass's hardware and firmware design, together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25-m² anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000-m² open field.
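Measuring "sound field directionality" with a microphone array is classically done by steering a beamformer over candidate directions and picking the direction of maximum output power. The sketch below is a minimal frequency-domain delay-and-sum scan over a hypothetical 4-microphone planar array; the array geometry, signal, and function names are illustrative assumptions, not the SoundCompass's actual 52-microphone layout or fusion technique.

```python
import numpy as np

def delay_and_sum_doa(signals, mic_xy, fs, c=343.0, n_angles=360):
    """Estimate source azimuth (degrees) by steering a delay-and-sum
    beamformer over candidate angles and picking the max output power."""
    angles = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    powers = np.empty(n_angles)
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    for i, a in enumerate(angles):
        u = np.array([np.cos(a), np.sin(a)])      # candidate source direction
        delays = mic_xy @ u / c                   # per-mic time advance
        # Undo each mic's phase advance, then sum channels coherently.
        steered = spectra * np.exp(-2j * np.pi * freqs * delays[:, None])
        powers[i] = np.sum(np.abs(steered.sum(axis=0)) ** 2)
    return np.degrees(angles[np.argmax(powers)])

# Simulate a 1 kHz tone arriving from 60 degrees at a 4-mic cross array.
fs, c = 16000, 343.0
mic_xy = np.array([[0.05, 0.0], [-0.05, 0.0], [0.0, 0.05], [0.0, -0.05]])
true_az = np.radians(60.0)
u = np.array([np.cos(true_az), np.sin(true_az)])
t = np.arange(1024) / fs
signals = np.vstack([np.sin(2 * np.pi * 1000 * (t + (m @ u) / c)) for m in mic_xy])
print(int(round(delay_and_sum_doa(signals, mic_xy, fs))))  # 60
```

With only four microphones the beam is broad; a dense array such as the one described above sharpens the spatial response considerably.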

    ChordMics: Acoustic Signal Purification with Distributed Microphones

    Acoustic signals are an essential input to many systems. However, a pure acoustic signal is very difficult to extract, especially in noisy environments. Existing beamforming systems can extract signals transmitted from certain directions; however, because their microphones are centrally deployed, these systems have limited coverage and low spatial resolution. We overcome the above limitations and present ChordMics, a distributed beamforming system. By leveraging the spatial diversity of distributed microphones, ChordMics is able to extract the acoustic signal from arbitrary points. To realize such a system, we further address the fundamental challenge in distributed beamforming: aligning the signals captured by distributed and unsynchronized microphones. We implement ChordMics and evaluate its performance under both LOS and NLOS scenarios. The evaluation results show that ChordMics delivers higher SINR than a centralized microphone array, with an average performance gain of up to 15 dB.
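The alignment problem named above — registering signals from unsynchronized devices — is conventionally attacked with cross-correlation. The following is a minimal sketch of that standard approach for integer-sample offsets only (the function name and test setup are illustrative, not ChordMics's actual algorithm, which must also handle sub-sample and drifting offsets):

```python
import numpy as np

def sample_lag(ref, other):
    """Return the integer number of samples by which `other` lags `ref`
    (negative if `other` leads), via full cross-correlation."""
    corr = np.correlate(other, ref, mode="full")
    return int(np.argmax(corr) - (len(ref) - 1))

# Two unsynchronized "microphones" capturing the same white-noise source;
# the second device starts recording 25 samples late.
rng = np.random.default_rng(0)
src = rng.standard_normal(4000)
mic_a = src
mic_b = np.concatenate([np.zeros(25), src[:-25]])
print(sample_lag(mic_a, mic_b))  # 25
```

Once the lag is known, the late channel can be shifted before beamforming so that both channels add coherently.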

    Generalized DOA and Source Number Estimation Techniques for Acoustics and Radar

    The purpose of this thesis is to highlight underdeveloped areas in the field of direction-of-arrival (DOA) estimation and to propose building blocks for continued solution development in the area. Current methods are reviewed and their pitfalls emphasized. DOA estimators are compared for use on a conformal microphone array that receives impulsive, wideband signals. Further, many DOA estimators rely on knowing the number of source signals prior to DOA estimation. Though techniques exist to estimate this number, they lack robustness for certain signal types, particularly when multiple radar targets occupy the same range bin. A deep neural network approach is proposed and evaluated for this particular case. The studies detailed in this thesis are specific to acoustic and radar applications of DOA estimation.
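The classical source-number estimators that a learned approach would be compared against are information-theoretic criteria on the eigenvalues of the spatial covariance matrix. A minimal sketch of the MDL criterion follows, using a simulated half-wavelength uniform linear array rather than the thesis's conformal array; all names and parameters here are illustrative assumptions.

```python
import numpy as np

def mdl_source_count(X):
    """Estimate the number of sources from array snapshots X (p x N) with
    the Minimum Description Length criterion on the eigenvalues of the
    sample spatial covariance matrix."""
    p, N = X.shape
    R = X @ X.conj().T / N
    lam = np.sort(np.linalg.eigvalsh(R))[::-1]        # descending, real
    costs = []
    for k in range(p):
        tail = lam[k:]                                # presumed noise eigenvalues
        geo = np.exp(np.mean(np.log(tail)))           # geometric mean
        arith = np.mean(tail)                         # arithmetic mean
        costs.append(-N * (p - k) * np.log(geo / arith)
                     + 0.5 * k * (2 * p - k) * np.log(N))
    return int(np.argmin(costs))

# Simulate an 8-element half-wavelength ULA observing two sources.
rng = np.random.default_rng(1)
p, N = 8, 200
angles = np.radians([-20.0, 35.0])
n = np.arange(p)
A = np.exp(-1j * np.pi * np.outer(n, np.sin(angles)))       # steering matrix
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
noise = 0.1 * (rng.standard_normal((p, N)) + 1j * rng.standard_normal((p, N)))
print(mdl_source_count(A @ S + noise))  # 2
```

This eigenvalue-gap reasoning is exactly what breaks down for correlated or impulsive signals, which motivates the neural alternative the thesis proposes.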

    Informed Sound Source Localization for Hearing Aid Applications


    3D sound field analysis using circular higher-order microphone array

    This paper proposes the theory and design of circular higher-order microphone arrays for 3D sound field analysis using spherical harmonics. By employing the spherical harmonic translation theorem, the local spatial sound fields recorded by each higher-order microphone placed in the circular arrays are combined to form the sound field information of a large global spherical region. The proposed design reduces the number of required sampling points and the geometrical complexity of the microphone arrays. We develop a two-step method to calculate sound field coefficients using the proposed array structure: (i) analytically combine local sound field coefficients on each circular array, and (ii) solve for global sound field coefficients using the data from the first step. Simulation and experimental results show that the proposed array is capable of acquiring the full 3D sound field information over a relatively large spherical region with good accuracy and computational simplicity. This work was supported under the Australian Research Council's Discovery Projects funding scheme (project no. DP140103412).
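Spherical-harmonic sound field analysis rests on expansions such as the textbook plane-wave (Rayleigh) expansion, in which spherical Bessel functions carry the radial dependence of the field coefficients. The sketch below numerically verifies a truncated expansion; it is the standard identity only, not the paper's translation-theorem machinery, and the order/argument choices are illustrative.

```python
import numpy as np
from scipy.special import spherical_jn, eval_legendre

def plane_wave_sh(kr, cos_gamma, order):
    """Truncated spherical expansion of a plane wave:
    exp(i*kr*cos_gamma) = sum_l i^l (2l+1) j_l(kr) P_l(cos_gamma)."""
    total = 0j
    for l in range(order + 1):
        total += (1j ** l) * (2 * l + 1) \
                 * spherical_jn(l, kr) * eval_legendre(l, cos_gamma)
    return total

# At kr = 2 a handful of terms already reproduce the field accurately,
# which is why a finite set of coefficients suffices over a bounded region.
kr, cg = 2.0, 0.3
exact = np.exp(1j * kr * cg)
approx = plane_wave_sh(kr, cg, order=15)
print(abs(exact - approx) < 1e-8)  # True
```

The rapid decay of j_l(kr) for l > kr is the reason a region of radius r only needs modes up to roughly order kr, which underlies the sampling-point savings claimed above.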

    Sample Drop Detection for Distant-speech Recognition with Asynchronous Devices Distributed in Space

    In many applications of multi-microphone, multi-device processing, synchronization among different input channels can be affected by the lack of a common clock and by isolated drops of samples. In this work, we address the issue of sample drop detection in the context of a conversational speech scenario recorded by a set of microphones distributed in space. The goal is to design a neural model that, given a short window in the time domain, detects whether one or more devices have been subjected to a sample drop event. The candidate time windows are selected from a set of large time intervals, possibly including a sample drop, by a preprocessing step based on normalized cross-correlation between signals acquired by different devices. The architecture of the neural network relies on a CNN-LSTM encoder followed by multi-head attention. The experiments are conducted using both artificial and real data. Our proposed approach obtained an F1 score of 88% on an evaluation set extracted from the CHiME-5 corpus. Comparable performance was found in a larger set of experiments conducted on a set of multi-channel artificial scenes.
    Comment: Submitted to ICASSP 202
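The normalized-cross-correlation preprocessing described above can be sketched as follows: track the best-matching lag between two devices over successive windows, and flag positions where that lag jumps. The two-device setup, window length, and lag range here are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def best_lag(a, b, max_lag):
    """Lag (in samples) maximizing the normalized cross-correlation of two
    equal-length windows, searched over [-max_lag, max_lag]."""
    best, arg = -np.inf, 0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[lag:], b[:len(b) - lag]
        else:
            x, y = a[:len(a) + lag], b[-lag:]
        # Pearson-style normalization makes the score gain-invariant.
        ncc = np.dot(x - x.mean(), y - y.mean()) / (len(x) * x.std() * y.std())
        if ncc > best:
            best, arg = ncc, lag
    return arg

# Device B drops 30 samples halfway through an otherwise shared recording.
rng = np.random.default_rng(2)
src = rng.standard_normal(8000)
dev_a = src
dev_b = np.concatenate([src[:4000], src[4030:], np.zeros(30)])

# The inter-device lag jumps from 0 to 30 past the drop point.
print(best_lag(dev_a[:2000], dev_b[:2000], 50),
      best_lag(dev_a[5000:7000], dev_b[5000:7000], 50))  # 0 30
```

A lag discontinuity like this marks a candidate window, which would then be passed to the neural classifier for confirmation.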