
    Acoustic Impulse Responses for Wearable Audio Devices

    We present an open-access dataset of over 8000 acoustic impulse responses from 160 microphones spread across the body and affixed to wearable accessories. The data can be used to evaluate audio capture and array processing systems using wearable devices such as hearing aids, headphones, eyeglasses, jewelry, and clothing. We analyze the acoustic transfer functions of different parts of the body, measure the effects of clothing worn over microphones, compare measurements from a live human subject to those from a mannequin, and simulate the noise-reduction performance of several beamformers. The results suggest that arrays of microphones spread across the body are more effective than those confined to a single device. Comment: To appear at ICASSP 201
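
    The beamformer comparison above rests on aligning and combining the microphone signals. As a hedged illustration (not the paper's evaluation code), the sketch below implements a basic frequency-domain delay-and-sum beamformer for body-worn microphones at arbitrary positions; the free-field plane-wave model and all names are assumptions.

    import numpy as np

    def delay_and_sum(signals, mic_positions, direction, fs, c=343.0):
        """Steer an array toward `direction` by aligning channels and averaging.

        signals:       (n_mics, n_samples) time-domain microphone signals
        mic_positions: (n_mics, 3) coordinates in meters
        direction:     (3,) unit vector pointing toward the source
        fs:            sample rate in Hz
        """
        n_mics, n_samples = signals.shape
        # A plane wave from `direction` reaches each mic earlier by (p . d) / c
        # relative to the origin; delaying by that amount aligns the channels.
        delays = mic_positions @ direction / c
        freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
        spectra = np.fft.rfft(signals, axis=1)
        # Fractional-sample delays applied as phase shifts per frequency bin.
        aligned = spectra * np.exp(-2j * np.pi * freqs * delays[:, None])
        return np.fft.irfft(aligned.mean(axis=0), n=n_samples)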

    SoundCompass: a distributed MEMS microphone array-based sensor for sound source localization

    Sound source localization is a well-researched subject with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to an increasing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources because they rely on the interpolation of a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of a sensor array of 52 microelectromechanical-system (MEMS) microphones, an inertial measurement unit and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass's hardware and firmware design together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25 m² anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000 m² open field.
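
    Measuring sound field directionality with such an array typically amounts to scanning candidate directions and selecting the one with the highest steered response power (SRP). Below is a minimal sketch of that idea for a uniform circular array; the SoundCompass's actual FPGA pipeline is not reproduced here, and the geometry, names, and parameters are assumptions.

    import numpy as np

    def srp_azimuth(signals, fs, radius, c=343.0, n_angles=360):
        """Return the azimuth (radians) maximizing the steered response power
        of a uniform circular array; signals is (n_mics, n_samples)."""
        n_mics, n_samples = signals.shape
        mic_angles = 2 * np.pi * np.arange(n_mics) / n_mics
        mic_xy = radius * np.stack([np.cos(mic_angles), np.sin(mic_angles)], axis=1)
        spectra = np.fft.rfft(signals, axis=1)
        freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
        best_angle, best_power = 0.0, -np.inf
        for theta in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
            d = np.array([np.cos(theta), np.sin(theta)])
            # Align channels for a plane wave arriving from azimuth theta...
            delays = mic_xy @ d / c
            steered = spectra * np.exp(-2j * np.pi * freqs * delays[:, None])
            # ...and score the direction by the power of the summed output.
            power = np.sum(np.abs(steered.sum(axis=0)) ** 2)
            if power > best_power:
                best_angle, best_power = theta, power
        return best_angle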

    Shooter localization and weapon classification with soldier-wearable networked sensors

    The paper presents a wireless sensor network-based mobile countersniper system. A sensor node consists of a helmet-mounted microphone array, a COTS MICAz mote for inter-node communication and a custom sensor board that implements the acoustic detection and Time of Arrival (ToA) estimation algorithms on an FPGA. A 3-axis compass provides self-orientation, and Bluetooth is used for communication with the soldier's PDA running the data fusion and the user interface. The heterogeneous sensor fusion algorithm can work with data from a single sensor, or it can fuse ToA or Angle of Arrival (AoA) observations of muzzle blasts and ballistic shockwaves from multiple sensors. The system estimates the trajectory, the range, the caliber and the weapon type. The paper presents the system design and the results from an independent evaluation at the US Army Aberdeen Test Center. The system performance is characterized by 1-degree trajectory precision, over 95% caliber estimation accuracy for all shots, and close to 100% weapon estimation accuracy for 4 out of 6 guns tested.
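
    Stripped to its core, fusing ToA observations of the muzzle blast is a nonlinear least-squares problem: find the source position (and the unknown firing time) whose predicted arrival times best match the measurements. The sketch below shows only that step, under idealized assumptions (synchronized clocks, constant speed of sound, direct paths); the system's full fusion of shockwave and AoA data is considerably richer, and all names are illustrative.

    import numpy as np
    from scipy.optimize import least_squares

    def locate_muzzle_blast(sensor_positions, toas, c=343.0):
        """Estimate the source position from ToAs at known sensor positions.

        sensor_positions: (n, 3) sensor coordinates in meters
        toas:             (n,) measured arrival times in seconds
        """
        def residuals(params):
            pos, t0 = params[:3], params[3]
            # Predicted arrival time: firing time plus propagation delay.
            predicted = t0 + np.linalg.norm(sensor_positions - pos, axis=1) / c
            return predicted - toas

        # Initialize at the sensor centroid and the earliest arrival.
        x0 = np.concatenate([sensor_positions.mean(axis=0), [np.min(toas)]])
        return least_squares(residuals, x0).x[:3]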

    Spherical Harmonic Decomposition of a Sound Field Based on Microphones Around the Circumference of a Human Head

    We present a method for decomposing a sound field into spherical harmonics (SH) based on observations of the sound field around the circumference of a human head. The method is based on the analytical solution for observations of the sound field along the equator of a rigid sphere that we presented recently. The present method incorporates a calibration stage in which the microphone signals for sound sources at a suitable set of calibration positions are projected onto the SH decomposition of the same sound field on the surface of a notional rigid sphere by means of a linear filtering operation. The filter coefficients are computed from the calibration data via a least-squares fit. We present an evaluation of the method based on binaural rendering of numerically simulated signals for an array of 18 microphones providing 8th-order SH signals, which demonstrates the method's effectiveness.
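
    The calibration stage described above can be read as an independent least-squares fit per frequency bin: find the matrix that maps the microphone spectra measured for the calibration sources onto the known SH coefficients of the same sound fields on the notional rigid sphere. A hedged sketch of that fit follows; the matrix shapes, names, and the absence of regularization are assumptions rather than the paper's exact formulation.

    import numpy as np

    def fit_sh_filters(mic_spectra, sh_targets):
        """Fit C so that C @ mic_spectra ~= sh_targets, per frequency bin.

        mic_spectra: (n_bins, n_mics, n_calib) spectra for calibration sources
        sh_targets:  (n_bins, n_sh, n_calib) SH coefficients on the rigid sphere
        returns:     (n_bins, n_sh, n_mics) filter matrix per frequency bin
        """
        n_bins, n_mics, _ = mic_spectra.shape
        n_sh = sh_targets.shape[1]
        C = np.zeros((n_bins, n_sh, n_mics), dtype=complex)
        for k in range(n_bins):
            # Least-squares solve of mic_spectra[k].T @ C[k].T ~= sh_targets[k].T.
            C[k] = np.linalg.lstsq(mic_spectra[k].T, sh_targets[k].T, rcond=None)[0].T
        return C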

    How wearing headgear affects measured head-related transfer functions

    The spatial representation of sound sources is an essential element of virtual acoustic environments (VAEs). When determining the sound incidence direction, the human auditory system evaluates monaural and binaural cues, which are caused by the shape of the pinna and the head. While spectral information is the most important cue for the elevation of a sound source, we use differences between the signals reaching the left and the right ear for lateral localization. These binaural differences manifest as interaural time differences (ITDs) and interaural level differences (ILDs). In many headphone-based VAEs, head-related transfer functions (HRTFs) are used to describe the sound incidence from a source to the left and right ear, thus integrating both the monaural and binaural cues. Specific aspects, such as the individual shape of the head and the outer ears (e.g. Bomhardt, 2017), of the torso (Brinkmann et al., 2015), and probably even of headgear (Wersenyi, 2005; Wersenyi, 2017), influence the HRTFs and thus probably localization and other perceptual attributes as well.

    Generally speaking, spatial cues are modified by headgear, for example a baseball cap, a bicycle helmet, or a head-mounted display, the last of which is nowadays often used in VR applications. In many real-life situations, however, good localization performance is important while wearing such items, e.g. in order to notice approaching vehicles when cycling. Furthermore, when performing psychoacoustic experiments in mixed-reality applications using head-mounted displays, the influence of the head-mounted display on the HRTFs must be considered. Effects of an HTC Vive head-mounted display on localization performance have already been shown by Ahrens et al. (2018). To analyze the influence of headgear for varying directions of incidence, measurements of HRTFs on a dense spherical sampling grid are required. However, HRTF measurements of a dummy head with various headgear are still rare, and to our knowledge only one dataset, measured for an HTC Vive on a sparse grid with 64 positions, is freely accessible (Ahrens, 2018).

    This work presents high-density HRTF measurement data from a Neumann KU100 and a HEAD acoustics HMS II.3 dummy head, equipped with either a bicycle helmet, a baseball cap, an Oculus Rift head-mounted display, or a set of extra-aural AKG K1000 headphones. For the measurements, we used the VariSphear measurement system (Bernschütz, 2010), allowing precise positioning of the dummy head at the spatial sampling positions. The various HRTF sets were captured on a full spherical Lebedev grid with 2702 points.

    In our study, we analyze the measured datasets in terms of their spectra and binaural cues, assess their localization performance using localization models, and compare the results to reference measurements of the dummy heads without headgear. The results show that differences from the reference vary significantly depending on the type of headgear. Regarding the ITDs and ILDs, the analysis reveals the strongest influence for the AKG K1000. While for the Oculus Rift head-mounted display the ITDs and ILDs are mainly affected for frontal directions, only a very weak influence of the bicycle helmet and the baseball cap on ITDs and ILDs was observed. For the spectral differences from the reference, the results show the largest deviations for the AKG K1000 and the smallest for the Oculus Rift and the baseball cap. Furthermore, we analyzed for which incidence directions the spectrum is influenced most by each type of headgear. For the Oculus Rift and the baseball cap, the strongest deviations were found for contralateral sound incidence. For the bicycle helmet, the most affected directions are also contralateral, but shifted upwards in elevation. Finally, the AKG K1000 headphones generally have the strongest influence on the measured HRTFs, which becomes maximal for sound incidence from behind.

    The results of this study are relevant for applications where headgear is worn and localization or other aspects of spatial hearing are considered. This could be the case, for example, in mixed-reality applications where natural sound sources are presented while the listener wears a head-mounted display, or when investigating localization performance in situations where headgear is used, e.g. in sports activities. An important intention of this study, however, is to provide a freely available database of HRTF sets that is well suited for auralization purposes and allows further investigation of the influence of headgear on auditory perception. The HRTF sets will be publicly available in the SOFA format under a Creative Commons CC BY-SA 4.0 license.
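
    The ITD and ILD comparisons above can be reproduced in their most basic form from a measured head-related impulse response (HRIR) pair: ITD from the cross-correlation peak, ILD from the broadband energy ratio. The sketch below shows only that simplified version; the study's analysis and its localization models are more elaborate, and the names are illustrative.

    import numpy as np

    def itd_ild(hrir_left, hrir_right, fs):
        """Estimate broadband ITD (seconds) and ILD (dB) from an HRIR pair.

        Positive ITD/ILD mean the left ear leads / is louder.
        """
        n = len(hrir_left)
        # Lag of the cross-correlation maximum gives the time offset.
        xcorr = np.correlate(hrir_left, hrir_right, mode="full")
        lag = np.arange(-(n - 1), n)[np.argmax(xcorr)]
        itd = -lag / fs
        # Ratio of total energies gives a broadband level difference.
        ild = 10 * np.log10(np.sum(hrir_left**2) / np.sum(hrir_right**2))
        return itd, ild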

    Proceedings of the EAA Spatial Audio Signal Processing Symposium: SASP 2019


    Design Considerations When Accelerating an FPGA-Based Digital Microphone Array for Sound-Source Localization

    The use of microphone arrays for sound-source localization is a well-researched topic. The response of such a sensor array depends on the number of microphones in the array. A higher number of microphones, however, increases the computational demand, making real-time response challenging. In this paper, we present a Filter-and-Sum based architecture and several acceleration techniques to provide accurate sound-source localization in real time. Experiments demonstrate that accurate sound-source localization is obtained within a couple of milliseconds, independently of the number of microphones. Finally, we also propose different strategies to further accelerate the sound-source localization while offering increased angular resolution.
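
    For reference, Filter-and-Sum beamforming generalizes delay-and-sum: each microphone signal passes through its own FIR filter before the channels are summed, so pure-delay taps recover plain delay-and-sum. A minimal sketch follows; the paper's FPGA implementation pipelines this computation across steering directions, and the taps here are placeholders rather than its coefficients.

    import numpy as np

    def filter_and_sum(signals, fir_taps):
        """Filter-and-Sum beamformer.

        signals:  (n_mics, n_samples) channel data
        fir_taps: (n_mics, n_taps) one FIR filter per microphone
        returns:  (n_samples + n_taps - 1,) beamformed output
        """
        n_taps = fir_taps.shape[1]
        out = np.zeros(signals.shape[1] + n_taps - 1)
        for mic_signal, taps in zip(signals, fir_taps):
            out += np.convolve(mic_signal, taps)  # per-channel FIR filtering
        return out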

    Development and Human Factors Evaluation of a Portable Auditory Localization Acclimation Training System

    Auditory situation awareness (ASA) is essential for safety and survivability in military operations, where many of the hazards are not immediately visible. Unfortunately, the hearing protection devices (HPDs) required to operate in these environments can impede auditory localization performance. Promisingly, recent studies have demonstrated the plasticity of the human auditory system by showing that training can improve auditory localization ability while wearing HPDs, including military Tactical Communications and Protective Systems (TCAPS). As a result, the U.S. military identified the need for a portable system capable of imparting auditory localization skills at levels similar to those demonstrated in laboratory environments. The purpose of this investigation was to develop and validate a Portable Auditory Localization Acclimation Training (PALAT) system equipped with an improved training protocol against a proven laboratory-grade system referred to as the DRILCOM system, and subsequently to evaluate the transfer-of-training benefit in a field environment. In Phase I, a systems decision process was used to develop a prototype PALAT system consisting of an expandable frame housing 32 loudspeakers operated by a user-controlled tablet computer, capable of reproducing acoustically accurate localization cues similar to the DRILCOM system. Phase II used a within-subjects human factors experiment to validate whether the PALAT system could impart auditory localization training benefits similar to those of the DRILCOM system. Results showed no significant difference between the two localization training systems at each stage of training or in training rates for the open ear and with two TCAPS devices. The PALAT system also demonstrated the ability to detect differences in localization accuracy between listening conditions in the same manner as the DRILCOM system. Participant ratings indicated no perceived difference in localization training benefit but showed a significant preference for the PALAT system's user interface, which was specifically designed to improve usability and meet the requirements of a user-operable system. Phase III evaluated the transfer of the training benefit imparted by the PALAT system, which uses a broadband stimulus, to a field environment using gunshot stimuli. Training with the open ear and with in-the-ear TCAPS resulted in significant differences between the trained and untrained groups from in-office pretest to in-field posttest.
