The influences of environmental conditions on source localisation using a single vertical array and their exploitation through ground effect inversion
The performance of microphone arrays outdoors is influenced by the environmental conditions. Numerical simulations indicate that, while horizontal arrays are hardly affected, direction-of-arrival (DOA) estimation with vertical arrays becomes biased in the presence of ground reflections and sound speed gradients. Turbulence leads to a large variability in the estimates by reducing the ground effect. The ground effect can be exploited by combining classical source localization with an appropriate propagation model (ground effect inversion). Not only does this allow the source elevation and range to be determined with a single vertical array, but it also allows the separation of sources that can no longer be distinguished by far-field localization methods. Furthermore, simulations detail the achievable spatial resolution as a function of frequency range, array size, and localization algorithm, and show a clear advantage of broadband processing. Outdoor measurements with one or two sources confirm the results of the numerical simulations.
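The abstract gives no implementation detail, but the classical far-field localization step it builds on can be illustrated with a minimal sketch: a two-element vertical array estimating source elevation from the inter-microphone time delay via cross-correlation. All names and parameter values below are hypothetical, and the free-field assumption is exactly the idealization the abstract shows to be violated outdoors by ground reflections and refraction.

```python
import numpy as np

C = 343.0   # assumed constant speed of sound (m/s), i.e. no gradients
FS = 48000  # sample rate (Hz)
D = 0.5     # vertical spacing between the two microphones (m)

def estimate_elevation(sig_top, sig_bottom):
    """Estimate source elevation (degrees) from the delay between two
    vertically separated microphones, free-field plane-wave assumption."""
    # For a source above the horizon the top microphone is closer, so the
    # bottom channel lags; find that lag by cross-correlation.
    corr = np.correlate(sig_bottom, sig_top, mode="full")
    lag = np.argmax(corr) - (len(sig_top) - 1)
    tau = lag / FS                          # delay in seconds
    # Far-field geometry: tau = D * sin(elevation) / C.
    return np.degrees(np.arcsin(np.clip(tau * C / D, -1.0, 1.0)))

# Synthetic broadband source arriving from ~20 degrees elevation.
rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)
true_el = 20.0
delay = int(round(D * np.sin(np.radians(true_el)) / C * FS))
sig_top = noise
sig_bottom = np.roll(noise, delay)          # bottom mic receives the wave later
print(estimate_elevation(sig_top, sig_bottom))
```

Because the delay is quantized to whole samples, the estimate lands near, not exactly on, the true elevation; broadband content is what makes the correlation peak sharp.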
CABE : a cloud-based acoustic beamforming emulator for FPGA-based sound source localization
Microphone arrays are gaining in popularity thanks to the availability of low-cost microphones. Applications including sonar, binaural hearing aid devices, acoustic indoor localization techniques and speech recognition are proposed by several research groups and companies. In most of the available implementations, the microphones utilized are assumed to offer an ideal response in a given frequency domain. Several toolboxes and software packages can be used to obtain the theoretical response of a microphone array with a given beamforming algorithm. However, a tool facilitating the design of a microphone array while taking its non-ideal characteristics into account could not be found. Moreover, generating packages facilitating the implementation on Field Programmable Gate Arrays has, to our knowledge, not been carried out yet. Visualizing the responses in 2D and 3D also poses an engineering challenge. To alleviate these shortcomings, a scalable Cloud-based Acoustic Beamforming Emulator (CABE) is proposed. The non-ideal characteristics of microphones are considered during the computations, and results are validated with acoustic data captured from microphones. It is also possible to generate hardware description language packages containing delay tables, facilitating the implementation of Delay-and-Sum beamformers in embedded hardware. Truncation error analysis can also be carried out for fixed-point signal processing. The effects of disabling a given group of microphones within the microphone array can also be calculated. Results and packages can be visualized with a dedicated client application. Users can create and configure several parameters of an emulation, including sound source placement, the shape of the microphone array, and the required signal processing flow. Depending on the user configuration, 2D and 3D graphs showing the beamforming results, waterfall diagrams, and performance metrics can be generated by the client application. The emulations are also validated with data captured from existing microphone arrays.
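As a hypothetical illustration of the delay tables CABE exports for Delay-and-Sum beamforming on an FPGA, the sketch below computes integer sample delays for far-field steering directions and truncates them the way fixed-point hardware would; the array geometry and all names are assumptions, not CABE's actual interface.

```python
import numpy as np

C = 343.0   # speed of sound (m/s)
FS = 48000  # sampling rate (Hz)

def delay_table(mic_positions, steering_angles_deg):
    """Integer sample delays for a Delay-and-Sum beamformer.

    mic_positions: (M, 2) planar microphone coordinates in metres.
    steering_angles_deg: candidate far-field steering directions.
    Returns an (angles, M) table of non-negative integer delays, the kind
    of table that could be baked into an HDL package.
    """
    angles = np.radians(np.asarray(steering_angles_deg, float))
    # Unit vectors of the assumed plane-wave arrival directions.
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    # Relative arrival time of the wavefront at each microphone (seconds).
    tau = mic_positions @ dirs.T / C                 # shape (M, angles)
    # Shift so all delays are >= 0, then truncate to whole samples
    # (truncation rather than rounding mimics fixed-point hardware).
    samples = (tau - tau.min(axis=0)) * FS
    return np.floor(samples).astype(int).T

# Hypothetical 4-microphone uniform linear array with 5 cm pitch.
mics = np.array([[0.00, 0.0], [0.05, 0.0], [0.10, 0.0], [0.15, 0.0]])
table = delay_table(mics, [0, 45, 90])
print(table)
```

Broadside steering (90 degrees here) needs no delays at all, while end-fire steering needs the largest ones; the truncation step is where the fixed-point error the abstract mentions enters.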
Localization and Rendering of Sound Sources in Acoustic Fields
DisertaÄŤnĂ práce se zabĂ˝vá lokalizacĂ zdrojĹŻ zvuku a akustickĂ˝m zoomem. HlavnĂm cĂlem tĂ©to práce je navrhnout systĂ©m s akustickĂ˝m zoomem, kterĂ˝ pĹ™iblĂžà zvuk jednoho mluvÄŤĂho mezi skupinou mluvÄŤĂch, a to i kdyĹľ mluvĂ souÄŤasnÄ›. Tento systĂ©m je kompatibilnĂ s technikou prostorovĂ©ho zvuku. HlavnĂ pĹ™Ănosy disertaÄŤnĂ práce jsou následujĂcĂ: 1. Návrh metody pro odhad vĂce smÄ›rĹŻ pĹ™icházejĂcĂho zvuku. 2. Návrh metody pro akustickĂ© zoomovánĂ pomocĂ DirAC. 3. Návrh kombinovanĂ©ho systĂ©mu pomocĂ pĹ™edchozĂch krokĹŻ, kterĂ˝ mĹŻĹľe bĂ˝t pouĹľit v telekonferencĂch.This doctoral thesis deals with sound source localization and acoustic zooming. The primary goal of this dissertation is to design an acoustic zooming system, which can zoom the sound of one speaker among multiple speakers even when they speak simultaneously. The system is compatible with surround sound techniques. In particular, the main contributions of the doctoral thesis are as follows: 1. Design of a method for multiple sound directions estimations. 2. Proposing a method for acoustic zooming using DirAC. 3. Design a combined system using the previous mentioned steps, which can be used in teleconferencing.
Reflection-Aware Sound Source Localization
We present a novel, reflection-aware method for 3D sound localization in indoor environments. Unlike prior approaches, which are mainly based on continuous sound signals from a stationary source, our formulation is designed to localize the position instantaneously from signals within a single frame. We consider direct sound and indirect sound signals that reach the microphones after reflecting off surfaces such as ceilings or walls. We then generate and trace direct and reflected acoustic paths using inverse acoustic ray tracing and utilize these paths with Monte Carlo localization to estimate a 3D sound source position. We have implemented our method on a robot with a cube-shaped microphone array and tested it against different settings with continuous and intermittent sound signals and a stationary or a mobile source. Across different settings, our approach can localize the sound with an average distance error of 0.8 m in a room with a 7 m by 7 m floor area and a 3 m ceiling, including with a mobile and non-line-of-sight sound source. We also show that modeling indirect rays increases the localization accuracy by 40% compared to using only direct acoustic rays. Comment: Submitted to ICRA 2018. A video of the working system is available at https://youtu.be/TkQ36lMEC-M.
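The paper's pipeline (inverse ray tracing plus Monte Carlo localization) is not reproduced here, but the core idea — scoring candidate source positions by how well their predicted direct and first-order reflected arrivals explain the observation — can be sketched under strong simplifications: known emission time, absolute times of arrival, microphones spread through the room instead of the paper's cube-shaped array, and a single ceiling reflection modeled with an image source. All positions and names are hypothetical.

```python
import numpy as np

C = 343.0
ROOM = np.array([7.0, 7.0, 3.0])   # 7 m x 7 m floor, 3 m ceiling, as in the paper

def toas(src, mics):
    """Direct and ceiling-reflected times of arrival at each microphone;
    the reflection uses a first-order image source above the ceiling."""
    src = np.asarray(src, float)
    image = src * np.array([1.0, 1.0, -1.0]) + np.array([0.0, 0.0, 2 * ROOM[2]])
    direct = np.linalg.norm(mics - src, axis=1) / C
    reflected = np.linalg.norm(mics - image, axis=1) / C
    return np.concatenate([direct, reflected])

rng = np.random.default_rng(1)
# Hypothetical microphone positions spread through the room.
mics = np.array([[1.0, 1.0, 1.0], [6.0, 1.0, 1.5], [1.0, 6.0, 1.5], [6.0, 6.0, 1.0]])
true_src = np.array([5.0, 2.0, 1.5])
observed = toas(true_src, mics)

# Monte Carlo localization: sample candidate source positions uniformly in the
# room and keep the one whose predicted arrivals best match the observation.
particles = rng.uniform([0.0, 0.0, 0.0], ROOM, size=(20000, 3))
costs = np.array([np.sum((toas(p, mics) - observed) ** 2) for p in particles])
estimate = particles[np.argmin(costs)]
print(np.round(estimate, 2))
```

With enough particles the best-scoring candidate lands close to the true source; including the reflected arrivals is what keeps the estimate constrained when a direct path is occluded, which is the paper's central point.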
Pyroomacoustics: A Python package for audio room simulations and array processing algorithms
We present pyroomacoustics, a software package aimed at the rapid development and testing of audio array processing algorithms. The content of the package can be divided into three main components: an intuitive Python object-oriented interface to quickly construct different simulation scenarios involving multiple sound sources and microphones in 2D and 3D rooms; a fast C implementation of the image source model for general polyhedral rooms to efficiently generate room impulse responses and simulate the propagation between sources and receivers; and finally, reference implementations of popular algorithms for beamforming, direction finding, and adaptive filtering. Together, they form a package with the potential to speed up the time to market of new algorithms by significantly reducing the implementation overhead in the performance evaluation step. Comment: 5 pages, 5 figures, describes a software package.
Supervised Control of a Flying Performing Robot using its Intrinsic Sound
We present the current results of our ongoing research in achieving efficient control of a flying robot for a wide variety of possible applications. A lightweight small indoor helicopter has been equipped with an embedded system and relatively simple sensors to achieve autonomous stable flight. The controllers have been tuned using genetic algorithms to further enhance flight stability. A number of additional sensors would need to be attached to the helicopter to enable it to sense more of its environment, such as its current location or the locations of obstacles like the walls of the room it is flying in. The lightweight nature of the helicopter severely restricts the number of sensors that can be attached to it. We propose utilising the intrinsic sound signature of the helicopter to locate it and to extract features about its current state, using another supervising robot. The analysed information is then sent back to the helicopter over an uplink, enabling it to further stabilise its flight and correct its position and flight path without the need for additional sensors.
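The abstract does not specify which sound features are extracted; as a hypothetical sketch, the supervising robot could track the rotor's fundamental frequency as the dominant low-frequency spectral peak. The sample rate, frequency cap, and signal model below are all assumptions.

```python
import numpy as np

FS = 8000  # assumed sample rate (Hz) of the supervising robot's microphone

def rotor_fundamental(signal):
    """Estimate the dominant rotor frequency from the helicopter's intrinsic
    sound: the peak of the magnitude spectrum below an assumed 500 Hz cap."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    band = freqs < 500.0                  # rotors spin well below 500 Hz
    return freqs[band][np.argmax(spectrum[band])]

# Synthetic rotor noise: a 110 Hz fundamental, one harmonic, broadband noise.
rng = np.random.default_rng(0)
t = np.arange(FS) / FS
sig = (np.sin(2 * np.pi * 110 * t)
       + 0.5 * np.sin(2 * np.pi * 220 * t)
       + 0.2 * rng.standard_normal(FS))
print(rotor_fundamental(sig))
```

Changes in this fundamental (and in the harmonic pattern above it) are the kind of state feature that could be relayed back to the helicopter over the uplink.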