
    CABE : a cloud-based acoustic beamforming emulator for FPGA-based sound source localization

    Microphone arrays are gaining in popularity thanks to the availability of low-cost microphones. Applications including sonar, binaural hearing aid devices, acoustic indoor localization and speech recognition have been proposed by several research groups and companies. Most available implementations assume that the microphones offer an ideal response in a given frequency range. Several toolboxes and software packages can compute the theoretical response of a microphone array under a given beamforming algorithm, but no tool could be found that supports the design of a microphone array while taking non-ideal microphone characteristics into account. Moreover, to our knowledge, generating packages that facilitate implementation on Field Programmable Gate Arrays (FPGAs) has not been carried out yet. Visualizing the responses in 2D and 3D also poses an engineering challenge. To alleviate these shortcomings, a scalable Cloud-based Acoustic Beamforming Emulator (CABE) is proposed. The non-ideal characteristics of microphones are considered during the computations, and results are validated against acoustic data captured from real microphones. The emulator can also generate hardware description language packages containing delay tables that facilitate the implementation of Delay-and-Sum beamformers in embedded hardware. Truncation error analysis can be carried out for fixed-point signal processing, and the effect of disabling a given group of microphones within the array can be calculated. Results and packages can be visualized with a dedicated client application. Users can create and configure several parameters of an emulation, including sound source placement, the shape of the microphone array, and the required signal processing flow. Depending on the user configuration, the client application can generate 2D and 3D graphs of the beamforming results, waterfall diagrams, and performance metrics. The emulations are also validated with captured data from existing microphone arrays.
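
    As an illustration of the delay tables mentioned above, the following is a minimal sketch of how per-microphone sample delays for a Delay-and-Sum beamformer could be precomputed. It assumes a far-field plane-wave model; the array geometry, sampling rate, and steering grid are illustrative and do not reflect CABE's actual interface or configuration format. Note that quantizing the delays to integer samples is exactly where the fixed-point truncation error analyzed by the emulator enters.

        import numpy as np

        # Illustrative parameters (not CABE's actual configuration format).
        C = 343.0     # speed of sound in m/s
        FS = 48_000   # sampling rate in Hz

        def delay_table(mic_xy, azimuths_deg):
            """Per-microphone sample delays for far-field plane waves.

            mic_xy: (M, 2) microphone coordinates in metres.
            azimuths_deg: steering directions in degrees.
            Returns an (A, M) table of integer sample delays, the kind of
            lookup table a Delay-and-Sum FPGA implementation would store.
            """
            az = np.deg2rad(np.asarray(azimuths_deg))
            # Unit vectors pointing toward each steering direction.
            dirs = np.stack([np.cos(az), np.sin(az)], axis=1)   # (A, 2)
            # Project each mic position onto each direction -> path difference.
            tau = (dirs @ mic_xy.T) / C                          # (A, M) seconds
            tau -= tau.min(axis=1, keepdims=True)                # non-negative delays
            # Quantization to integer samples is the truncation error source.
            return np.round(tau * FS).astype(int)

        # Example: 8-microphone uniform circular array, 5-degree steering grid.
        angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
        mics = 0.05 * np.stack([np.cos(angles), np.sin(angles)], axis=1)
        table = delay_table(mics, np.arange(0, 360, 5))
        print(table.shape)  # (72, 8)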

    Localization and Rendering of Sound Sources in Acoustic Fields

    This doctoral thesis deals with sound source localization and acoustic zooming. The primary goal is to design an acoustic zooming system that can zoom in on the sound of one speaker among multiple speakers, even when they speak simultaneously. The system is compatible with surround sound techniques. The main contributions of the thesis are as follows: 1. Design of a method for estimating multiple sound directions. 2. Design of a method for acoustic zooming using DirAC. 3. Design of a combined system based on the previous two steps, which can be used in teleconferencing.
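
    For context, the direction analysis underlying DirAC derives a direction of arrival from the short-time active intensity vector of a B-format signal. The following is a single-band, time-domain sketch of that idea only; actual DirAC operates per frequency band, and the thesis's multi-direction estimator is not reproduced here.

        import numpy as np

        def dirac_direction(W, X, Y, Z, frame=1024):
            """Per-frame direction of arrival from B-format audio via the
            active intensity vector, as in DirAC-style analysis.

            W, X, Y, Z: 1-D arrays, omni and figure-of-eight channels.
            Returns (azimuth, elevation) in radians for each frame.
            """
            n = len(W) // frame
            az, el = np.empty(n), np.empty(n)
            for i in range(n):
                s = slice(i * frame, (i + 1) * frame)
                # Short-time active intensity; sound arrives from -I.
                Ix, Iy, Iz = (np.mean(W[s] * c[s]) for c in (X, Y, Z))
                az[i] = np.arctan2(-Iy, -Ix)
                el[i] = np.arctan2(-Iz, np.hypot(Ix, Iy))
            return az, el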

    Reflection-Aware Sound Source Localization

    We present a novel, reflection-aware method for 3D sound localization in indoor environments. Unlike prior approaches, which are mainly based on continuous sound signals from a stationary source, our formulation is designed to localize the position instantaneously from the signals within a single frame. We consider both direct sound and indirect sound that reaches the microphones after reflecting off surfaces such as ceilings or walls. We generate and trace direct and reflected acoustic paths using inverse acoustic ray tracing and feed these paths to Monte Carlo localization to estimate a 3D sound source position. We have implemented our method on a robot with a cube-shaped microphone array and tested it in different settings with continuous and intermittent sound signals from a stationary or mobile source. Across these settings, our approach localizes the sound with an average distance error of 0.8 m in a 7 m by 7 m room with a 3 m ceiling, including mobile and non-line-of-sight sound sources. We also show that modeling indirect rays increases localization accuracy by 40% compared to using direct acoustic rays alone. Comment: Submitted to ICRA 2018. A demonstration video is available at https://youtu.be/TkQ36lMEC-M
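
    To make the Monte Carlo localization step concrete, here is a minimal sketch that weights randomly drawn candidate positions by their distance to a set of back-traced acoustic rays and iteratively resamples. The inverse ray tracing itself (including unfolding reflections off walls and ceilings) is not reproduced; the rays are assumed to be given, and all parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)

        def point_to_ray_distance(p, origin, direction):
            """Distance from point p to the ray origin + t*direction, t >= 0."""
            d = direction / np.linalg.norm(direction)
            t = max(np.dot(p - origin, d), 0.0)
            return np.linalg.norm(p - (origin + t * d))

        def mc_localize(rays, room_size, n_particles=2000, sigma=0.3, iters=10):
            """Monte Carlo localization: weight particles by how close they lie
            to the back-traced acoustic rays, then resample around the winners.

            rays: list of (origin, direction) pairs, both 3-vectors.
            """
            parts = rng.uniform(0, 1, (n_particles, 3)) * np.asarray(room_size)
            for _ in range(iters):
                # Likelihood: Gaussian in particle-to-ray distance, summed over rays.
                w = np.zeros(n_particles)
                for o, d in rays:
                    dist = np.array([point_to_ray_distance(p, o, d) for p in parts])
                    w += np.exp(-0.5 * (dist / sigma) ** 2)
                w /= w.sum()
                # Resample proportionally to weight, jitter to keep diversity.
                idx = rng.choice(n_particles, n_particles, p=w)
                parts = parts[idx] + rng.normal(0, sigma / 4, parts.shape)
            return parts.mean(axis=0)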

    Pyroomacoustics: A Python package for audio room simulations and array processing algorithms

    We present pyroomacoustics, a software package aimed at the rapid development and testing of audio array processing algorithms. The content of the package can be divided into three main components: an intuitive Python object-oriented interface for quickly constructing different simulation scenarios involving multiple sound sources and microphones in 2D and 3D rooms; a fast C implementation of the image source model for general polyhedral rooms to efficiently generate room impulse responses and simulate the propagation between sources and receivers; and, finally, reference implementations of popular algorithms for beamforming, direction finding, and adaptive filtering. Together, they form a package with the potential to speed up the time to market of new algorithms by significantly reducing the implementation overhead of the performance evaluation step. Comment: 5 pages, 5 figures, describes a software package.
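
    A minimal usage sketch, consistent with the package's documented interface, that simulates a shoebox room and records one source at a small linear array (room dimensions, positions, and the signal are illustrative):

        import numpy as np
        import pyroomacoustics as pra

        fs = 16000
        # Shoebox room of 7 m x 7 m x 3 m, image-source model up to order 10.
        room = pra.ShoeBox([7, 7, 3], fs=fs, max_order=10)

        # One second of white noise as the source signal at (2, 3.5, 1.5).
        signal = np.random.randn(fs)
        room.add_source([2, 3.5, 1.5], signal=signal)

        # Three-microphone linear array; rows are x, y, z coordinates.
        mic_locs = np.c_[[5.0, 3.5, 1.2], [5.1, 3.5, 1.2], [5.2, 3.5, 1.2]]
        room.add_microphone_array(pra.MicrophoneArray(mic_locs, fs))

        room.simulate()  # computes the RIRs and convolves them with the source
        print(room.mic_array.signals.shape)  # (3, n_samples)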

    Supervised Control of a Flying Performing Robot using its Intrinsic Sound

    We present the current results of our ongoing research into efficient control of a flying robot for a wide variety of possible applications. A lightweight, small indoor helicopter has been equipped with an embedded system and relatively simple sensors to achieve autonomous stable flight, and its controllers have been tuned using genetic algorithms to further enhance flight stability. Additional sensors would normally be needed for the helicopter to sense more of its environment, such as its current location or the location of obstacles like the walls of the room it is flying in, but the lightweight nature of the helicopter severely restricts the number of sensors that can be attached to it. We propose utilising the helicopter's intrinsic sound signature, captured by a supervising robot, to locate it and to extract features about its current state. The result of this analysis is sent back to the helicopter over an uplink, enabling it to further stabilise its flight and correct its position and flight path without the need for additional sensors.
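
    As an illustration of what extracting features from the helicopter's intrinsic sound might look like, the sketch below picks the dominant spectral line in a plausible rotor band from a microphone frame. This is a hypothetical feature extractor, not the authors' actual pipeline; the band limits and windowing are assumptions.

        import numpy as np

        def rotor_frequency(x, fs, fmin=50.0, fmax=400.0):
            """Estimate the dominant rotor/blade-pass frequency in a sound frame.

            x: mono audio frame from the supervising robot's microphone.
            fs: sampling rate in Hz.
            Searches the magnitude spectrum in an assumed rotor band only.
            """
            spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
            freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
            band = (freqs >= fmin) & (freqs <= fmax)
            return freqs[band][np.argmax(spec[band])]

        # Hypothetical use: track this rotor line over successive frames and
        # treat its drift as a feature of the helicopter's current state.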