    Reflection-Aware Sound Source Localization

    We present a novel, reflection-aware method for 3D sound localization in indoor environments. Unlike prior approaches, which are mainly based on continuous sound signals from a stationary source, our formulation is designed to localize the position instantaneously from the signals within a single frame. We consider direct sound and indirect sound signals that reach the microphones after reflecting off surfaces such as ceilings or walls. We then generate and trace direct and reflected acoustic paths using inverse acoustic ray tracing and combine these paths with Monte Carlo localization to estimate a 3D sound source position. We have implemented our method on a robot with a cube-shaped microphone array and tested it in different settings with continuous and intermittent sound signals from a stationary or a mobile source. Across these settings, our approach localizes the sound with an average distance error of 0.8 m in a 7 m by 7 m room with a 3 m ceiling, including cases with a mobile and a non-line-of-sight sound source. We also show that modeling indirect rays increases the localization accuracy by 40% compared to using only direct acoustic rays.
    Comment: Submitted to ICRA 2018. The working video is available at https://youtu.be/TkQ36lMEC-M
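
    A minimal sketch of the Monte Carlo localization step described above, assuming the back-traced direct and reflected rays are already available as origins and unit directions (all names and parameters are illustrative, not the authors' implementation): particles over candidate 3-D positions are weighted by how well each candidate aligns with the traced rays, resampled, and averaged.

```python
# Illustrative Monte Carlo localization over candidate 3-D source positions.
# ray_origins: (R, 3) positions each back-traced ray starts from
# ray_dirs:    (R, 3) unit direction of each (direct or reflected) ray
import numpy as np

def mc_localize(ray_origins, ray_dirs, room_min, room_max,
                n_particles=2000, n_iters=20, sigma_deg=10.0):
    """Estimate a source position consistent with the traced acoustic rays."""
    rng = np.random.default_rng(0)
    particles = rng.uniform(room_min, room_max, size=(n_particles, 3))
    sigma = np.deg2rad(sigma_deg)
    for _ in range(n_iters):
        # Angle between each ray and the direction from its origin to each particle.
        to_p = particles[:, None, :] - ray_origins[None, :, :]          # (P, R, 3)
        to_p /= np.linalg.norm(to_p, axis=-1, keepdims=True) + 1e-9
        cos_ang = np.clip((to_p * ray_dirs[None, :, :]).sum(-1), -1.0, 1.0)
        ang = np.arccos(cos_ang)                                         # (P, R)
        # Gaussian angular likelihood per ray, multiplied across rays.
        w = np.exp(-0.5 * (ang / sigma) ** 2).prod(axis=1) + 1e-12
        w /= w.sum()
        # Resample in proportion to weight, then jitter to keep diversity.
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx] + rng.normal(0.0, 0.1, size=(n_particles, 3))
        particles = np.clip(particles, room_min, room_max)
    return particles.mean(axis=0)
```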

    Perceptually Driven Interactive Sound Propagation for Virtual Environments

    Sound simulation and rendering can significantly augment a user's sense of presence in virtual environments. Many techniques for sound propagation have been proposed that predict the behavior of sound as it interacts with the environment and is received by the user. At a broad level, propagation algorithms can be classified into reverberation filters, geometric methods, and wave-based methods. In practice, heuristic methods based on reverberation filters are simple to implement and have a low computational overhead, while wave-based algorithms are limited to static scenes and involve extensive precomputation. However, relatively little work has been done on the psychoacoustic characterization of different propagation algorithms or on evaluating the relationship between scientific accuracy and perceptual benefit. In this dissertation, we present perceptual evaluations of sound propagation methods and their ability to model complex acoustic effects for virtual environments. Our results indicate that scientifically accurate methods for reverberation and diffraction do result in increased perceptual differentiation. Based on these evaluations, we present two novel hybrid sound propagation methods that combine the accuracy of wave-based methods with the speed of geometric methods for interactive sound propagation in dynamic scenes. Our first algorithm couples modal sound synthesis with geometric sound propagation using wave-based sound radiation to perform mode-aware sound propagation. We introduce diffraction kernels of rigid objects, which encapsulate the sound diffraction behavior of individual objects in free space and are then used to simulate plausible diffraction effects with an interactive path tracing algorithm. Finally, we present a novel perceptually driven metric that can be used to accelerate the computation of late reverberation, enabling plausible simulation of reverberation with low runtime overhead. We highlight the benefits of our novel propagation algorithms in different scenarios.
    Doctor of Philosophy
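
    The dissertation's perceptually driven metric is not reproduced here; as a point of reference, the sketch below only computes the classic Sabine estimate of reverberation time (RT60), the kind of quantity simple reverberation filters are often parameterized by. The surface areas and absorption coefficients are illustrative assumptions.

```python
# Classic Sabine estimate, not the dissertation's metric: RT60 = 0.161 * V / A,
# where V is the room volume (m^3) and A the total absorption in sabins (m^2).
def sabine_rt60(volume_m3, surfaces):
    """surfaces: iterable of (area_m2, absorption_coefficient) pairs."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

# Illustrative 6 m x 5 m x 3 m room with assumed absorption coefficients.
rt60 = sabine_rt60(
    6 * 5 * 3,
    [(6 * 5, 0.30),             # carpeted floor
     (6 * 5, 0.10),             # ceiling
     (2 * (6 + 5) * 3, 0.05)],  # painted walls
)
print(f"Estimated RT60: {rt60:.2f} s")
```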

    High precision hybrid RF and ultrasonic chirp-based ranging for low-power IoT nodes

    Hybrid acoustic-RF systems offer excellent ranging accuracy, yet their power consumption is typically too high to meet the energy constraints of mobile IoT nodes. We combine pulse compression and synchronized wake-ups to achieve a ranging solution that limits the active time of the nodes to 1 ms. Hence, an ultra-low power consumption of 9.015 µW for a single measurement is achieved. The operation time on a CR2032 coin cell battery at a 1 Hz update rate is estimated at 8.5 years, which is over 250 times longer than that of state-of-the-art RF-based positioning systems. Measurements with a proof-of-concept hardware platform show median distance errors below 10 cm. Both simulations and measurements demonstrate that the accuracy is reduced at low signal-to-noise ratios and when reflections occur. We introduce three methods that enhance the distance measurements at a low extra processing cost. Hence, we validate in realistic environments that centimeter accuracy can be obtained within the energy budget of mobile devices and IoT nodes. The proposed hybrid-signal ranging system can be extended to perform accurate, low-power indoor positioning.
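
    A minimal sketch of the pulse-compression step, under assumed parameters (sample rate, chirp band, target range) rather than the paper's actual design: the received ultrasonic chirp is matched-filtered against the transmitted template, and the correlation peak gives a one-way time of flight, which RF synchronization makes directly convertible to distance.

```python
# Illustrative pulse compression of an ultrasonic chirp (assumed parameters).
import numpy as np
from scipy.signal import chirp, correlate

FS = 200_000          # sample rate in Hz (assumed)
C = 343.0             # speed of sound in m/s

t = np.arange(0, 1e-3, 1 / FS)                     # 1 ms transmit window
tx = chirp(t, f0=20_000, t1=t[-1], f1=60_000)      # linear ultrasonic chirp

# Simulated received signal: attenuated, delayed copy of the chirp in noise.
true_range_m = 2.0
rx = np.zeros(4096)
start = int(true_range_m / C * FS)
rx[start:start + tx.size] += 0.2 * tx
rx += 0.05 * np.random.default_rng(1).standard_normal(rx.size)

# Matched filter: cross-correlate the received signal with the known chirp.
corr = correlate(rx, tx, mode="full", method="fft")
lag = np.argmax(np.abs(corr)) - (tx.size - 1)      # delay in samples
# With RF-synchronized wake-ups the acoustic delay is a one-way time of flight.
print(f"Estimated range: {lag / FS * C:.3f} m")
```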

    Non-Line-of-Sight Passive Acoustic Localization Around Corners

    Non-line-of-sight (NLoS) imaging is an important challenge in many fields, ranging from autonomous vehicles and smart cities to defense applications. Several recent works in optics and acoustics tackle the challenge of imaging targets hidden from view (e.g., placed around a corner) by measuring time-of-flight (ToF) information with active SONAR/LiDAR techniques, effectively mapping the Green functions (impulse responses) from several sources to an array of detectors. Here, leveraging passive correlation-based imaging techniques, we study the possibility of acoustic NLoS target localization around a corner without the use of controlled active sources. We demonstrate localization and tracking of a human subject hidden around a corner in a reverberating room, using Green functions retrieved from correlations of broadband noise at multiple detectors. Our results demonstrate that controlled active sources can be replaced by passive detectors as long as sufficiently broadband noise is present in the scene.
    Comment: 6 pages, 3 figures
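
    A minimal sketch of the passive correlation idea, with assumed sample rate and delays: cross-correlating broadband noise recorded at two detectors approximates the Green function between them, and the correlation peak yields a time difference of arrival that constrains the hidden target's position.

```python
# Illustrative passive cross-correlation of broadband noise at two detectors.
import numpy as np
from scipy.signal import correlate

FS = 48_000           # sample rate in Hz (assumed)
C = 343.0             # speed of sound in m/s
rng = np.random.default_rng(2)

# Broadband noise from the hidden scene, arriving with different delays.
noise = rng.standard_normal(FS)                    # 1 s of source noise
d1, d2 = 40, 75                                    # propagation delays in samples (assumed)
mic1 = np.roll(noise, d1) + 0.1 * rng.standard_normal(FS)
mic2 = np.roll(noise, d2) + 0.1 * rng.standard_normal(FS)

# Long-time cross-correlation approximates the inter-detector Green function.
corr = correlate(mic2, mic1, mode="full", method="fft")
tdoa = (np.argmax(corr) - (mic1.size - 1)) / FS    # seconds; expect (d2 - d1) / FS
print(f"TDOA: {tdoa * 1e3:.2f} ms, path difference: {tdoa * C:.2f} m")
```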

    Real-time 3-D Mapping with Estimating Acoustic Materials

    This paper proposes a real-time system that integrates acoustic material estimation from visual appearance with on-the-fly 3-D mapping. The proposed method estimates the acoustic materials of the surroundings in indoor scenes and incorporates them into a 3-D occupancy map as a robot moves around the environment. To estimate the acoustic material from the visual cue, we apply a state-of-the-art semantic segmentation CNN, based on the assumption that visual appearance and acoustic materials are strongly associated. Furthermore, we introduce an update policy to handle the material estimates during the online mapping process. As a result, our environment map with acoustic materials can be used for sound-related robotics applications, such as sound source localization that accounts for various acoustic propagation effects (e.g., reflection).
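
    A minimal sketch of one possible update policy, assuming a hypothetical label-to-absorption table (not the paper's values): each surface voxel accumulates confidence-weighted votes for the semantic material labels observed over time, and the most-voted label determines the acoustic absorption stored in the map.

```python
# Illustrative material map update policy (label set and coefficients assumed).
from collections import defaultdict, Counter

# Hypothetical mapping from semantic class to an absorption coefficient.
ABSORPTION = {"concrete": 0.02, "carpet": 0.30, "glass": 0.04, "curtain": 0.50}

class AcousticVoxelMap:
    def __init__(self):
        self.votes = defaultdict(Counter)   # voxel index -> label vote weights

    def update(self, voxel, label, confidence=1.0):
        """Accumulate a confidence-weighted vote for a material label."""
        self.votes[voxel][label] += confidence

    def material(self, voxel):
        """Return the currently winning label and its absorption coefficient."""
        if voxel not in self.votes:
            return None, None
        label, _ = self.votes[voxel].most_common(1)[0]
        return label, ABSORPTION.get(label)

# Two conflicting observations of the same voxel, resolved by the vote policy.
vmap = AcousticVoxelMap()
vmap.update((4, 2, 1), "carpet", confidence=0.9)
vmap.update((4, 2, 1), "concrete", confidence=0.4)
print(vmap.material((4, 2, 1)))   # ('carpet', 0.3)
```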