
    Towards Informative Path Planning for Acoustic SLAM

    Acoustic scene mapping is a challenging task, as microphone arrays can often localize sound sources only in terms of their directions. Spatial diversity can be exploited constructively to infer source-sensor range when using microphone arrays installed on moving platforms, such as robots. As the absolute location of a moving robot is often unknown in practice, Acoustic Simultaneous Localization and Mapping (a-SLAM) is required in order to localize the moving robot's position and jointly map the sound sources. Using a novel a-SLAM approach, this paper investigates the impact of the choice of robot paths on source mapping accuracy. Simulation results demonstrate that a-SLAM performance can be improved by informatively planning robot paths.

    Microphone array signal processing for robot audition

    Robot audition for humanoid robots interacting naturally with humans in an unconstrained real-world environment is a hitherto unsolved challenge. The recorded microphone signals are usually distorted by background noise and interfering sound sources (speakers), as well as room reverberation. In addition, the movements of a robot and its actuators cause ego-noise, which degrades the recorded signals significantly. The movement of the robot body and its head also complicates the detection and tracking of the desired, possibly moving, sound sources of interest. This paper presents an overview of the concepts in microphone array processing for robot audition and some recent achievements.

    Audio Localization for Robots Using Parallel Cerebellar Models

    A robot audio localization system is presented that combines the outputs of multiple adaptive filter models of the cerebellum to calibrate a robot's audio map for various acoustic environments. The system is inspired by the MOdular Selection for Identification and Control (MOSAIC) framework. This study extends our previous work that used multiple cerebellar models to determine the acoustic environment in which a robot is operating. Here, the system selects a set of models and combines their outputs in proportion to the likelihood that each is responsible for calibrating the audio map as a robot moves between different acoustic environments or contexts. The system was able to select an appropriate set of models, achieving a performance better than that of a single model trained in all contexts, including novel contexts, and better than a baseline generalized cross-correlation with phase transform (GCC-PHAT) sound source localization algorithm. The main contribution of this letter is the combination of multiple calibrators to allow a robot operating in the field to adapt to a range of different acoustic environments. The best performances were observed where the presence of a Responsibility Predictor was simulated.
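
    The GCC-PHAT baseline mentioned above is a standard time-difference-of-arrival (TDOA) estimator. The following is a minimal sketch, assuming a two-microphone array and NumPy; the function name and the small regularization constant are illustrative, not taken from the paper:

        import numpy as np

        def gcc_phat_tdoa(x, y, fs, max_tau=None):
            # Estimate the TDOA between signals x and y via the generalized
            # cross-correlation with phase transform (GCC-PHAT).
            n = len(x) + len(y)                    # zero-pad to avoid circular wrap-around
            X = np.fft.rfft(x, n=n)
            Y = np.fft.rfft(y, n=n)
            R = X * np.conj(Y)                     # cross-power spectrum
            R /= np.abs(R) + 1e-12                 # PHAT weighting: keep phase, drop magnitude
            cc = np.fft.irfft(R, n=n)
            max_shift = n // 2
            if max_tau is not None:                # optionally restrict to physical delays
                max_shift = min(int(fs * max_tau), max_shift)
            cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
            shift = np.argmax(np.abs(cc)) - max_shift
            return shift / fs                      # TDOA in seconds

    For a two-microphone array with spacing d, a TDOA tau maps to a direction of arrival via arcsin(tau * c / d), where c is the speed of sound.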

    Acoustic simultaneous localization and mapping (A-SLAM) of a moving microphone array and its surrounding speakers

    Acoustic scene mapping creates a representation of the positions of audio sources, such as talkers, within the surrounding environment of a microphone array. By allowing the array to move, the acoustic scene can be explored in order to improve the map. Furthermore, the spatial diversity of the moving array allows for estimation of the source-sensor distance in scenarios where source directions of arrival are measured. As sound source localization is performed relative to the array position, mapping of acoustic sources requires knowledge of the absolute position of the microphone array in the room. If the array is moving, its absolute position is unknown in practice. Hence, Simultaneous Localization and Mapping (SLAM) is required in order to localize the microphone array and map the surrounding sound sources. In realistic environments, microphone arrays receive a convolutive mixture of direct-path speech signals, noise, and reflections due to reverberation. A key challenge of Acoustic SLAM (a-SLAM) is robustness against reverberant clutter measurements and missing source detections. This paper proposes a novel bearing-only a-SLAM approach using a Single-Cluster Probability Hypothesis Density filter. Results demonstrate convergence to accurate estimates of the array trajectory and source positions.
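
    The range recovery from bearings that motivates this line of work can be illustrated by least-squares triangulation: rays from several array poses toward the source are intersected in closed form. This is a minimal sketch assuming known, noise-free 2D array positions (which a-SLAM itself must estimate); all names are illustrative:

        import numpy as np

        def triangulate_source(positions, bearings):
            # Least-squares intersection of bearing rays from several array poses.
            # positions: (N, 2) array positions; bearings: (N,) DoA angles in radians.
            A = np.zeros((2, 2))
            b = np.zeros(2)
            for p, theta in zip(positions, bearings):
                u = np.array([np.cos(theta), np.sin(theta)])
                P = np.eye(2) - np.outer(u, u)     # projector onto the ray's normal space
                A += P
                b += P @ p
            return np.linalg.solve(A, b)           # point closest to all rays

        # Two noise-free bearings from distinct poses already pin down the range:
        pos = np.array([[0.0, 0.0], [1.0, 0.0]])
        src = np.array([2.0, 1.0])
        brg = [np.arctan2(s[1], s[0]) for s in (src - pos)]
        print(triangulate_source(pos, brg))        # ~ [2.0, 1.0]

    A filter such as the Single-Cluster PHD filter used in the paper additionally handles clutter and missed detections, which this sketch ignores.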

    Acoustic Echo Estimation using the model-based approach with Application to Spatial Map Construction in Robotics


    Sound Localization for Robot Navigation


    Acoustic SLAM

    An algorithm is presented that enables devices equipped with microphones, such as robots, to move within their environment in order to explore, adapt to, and interact with sound sources of interest. Acoustic scene mapping creates a 3D representation of the positional information of sound sources across time and space. In practice, positional source information is only provided by Direction-of-Arrival (DoA) estimates of the source directions; the source-sensor range is typically difficult to obtain. DoA estimates are also adversely affected by reverberation, noise, and interference, leading to errors in source location estimation and to false DoA estimates. Moreover, many acoustic sources, such as human talkers, are not continuously active, so that periods of inactivity lead to missing DoA estimates. Furthermore, the DoA estimates are specified relative to the observer's sensor location and orientation, so accurate positional information about the observer is crucial. This paper proposes Acoustic Simultaneous Localization and Mapping (aSLAM), which uses acoustic signals to simultaneously map the 3D positions of multiple sound sources whilst passively localizing the observer within the scene map. The performance of aSLAM is analyzed and evaluated using a series of realistic simulations. Results are presented to show the impact of observer motion and sound source localization accuracy.
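
    The dependence on the observer's pose noted above can be made concrete: a DoA is only a ray in the observer's frame, and placing it in the map requires the observer's position and orientation. A minimal 2D sketch, assuming a planar scene and a yaw-only orientation (names are illustrative):

        import numpy as np

        def doa_to_world_ray(observer_pos, observer_yaw, doa_local):
            # Convert a DoA measured in the observer's frame into a world-frame ray.
            # Any error in observer_yaw shifts every mapped source bearing by the
            # same amount, which is why observer localization is crucial.
            theta = observer_yaw + doa_local
            direction = np.array([np.cos(theta), np.sin(theta)])
            return np.asarray(observer_pos, dtype=float), direction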

    Speaker Localization with Moving Microphone Arrays

    Speaker localization algorithms often assume static locations for all sensors. This assumption simplifies the models used, since all acoustic transfer functions are then linear and time-invariant. In many applications this assumption is not valid. In this paper we address the localization challenge with moving microphone arrays. We propose two algorithms to find the speaker position. The first approach is a batch algorithm based on the maximum likelihood criterion, optimized via expectation-maximization (EM) iterations. The second approach is a particle filter for sequential Bayesian estimation. The performance of both approaches is evaluated and compared on simulated reverberant audio data from a microphone array with two sensors.
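
    The second approach follows the standard bootstrap particle filter pattern. Below is a minimal sketch with a random-walk motion model and a placeholder likelihood callback standing in for the paper's acoustic measurement model; the names, the motion noise, and the resampling threshold are all assumptions, not the authors' choices:

        import numpy as np

        rng = np.random.default_rng(0)

        def particle_filter_step(particles, weights, measurement, likelihood, motion_std=0.05):
            # One bootstrap particle-filter update over candidate speaker positions.
            # `likelihood(position, measurement)` is a placeholder for an acoustic
            # measurement model (e.g. a TDOA or DoA likelihood).
            particles = particles + rng.normal(0.0, motion_std, particles.shape)
            weights = weights * np.array([likelihood(p, measurement) for p in particles])
            weights /= weights.sum()
            # Resample when the effective sample size collapses.
            if 1.0 / np.sum(weights**2) < 0.5 * len(weights):
                idx = rng.choice(len(particles), size=len(particles), p=weights)
                particles = particles[idx]
                weights = np.full(len(weights), 1.0 / len(weights))
            return particles, weights

    At each step the position estimate is the weighted particle mean, np.average(particles, axis=0, weights=weights).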