
    Acoustic simultaneous localization and mapping (A-SLAM) of a moving microphone array and its surrounding speakers

    Acoustic scene mapping creates a representation of the positions of audio sources, such as talkers, within the surrounding environment of a microphone array. Allowing the array to move makes it possible to explore the acoustic scene and thereby improve the map. Furthermore, the spatial diversity of the moving array allows the source-sensor distance to be estimated in scenarios where only source directions of arrival are measured. As sound source localization is performed relative to the array position, mapping acoustic sources requires knowledge of the absolute position of the microphone array in the room. If the array is moving, its absolute position is unknown in practice. Hence, Simultaneous Localization and Mapping (SLAM) is required to localize the microphone array and map the surrounding sound sources. In realistic environments, microphone arrays receive a convolutive mixture of direct-path speech signals, noise, and reflections due to reverberation. A key challenge of Acoustic SLAM (A-SLAM) is robustness against reverberant clutter measurements and missing source detections. This paper proposes a novel bearing-only A-SLAM approach using a Single-Cluster Probability Hypothesis Density filter. Results demonstrate convergence to accurate estimates of the array trajectory and source positions.
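    As a geometric illustration of why array motion recovers source range from bearing-only measurements, the sketch below (a toy example, not the paper's filter; the function name and values are made up) triangulates a source position by intersecting DoA bearing lines observed from two poses of a moving array.

```python
# Minimal sketch: bearing-only triangulation showing how a moving array's
# spatial diversity recovers source range from DoA measurements alone.
import numpy as np

def triangulate(positions, bearings_rad):
    """Least-squares intersection of bearing lines from several array poses.

    positions    : (N, 2) array of known 2D array positions
    bearings_rad : (N,) DoA measurements as absolute angles in radians
    """
    # Each bearing defines a line n . x = n . p, with n normal to the bearing direction.
    n = np.stack([-np.sin(bearings_rad), np.cos(bearings_rad)], axis=1)  # (N, 2)
    b = np.sum(n * positions, axis=1)                                    # (N,)
    source, *_ = np.linalg.lstsq(n, b, rcond=None)
    return source

# Example: a source at (4, 3) observed from two poses of a moving array.
poses = np.array([[0.0, 0.0], [2.0, 0.0]])
true_source = np.array([4.0, 3.0])
doas = np.arctan2(true_source[1] - poses[:, 1], true_source[0] - poses[:, 0])
print(triangulate(poses, doas))  # ~ [4. 3.]
```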

    Multisensor Poisson Multi-Bernoulli Filter for Joint Target-Sensor State Tracking

    In a typical multitarget tracking (MTT) scenario, the sensor state is either assumed known, or tracking is performed in the sensor's (relative) coordinate frame. This assumption does not hold when the sensor, e.g., an automotive radar, is mounted on a vehicle and the target state should be represented in a global (absolute) coordinate frame. The uncertain location of the vehicle on which the sensor is mounted must then be taken into account for MTT. In this paper, we present a multisensor, low-complexity Poisson multi-Bernoulli MTT filter that jointly tracks the uncertain vehicle state and the target states. Measurements collected by different sensors mounted on multiple vehicles with varying location uncertainty are incorporated sequentially as new sensor measurements arrive. In doing so, targets observed from a sensor mounted on a well-localized vehicle reduce the state uncertainty of other, poorly localized vehicles, provided that a common non-empty subset of targets is observed. A low-complexity filter is obtained by approximations of the joint sensor-feature state density that minimize the Kullback-Leibler divergence (KLD). Results from synthetic as well as experimental measurement data, collected in a vehicle driving scenario, demonstrate the performance benefits of joint vehicle-target state tracking.
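    To make the information flow concrete, the following toy example (a minimal linear-Gaussian sketch, not the paper's Poisson multi-Bernoulli filter; all values are illustrative) shows how a relative measurement of a well-localized target shrinks the uncertainty of a poorly localized vehicle in a joint state update.

```python
# Minimal sketch: a joint Kalman update on [vehicle, target] in 1D, showing how
# observing a well-known target through a relative measurement reduces the
# uncertainty of a poorly localized vehicle.
import numpy as np

x = np.array([0.0, 10.0])       # joint state: [vehicle_pos, target_pos]
P = np.diag([4.0, 0.01])        # vehicle poorly localized, target well localized

H = np.array([[-1.0, 1.0]])     # relative measurement z = target_pos - vehicle_pos + noise
R = np.array([[0.05]])
z = np.array([10.2])

S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x = x + (K @ (z - H @ x)).ravel()
P = (np.eye(2) - K @ H) @ P

print(np.sqrt(P[0, 0]))          # vehicle std drops well below the prior of 2.0
```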

    Technical Report: Cooperative Multi-Target Localization With Noisy Sensors

    This technical report is an extended version of the paper 'Cooperative Multi-Target Localization With Noisy Sensors', accepted to the 2013 IEEE International Conference on Robotics and Automation (ICRA). The paper addresses the task of searching for an unknown number of static targets within a known obstacle map using a team of mobile robots equipped with noisy, limited field-of-view sensors. Such sensors may fail to detect a subset of the visible targets or return false positive detections. These measurement sets are used to localize the targets using the Probability Hypothesis Density (PHD) filter. Robots communicate with each other on a local peer-to-peer basis and with a server or the cloud via access points, exchanging measurements and poses to update their belief about the targets and to plan future actions. The server provides a mechanism to collect and synthesize information from all robots and to share the global, albeit time-delayed, belief state with robots near access points. We design a decentralized control scheme that exploits this communication architecture and the PHD representation of the belief state. Specifically, robots move to maximize the mutual information between the target set and the measurements, both self-collected and those available from the server, balancing local exploration with sharing knowledge across the team. Furthermore, robots coordinate their actions with other robots exploring the same local region of the environment.
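    As background for how a PHD filter absorbs noisy measurement sets, here is a minimal sketch (an assumed 1D grid implementation for illustration only, not the authors' code) of the standard PHD measurement update, which accounts for both missed detections and clutter.

```python
# Minimal sketch: grid-based PHD intensity update with detection probability p_d
# and uniform clutter intensity. The integral of the intensity approximates the
# expected number of targets.
import numpy as np

def phd_update(grid, v, measurements, p_d=0.9, clutter=0.1, sigma=0.5):
    """Update a PHD intensity v defined on a 1D grid with a set of position measurements."""
    dx = grid[1] - grid[0]
    updated = (1.0 - p_d) * v                      # missed-detection term
    for z in measurements:
        g = np.exp(-0.5 * ((z - grid) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        denom = clutter + np.sum(p_d * g * v) * dx
        updated += p_d * g * v / denom             # detection term for each measurement
    return updated

grid = np.linspace(0.0, 20.0, 401)
v = np.full_like(grid, 2.0 / 20.0)                 # prior: ~2 targets, uniform over [0, 20]
v = phd_update(grid, v, measurements=[5.0, 12.0])
print(np.sum(v) * (grid[1] - grid[0]))             # posterior expected number of targets
```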

    Marker-Less Stage Drift Correction in Super-Resolution Microscopy Using the Single-Cluster PHD Filter

    Fluorescence microscopy is a technique that allows the imaging of cellular and intracellular dynamics by activating fluorescent molecules attached to the structures of interest. It is an important technique because, in contrast to methods such as electron microscopy, it can be used to analyze the behavior of intracellular processes in vivo. Extracting meaningful information from images acquired with optical microscopes is challenging because of the low contrast between objects and background, and because point-like objects appear as blurred spots due to the diffraction limit of the optical system. Moreover, studying intracellular dynamics requires tracking multiple particles at the same time, which is complicated by false positives and missed detections in the acquired data. Additionally, the objective of the microscope is not completely static with respect to the cover slip, due to mechanical vibrations or thermal expansion, which introduces bias into the measurements. In this paper, a Bayesian approach is used to simultaneously track the locations of objects with different motion behaviors and the stage drift, using image data obtained from fluorescence microscopy experiments. Detections are extracted from the acquired frames using image processing techniques, and these detections are then used to estimate the particle positions accurately while simultaneously correcting the drift introduced by the motion of the sample stage. A single-cluster Probability Hypothesis Density (PHD) filter with object classification is used to estimate the multi-target state under the different motion behaviors. The detection and tracking methods are tested and their performance is evaluated on both simulated and real data.
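    To illustrate the idea of a global drift shared by all detections, the sketch below uses a deliberately simple nearest-neighbour estimator rather than the paper's single-cluster PHD filter; the function name, gate threshold, and data are made up for illustration.

```python
# Minimal sketch: estimate the common stage drift between two frames of detections
# as the median displacement over nearest-neighbour matches.
import numpy as np

def estimate_drift(detections_prev, detections_curr, gate=2.0):
    """Median displacement between gated nearest-neighbour pairs, as a robust
    estimate of the drift shared by all detections."""
    shifts = []
    for p in detections_prev:
        d = np.linalg.norm(detections_curr - p, axis=1)
        j = np.argmin(d)
        if d[j] < gate:                       # ignore unmatched or false detections
            shifts.append(detections_curr[j] - p)
    return np.median(shifts, axis=0)

prev = np.array([[10.0, 10.0], [30.0, 12.0], [22.0, 40.0]])
curr = prev + np.array([0.4, -0.2]) + 0.05 * np.random.randn(*prev.shape)
print(estimate_drift(prev, curr))             # ~ [0.4, -0.2]
```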

    Acoustic SLAM

    An algorithm is presented that enables devices equipped with microphones, such as robots, to move within their environment in order to explore, adapt to, and interact with sound sources of interest. Acoustic scene mapping creates a 3D representation of the positional information of sound sources across time and space. In practice, positional source information is only provided by Direction-of-Arrival (DoA) estimates of the source directions; the source-sensor range is typically difficult to obtain. DoA estimates are also adversely affected by reverberation, noise, and interference, leading to errors in source location estimation and consequently to false DoA estimates. Moreover, many acoustic sources, such as human talkers, are not continuously active, so that periods of inactivity lead to missing DoA estimates. Furthermore, the DoA estimates are specified relative to the observer's sensor location and orientation; accurate positional information about the observer is therefore crucial. This paper proposes Acoustic Simultaneous Localization and Mapping (aSLAM), which uses acoustic signals to simultaneously map the 3D positions of multiple sound sources whilst passively localizing the observer within the scene map. The performance of aSLAM is analyzed and evaluated using a series of realistic simulations. Results are presented that show the impact of observer motion and of sound source localization accuracy.
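    Because DoA estimates are relative to the observer, mapping requires converting them into the world frame using the observer pose. The 2D sketch below (a geometric toy, not the aSLAM filter; names and values are illustrative) shows this conversion from a locally measured DoA to a global bearing ray.

```python
# Minimal sketch: map a DoA measured in the observer's local frame into a global
# bearing ray, given the observer position and heading. This is the geometric
# step that makes accurate observer localization essential for acoustic mapping.
import numpy as np

def local_doa_to_global_ray(observer_xy, observer_heading, doa_local):
    """Return (origin, unit direction) of the bearing ray in world coordinates."""
    theta = observer_heading + doa_local           # rotate into the world frame
    direction = np.array([np.cos(theta), np.sin(theta)])
    return np.asarray(observer_xy, dtype=float), direction

origin, direction = local_doa_to_global_ray([1.0, 2.0], np.pi / 2, np.pi / 4)
print(origin, direction)   # ray starts at the array and points at 135 degrees
```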

    TOA-based indoor localization and tracking with inaccurate floor plan map via MRMSC-PHD filter

    This paper proposes a novel indoor localization scheme that jointly tracks a mobile device (MD) and updates an inaccurate floor plan map using time-of-arrival measurements collected at multiple reference devices (RDs). By modeling the floor plan map as a collection of map features, the map and the MD position can be jointly estimated via a multi-RD single-cluster probability hypothesis density (MSC-PHD) filter. Conventional MSC-PHD filters assume that each map feature generates at most one measurement per RD. If single reflections of the detected signal are treated as measurements generated by map features, then higher-order reflections, which also carry information about the MD and the map features, must be treated as clutter. The proposed scheme incorporates multiple reflections by treating them as virtual single reflections from inaccurate map features and tracing them to the corresponding virtual RDs (VRDs); we refer to it as the multi-reflection-incorporating MSC-PHD (MRMSC-PHD) filter. The difficulty in using multiple reflection paths arises from the uncertainty in the VRD locations, which inherit the inaccuracy of the map features. Numerical results show that these multiple reflection paths can be modeled statistically with a Gaussian distribution. A computationally tractable implementation combining a new greedy partitioning scheme with a particle-Gaussian mixture filter is presented. A novel mapping error metric is then proposed to evaluate the accuracy of the estimated map for planar surfaces. Simulation and experimental results show that the proposed MRMSC-PHD filter outperforms existing MSC-PHD filters by up to 95% in terms of average localization accuracy and by up to 90% in terms of mapping accuracy.
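    The virtual-RD idea follows image-source geometry: a first-order reflection off a planar map feature has the same path length as the direct path from the RD's mirror image across that plane. The sketch below (a geometric illustration only, not the MRMSC-PHD filter; coordinates are made up) demonstrates this property.

```python
# Minimal sketch: the first-order reflection path RD -> wall -> MD equals the
# straight-line distance from the mirrored (virtual) RD to the MD.
import numpy as np

def mirror_across_plane(point, plane_point, plane_normal):
    """Reflect `point` across the plane defined by a point and a normal vector."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return point - 2.0 * np.dot(point - plane_point, n) * n

rd = np.array([1.0, 3.0])          # reference device
md = np.array([4.0, 1.0])          # mobile device
wall_pt, wall_n = np.array([0.0, 5.0]), np.array([0.0, 1.0])  # wall y = 5

vrd = mirror_across_plane(rd, wall_pt, wall_n)
print(vrd, np.linalg.norm(vrd - md))   # virtual RD and the reflection path length
```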