
    A Robust Zero-Calibration RF-based Localization System for Realistic Environments

    Due to the noisy indoor radio propagation channel, Radio Frequency (RF)-based location determination systems usually require a tedious calibration phase to construct an RF fingerprint of the area of interest. This fingerprint varies with the mobile device used, with changes in the transmit power of smart access points (APs), and with dynamic changes in the environment, requiring re-calibration of the area of interest and reducing the technology's ease of use. In this paper, we present IncVoronoi: a novel system that provides zero-calibration, accurate RF-based indoor localization in realistic environments. The basic idea is that the relative relation between the received signal strengths from two APs at a certain location reflects the relative distances from this location to the respective APs. Building on this, IncVoronoi incrementally reduces the user's ambiguity region by refining the Voronoi tessellation of the area of interest. IncVoronoi also includes a number of modules that allow it to run efficiently in real time and to handle practical deployment issues, including the noisy wireless environment, obstacles in the environment, heterogeneous device hardware, and smart APs. We have deployed IncVoronoi on different Android phones using iBeacon technology on a university campus. Evaluation of IncVoronoi, with a side-by-side comparison against traditional fingerprinting techniques, shows that it achieves a consistent median accuracy of 2.8 m under different scenarios with a low beacon density of one beacon every 44 m². Compared to fingerprinting techniques, whose accuracy degrades by at least 156%, this accuracy comes with no training overhead and is robust to different user devices, different transmit powers, and temporal changes in the environment. This highlights the promise of IncVoronoi as a next-generation indoor localization system.
    Comment: 9 pages, 13 figures, published in SECON 201
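    A minimal sketch of the pairwise idea described above, assuming the more strongly received of two APs is the nearer one: each trusted RSS comparison confines the user to one side of the perpendicular bisector between the two APs, and intersecting these half-plane constraints over a discretized area shrinks the ambiguity region. The grid resolution, the dB margin, and all names below are illustrative assumptions, not the authors' implementation.

```python
"""Illustrative sketch of the relative-RSS / Voronoi idea: the stronger of two
APs is assumed to be the nearer one, which confines the user to one side of
the perpendicular bisector between the two APs.  Parameters are assumptions."""

import numpy as np

def shrink_ambiguity_region(aps, rss, area=(20.0, 20.0), cell=0.5, margin=3.0):
    """Return grid cells still consistent with all pairwise RSS comparisons.

    aps    : dict {ap_id: (x, y)} of access-point positions
    rss    : dict {ap_id: dBm}    of received signal strengths
    margin : dB difference required before a pair is trusted (noise guard)
    """
    xs = np.arange(0.0, area[0], cell)
    ys = np.arange(0.0, area[1], cell)
    gx, gy = np.meshgrid(xs, ys)
    feasible = np.ones_like(gx, dtype=bool)

    ids = list(aps)
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            a, b = ids[i], ids[j]
            if abs(rss[a] - rss[b]) < margin:      # too noisy to decide
                continue
            near, far = (a, b) if rss[a] > rss[b] else (b, a)
            d_near = np.hypot(gx - aps[near][0], gy - aps[near][1])
            d_far = np.hypot(gx - aps[far][0], gy - aps[far][1])
            feasible &= d_near <= d_far            # keep the nearer half-plane

    return gx[feasible], gy[feasible]              # remaining candidate cells
```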

    Acoustic Space Learning for Sound Source Separation and Localization on Binaural Manifolds

    In this paper we address the problems of modeling the acoustic space generated by a full-spectrum sound source and of using the learned model for the localization and separation of multiple sources that simultaneously emit sparse-spectrum sounds. We lay theoretical and methodological grounds in order to introduce the binaural manifold paradigm. We perform an in-depth study of the latent low-dimensional structure of the high-dimensional interaural spectral data, based on a corpus recorded with a human-like audiomotor robot head. A non-linear dimensionality reduction technique is used to show that these data lie on a two-dimensional (2D) smooth manifold parameterized by the motor states of the listener or, equivalently, the sound source directions. We propose a probabilistic piecewise affine mapping model (PPAM) specifically designed to deal with high-dimensional data exhibiting an intrinsic piecewise linear structure. We derive a closed-form expectation-maximization (EM) procedure for estimating the model parameters, followed by Bayes inversion for obtaining the full posterior density function of a sound source direction. We extend this solution to deal with missing data and redundancy in real-world spectrograms, and hence with 2D localization of natural sound sources such as speech. We further generalize the model to the challenging case of multiple sound sources and propose a variational EM framework. The associated algorithm, referred to as variational EM for source separation and localization (VESSL), yields a Bayesian estimation of the 2D locations and time-frequency masks of all the sources. Comparisons of the proposed approach with several existing methods reveal that the combination of acoustic-space learning with Bayesian inference enables our method to outperform state-of-the-art methods.
    Comment: 19 pages, 9 figures, 3 tables
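    To make the inversion step concrete, the sketch below shows the standard Bayes inversion of a mixture-of-affine Gaussian model of the kind the abstract describes: each component k maps a low-dimensional source direction x to a high-dimensional interaural spectral vector y via y = A_k x + b_k plus Gaussian noise, so the posterior over x given y is again a Gaussian mixture. Parameter names and shapes are assumptions; the paper's closed-form EM for learning these parameters is not reproduced here.

```python
"""Hedged sketch of Bayes inversion for a piecewise/mixture affine mapping:
component k models y = A_k x + b_k + noise with a Gaussian prior on x."""

import numpy as np

def ppam_posterior(y, pi, A, b, Sigma, c, Gamma):
    """Posterior p(x | y) as a Gaussian mixture (weights, means, covariances).

    y     : (D,)      observed interaural spectral vector
    pi    : (K,)      component priors
    A     : (K, D, L) affine matrices,  b: (K, D) offsets
    Sigma : (K, D, D) observation noise covariances
    c     : (K, L)    prior means of x, Gamma: (K, L, L) prior covariances
    """
    K = len(pi)
    log_w, means, covs = np.empty(K), [], []
    for k in range(K):
        # Marginal likelihood of y under component k (up to a shared constant)
        mu_y = A[k] @ c[k] + b[k]
        C_y = A[k] @ Gamma[k] @ A[k].T + Sigma[k]
        diff = y - mu_y
        log_w[k] = (np.log(pi[k])
                    - 0.5 * (np.linalg.slogdet(C_y)[1]
                             + diff @ np.linalg.solve(C_y, diff)))
        # Linear-Gaussian posterior of x given y under component k
        S = np.linalg.inv(np.linalg.inv(Gamma[k])
                          + A[k].T @ np.linalg.solve(Sigma[k], A[k]))
        m = S @ (np.linalg.solve(Gamma[k], c[k])
                 + A[k].T @ np.linalg.solve(Sigma[k], y - b[k]))
        means.append(m)
        covs.append(S)
    w = np.exp(log_w - log_w.max())
    return w / w.sum(), np.array(means), np.array(covs)
```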

    Conceptual spatial representations for indoor mobile robots

    We present an approach for creating conceptual representations of human-made indoor environments using mobile robots. The concepts refer to spatial and functional properties of typical indoor environments. Following findings in cognitive psychology, our model is composed of layers representing maps at different levels of abstraction. The complete system is integrated in a mobile robot endowed with laser and vision sensors for place and object recognition. The system also incorporates a linguistic framework that actively supports the map acquisition process and is used for situated dialogue. Finally, we discuss the capabilities of the integrated system.
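    One plausible, purely illustrative way to organise such a layered representation is sketched below: a metric map at the bottom, a topological layer of places on top of it, and a conceptual layer that attaches categories and observed objects to places. Class and field names are assumptions, not the authors' code.

```python
"""Illustrative layered-map sketch (metric -> topological -> conceptual);
all names and the object-to-function cues are assumptions."""

from dataclasses import dataclass, field

@dataclass
class Place:
    place_id: int
    pose: tuple                                    # (x, y, theta) in the metric layer
    category: str = "unknown"                      # e.g. "corridor" from place recognition
    objects: list = field(default_factory=list)    # object labels seen at this place

@dataclass
class ConceptualMap:
    occupancy_grid: object                         # metric layer from the laser sensor
    places: dict = field(default_factory=dict)     # topological layer: id -> Place
    edges: set = field(default_factory=set)        # connectivity between places

    def infer_room_function(self, place_id):
        """Conceptual layer: guess a room's function from objects observed there."""
        cues = {"couch": "living room", "oven": "kitchen", "desk": "office"}
        for obj in self.places[place_id].objects:
            if obj in cues:
                return cues[obj]
        return self.places[place_id].category
```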

    A Review of pedestrian indoor positioning systems for mass market applications

    In the last decade, interest in Indoor Location Based Services (ILBS) has increased, stimulating the development of Indoor Positioning Systems (IPS). In particular, ILBS call for positioning systems that can be applied anywhere in the world by millions of users; that is, there is a need to develop IPSs for mass-market applications. Such systems must provide accurate position estimates with minimum infrastructure cost and easy scalability to different environments. This survey overviews the current state of the art of IPSs and classifies them in terms of the infrastructure and methodology employed. Finally, each group is reviewed, analysing its advantages and disadvantages and its applicability to mass-market applications.

    Closed-loop Bayesian Semantic Data Fusion for Collaborative Human-Autonomy Target Search

    In search applications, autonomous unmanned vehicles must be able to efficiently reacquire and localize mobile targets that can remain out of view for long periods of time in large spaces. As such, all available information sources must be actively leveraged, including imprecise but readily available semantic observations provided by humans. To achieve this, this work develops and validates a novel collaborative human-machine sensing solution for dynamic target search. Our approach uses continuous partially observable Markov decision process (CPOMDP) planning to generate vehicle trajectories that optimally exploit imperfect detection data from onboard sensors, as well as semantic natural language observations that can be specifically requested from human sensors. The key innovation is a scalable hierarchical Gaussian mixture model formulation for efficiently solving CPOMDPs with semantic observations in continuous dynamic state spaces. The approach is demonstrated and validated with a real human-robot team engaged in dynamic indoor target search and capture scenarios on a custom testbed.
    Comment: Final version accepted and submitted to 2018 FUSION Conference (Cambridge, UK, July 2018)
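    The sketch below illustrates, in greatly simplified form, the kind of Bayesian semantic fusion the abstract describes: a Gaussian-mixture belief over the target's 2D position is updated with a human report such as "the target is near the kitchen", where the report is modelled by a softmax likelihood over named regions and each mixture component is reweighted by that likelihood evaluated at its mean. This is a crude approximation for illustration only, not the paper's hierarchical Gaussian mixture formulation; all region definitions and parameters are assumptions.

```python
"""Toy semantic-observation fusion into a Gaussian-mixture belief.
Regions, parameters, and the reweighting-at-the-mean shortcut are assumptions."""

import numpy as np

def softmax_region_likelihood(x, regions, reported, sharpness=2.0):
    """P(report | x): softmax over negative squared distances to region centers."""
    scores = np.array([-sharpness * np.sum((x - c) ** 2) for c in regions.values()])
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return probs[list(regions).index(reported)]

def fuse_semantic_observation(weights, means, regions, reported):
    """Reweight mixture components by the semantic likelihood at each mean."""
    lik = np.array([softmax_region_likelihood(m, regions, reported) for m in means])
    new_w = weights * lik
    return new_w / new_w.sum()

# Example: two position hypotheses; the human reports the target near the kitchen.
regions = {"kitchen": np.array([1.0, 4.0]), "hallway": np.array([6.0, 1.0])}
weights = np.array([0.5, 0.5])
means = [np.array([1.2, 3.5]), np.array([5.8, 1.4])]
print(fuse_semantic_observation(weights, means, regions, "kitchen"))
```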