
    RF Localization in Indoor Environment

    In this paper, an indoor localization system based on RF power measurements of the Received Signal Strength (RSS) in a WLAN environment is presented. Today, the most viable solution for localization is the RSS fingerprinting approach, in which different machine learning methods are used to establish a relationship between RSS values and location. The advantage of this WLAN-based approach is that it requires no new infrastructure (it reuses already and widely deployed equipment), and RSS measurement is part of the normal operating mode of wireless equipment. We derive the Cramer-Rao Lower Bound (CRLB) on localization accuracy for RSS measurements. Analysis of the bound gives insight into the localization performance and deployment issues of a localization system, which can help in designing an efficient one. To compare different machine learning approaches, we developed a localization system based on an artificial neural network, k-nearest neighbors, a probabilistic method based on the Gaussian kernel, and the histogram method. We tested the developed system in a real-world indoor WLAN environment, where realistic RSS measurements were collected. An experimental comparison of the results was carried out, and an average location estimation error of around 2 meters was obtained.
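
    As a minimal sketch of one of the approaches the abstract lists, the snippet below illustrates RSS fingerprinting with weighted k-nearest neighbors: a radio map of (RSS vector, position) pairs is queried in signal space and the position estimate is a distance-weighted average of the nearest calibration points. The access-point count, RSS values, and coordinates are invented for illustration, not taken from the paper.

```python
# Illustrative RSS-fingerprinting localization with weighted kNN.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Radio map: one row of RSS readings (dBm) per calibration point, for 4 APs (toy data).
rss_fingerprints = np.array([
    [-45.0, -60.0, -72.0, -80.0],
    [-50.0, -55.0, -70.0, -78.0],
    [-62.0, -48.0, -65.0, -74.0],
    [-70.0, -52.0, -58.0, -69.0],
])
positions = np.array([  # corresponding (x, y) coordinates in meters
    [1.0, 1.0],
    [1.0, 4.0],
    [4.0, 1.0],
    [4.0, 4.0],
])

# Weighted kNN in signal space: the estimate is the distance-weighted
# average of the k closest calibration positions.
knn = KNeighborsRegressor(n_neighbors=3, weights="distance")
knn.fit(rss_fingerprints, positions)

observed_rss = np.array([[-48.0, -57.0, -71.0, -79.0]])
print("Estimated position (m):", knn.predict(observed_rss)[0])
```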

    Multi-Object Classification and Unsupervised Scene Understanding Using Deep Learning Features and Latent Tree Probabilistic Models

    Deep learning has shown state-of-the-art classification performance on datasets such as ImageNet, which contain a single object in each image. However, multi-object classification is far more challenging. We present a unified framework which leverages the strengths of multiple machine learning methods, namely deep learning, probabilistic models, and kernel methods, to obtain state-of-the-art performance on Microsoft COCO, which consists of non-iconic images. We incorporate contextual information in natural images through a conditional latent tree probabilistic model (CLTM), where object co-occurrences are conditioned on the fc7 features extracted from a pre-trained ImageNet CNN. We learn the CLTM tree structure using conditional pairwise probabilities for object co-occurrences, estimated through kernel methods, and we learn its node and edge potentials by training a new 3-layer neural network, which takes fc7 features as input. Object classification is carried out via inference on the learnt conditional tree model, and we obtain significant gains in precision-recall and F-measures on MS-COCO, especially for difficult object categories. Moreover, the latent variables in the CLTM capture scene information: the images with top activations for a latent node share common themes, such as being a grassland or a food scene, and so on. In addition, we show that a simple k-means clustering of the inferred latent nodes alone significantly improves scene classification performance on the MIT-Indoor dataset, without the need for any retraining and without using scene labels during training. Thus, we present a unified framework for multi-object classification and unsupervised scene understanding.
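
    The sketch below illustrates only the tree-structure step in spirit: score every pair of object categories by a dependency measure and keep the maximum-weight spanning tree, as in Chow-Liu tree learning. The paper conditions the pairwise probabilities on fc7 features via kernel estimates; here, plain empirical mutual information on toy binary co-occurrence labels stands in for that, so this is an illustration of the idea rather than the CLTM itself.

```python
# Toy Chow-Liu-style tree over object categories from binary co-occurrences.
import numpy as np

def pairwise_mi(labels):
    """Empirical mutual information between binary label columns."""
    k = labels.shape[1]
    mi = np.zeros((k, k))
    for i in range(k):
        for j in range(i + 1, k):
            total = 0.0
            for a in (0, 1):
                for b in (0, 1):
                    p_ab = np.mean((labels[:, i] == a) & (labels[:, j] == b))
                    p_a, p_b = np.mean(labels[:, i] == a), np.mean(labels[:, j] == b)
                    if p_ab > 0:
                        total += p_ab * np.log(p_ab / (p_a * p_b))
            mi[i, j] = mi[j, i] = total
    return mi

def max_spanning_tree(weights):
    """Prim's algorithm for a maximum-weight spanning tree; returns edge list."""
    k = weights.shape[0]
    in_tree, edges = {0}, []
    while len(in_tree) < k:
        best = max(((i, j) for i in in_tree for j in range(k) if j not in in_tree),
                   key=lambda e: weights[e])
        edges.append(best)
        in_tree.add(best[1])
    return edges

# Toy binary co-occurrence matrix: rows = images, columns = object categories.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=(200, 5))
print("Tree edges (category index pairs):", max_spanning_tree(pairwise_mi(labels)))
```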

    Fireground location understanding by semantic linking of visual objects and building information models

    This paper presents an outline for improved localization and situational awareness in fire emergency situations based on semantic technology and computer vision techniques. The novelty of our methodology lies in the semantic linking of video object recognition results from visual and thermal cameras with Building Information Models (BIM). The current limitations and possibilities of certain building information streams in the context of fire safety and fire incident management are addressed in this paper. Furthermore, our data management tools match higher-level semantic metadata descriptors of BIM with deep-learning-based visual object recognition and classification networks. Based on these matches, estimates of camera, object, and event positions in the BIM model can be generated, transforming it from a static source of information into a rich, dynamic data provider. Previous work has already investigated the possibilities of linking BIM with low-cost point sensors for fireground understanding, but these approaches did not take into account the benefits of video analysis and recent developments in semantics and feature learning research. Finally, the strengths of the proposed approach compared to the state of the art are its (semi-)automatic workflow, its generic and modular setup, and its multi-modal strategy, which make it possible to automatically create situational awareness, improve localization, and facilitate overall fire understanding.
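
    A hypothetical, much-simplified sketch of the linking idea follows: detector class labels are mapped to BIM (IFC) entity types, and a toy BIM index is used to rank candidate spaces that could explain what a camera sees. The label-to-IFC mapping and the BIM index are invented for illustration; the paper's actual semantic matching and position estimation are richer than this.

```python
# Toy semantic linking of object detections to BIM spaces.
from collections import Counter

# Hypothetical mapping from detector labels to IFC entity types.
LABEL_TO_IFC = {
    "door": "IfcDoor",
    "window": "IfcWindow",
    "stairs": "IfcStair",
    "extinguisher": "IfcFireSuppressionTerminal",
}

# Toy BIM index: IFC type -> spaces (rooms) that contain such elements.
BIM_INDEX = {
    "IfcDoor": ["Room_101", "Room_102", "Corridor_1"],
    "IfcWindow": ["Room_101", "Room_103"],
    "IfcStair": ["Stairwell_A"],
    "IfcFireSuppressionTerminal": ["Corridor_1", "Room_102"],
}

def candidate_spaces(detections):
    """Rank BIM spaces by how many detected object types they can explain."""
    votes = Counter()
    for label in detections:
        ifc_type = LABEL_TO_IFC.get(label)
        for space in BIM_INDEX.get(ifc_type, []):
            votes[space] += 1
    return votes.most_common()

# Example: a camera frame with three recognized objects.
print(candidate_spaces(["door", "window", "extinguisher"]))
```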

    Towards Odor-Sensitive Mobile Robots

    J. Monroy, J. Gonzalez-Jimenez, "Towards Odor-Sensitive Mobile Robots", Electronic Nose Technologies and Advances in Machine Olfaction, IGI Global, pp. 244-263, 2018, doi:10.4018/978-1-5225-3862-2.ch012. Preprint version, with the publisher's permission.
    Of all the components of a mobile robot, its sensorial system is undoubtedly among the most critical when operating in real environments. Until now, these sensorial systems have mostly relied on range sensors (laser scanners, sonar, active triangulation) and cameras. While electronic noses have barely been employed, they can provide complementary sensory information that is vital for some applications, as it is for humans. This chapter analyzes the motivation for providing a robot with gas-sensing capabilities and reviews some of the hurdles that prevent smell from achieving the importance of other sensing modalities in robotics. The achievements made so far are reviewed to illustrate the current status of the three main fields within robotic olfaction: the classification of volatile substances, the spatial estimation of gas dispersion from sparse measurements, and the localization of the gas source within a known environment.
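
    For the second of those fields, the sketch below shows the general idea behind kernel-based gas distribution mapping: sparse point readings are spread onto a grid by Gaussian-kernel weighted averaging. The sensor positions, readings, and kernel bandwidth are made up for illustration; the chapter surveys the field rather than prescribing this exact method.

```python
# Toy kernel-weighted gas distribution map from sparse measurements.
import numpy as np

# Sparse gas readings: (x, y) positions in meters and sensor responses (arbitrary units).
positions = np.array([[1.0, 1.0], [2.5, 3.0], [4.0, 1.5], [3.0, 4.5]])
readings = np.array([0.2, 0.9, 0.4, 0.6])
sigma = 1.0  # kernel bandwidth in meters (assumed)

# Regular grid over a 5 m x 5 m area.
xs, ys = np.meshgrid(np.linspace(0, 5, 50), np.linspace(0, 5, 50))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)

# Kernel-weighted average of the readings at each grid cell.
d2 = ((grid[:, None, :] - positions[None, :, :]) ** 2).sum(axis=2)
weights = np.exp(-d2 / (2.0 * sigma ** 2))
gas_map = (weights @ readings) / np.clip(weights.sum(axis=1), 1e-9, None)
gas_map = gas_map.reshape(xs.shape)

print("Peak estimated concentration:", gas_map.max())
```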

    Joint received signal strength, angle-of-arrival, and time-of-flight positioning

    This paper presents a software positioning framework that can jointly use measured values of three parameters: the received signal strength, the angle-of-arrival, and the time-of-flight of wireless signals. Based on experimentally determined measurement accuracies of these three parameters, results of a realistic simulation scenario are presented. It is shown that, for the given configuration, angle-of-arrival and received signal strength measurements benefit from a hybrid system that combines both. Thanks to their higher accuracy, time-of-flight systems perform significantly better and gain less added value from a combination with the other two parameters.
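
    One way to fuse such measurements, shown as a hedged sketch below, is a single nonlinear least-squares problem whose residuals combine RSS, angle-of-arrival, and time-of-flight terms from known anchors, each weighted by an assumed noise standard deviation. The anchor layout, noise levels, and the log-distance RSS model are illustrative assumptions, not the paper's simulation setup.

```python
# Toy joint RSS / AoA / ToF positioning via weighted nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

C = 3e8                       # speed of light, m/s
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 6.0])

# Assumed measurement models and noise standard deviations.
P0, n_p = -40.0, 2.0          # RSS at 1 m (dBm) and path-loss exponent
sigma_rss, sigma_aoa, sigma_tof = 4.0, np.deg2rad(5.0), 3e-9

rng = np.random.default_rng(1)
d_true = np.linalg.norm(anchors - true_pos, axis=1)
rss = P0 - 10 * n_p * np.log10(d_true) + rng.normal(0, sigma_rss, 4)
aoa = np.arctan2(true_pos[1] - anchors[:, 1], true_pos[0] - anchors[:, 0]) \
      + rng.normal(0, sigma_aoa, 4)
tof = d_true / C + rng.normal(0, sigma_tof, 4)

def residuals(p):
    d = np.linalg.norm(anchors - p, axis=1)
    r_rss = (rss - (P0 - 10 * n_p * np.log10(d))) / sigma_rss
    ang = np.arctan2(p[1] - anchors[:, 1], p[0] - anchors[:, 0])
    r_aoa = np.arctan2(np.sin(aoa - ang), np.cos(aoa - ang)) / sigma_aoa  # wrapped angle error
    r_tof = (tof - d / C) / sigma_tof
    return np.concatenate([r_rss, r_aoa, r_tof])

est = least_squares(residuals, x0=np.array([5.0, 5.0])).x
print("True:", true_pos, "Estimated:", est)
```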