
    Semantic information for robot navigation: a survey

    There is a growing trend in robotics towards implementing behavioural mechanisms based on human psychology, such as the processes associated with thinking. Semantic knowledge has opened new paths in robot navigation, allowing a higher level of abstraction in the representation of information. In contrast with the early years, when navigation relied on geometric navigators that interpreted the environment as a series of accessible areas, or with later developments that led to the use of graph theory, semantic information has moved robot navigation one step further. This work presents a survey of the concepts, methodologies and techniques that allow semantic information to be included in robot navigation systems. The techniques involved have to deal with a range of tasks, from modelling the environment and building a semantic map, to learning new concepts and representing the knowledge acquired, in many cases through interaction with users. As understanding the environment is essential to achieve high-level navigation, this paper reviews techniques for the acquisition of semantic information, paying attention to the two main groups: human-assisted and autonomous techniques. Some state-of-the-art semantic knowledge representations are also studied, including ontologies, cognitive maps and semantic maps. All of this leads to a recent concept, semantic navigation, which integrates the previous topics to generate high-level navigation systems able to deal with complex real-world situations. The research leading to these results has received funding from HEROITEA: Heterogeneous Intelligent Multi-Robot Team for Assistance of Elderly People (RTI2018-095599-B-C21), funded by the Spanish Ministerio de Economía y Competitividad. The research leading to this work was also supported by the project "Robots sociales para estimulación física, cognitiva y afectiva de mayores", funded by the Spanish State Research Agency under grant 2019/00428/001. It was also funded by WASP-AI Sweden and by the Spanish project Robotic-Based Well-Being Monitoring and Coaching for Elderly People during Daily Life Activities (RTI2018-095599-A-C22).

    Memorable Maps: A Framework for Re-defining Places in Visual Place Recognition

    This paper presents a cognition-inspired, agnostic framework for building a map for Visual Place Recognition. The framework draws inspiration from human memorability, utilizes the traditional image entropy concept and computes the static content in an image, thereby presenting a three-fold criterion to assess the 'memorability' of an image for visual place recognition. A dataset named 'ESSEX3IN1', composed of highly confusing images from indoor, outdoor and natural scenes, is created for analysis. When used in conjunction with state-of-the-art visual place recognition methods, the proposed framework provides a significant performance boost to these techniques, as evidenced by results on ESSEX3IN1 and other public datasets.
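
    The abstract names image entropy as one of the three memorability cues but does not give its exact formulation. The sketch below is a minimal illustration of a Shannon-entropy cue over a grayscale histogram; the function names and the 4-bit threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: Shannon entropy of a grayscale image histogram, used as
# one cue for deciding whether a frame is distinctive enough to keep in a
# VPR map. The 4.0-bit threshold is illustrative, not taken from the paper.
import numpy as np

def image_entropy(gray):
    """gray: 2-D uint8 array. Returns histogram entropy in bits."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

def is_memorable(gray, entropy_threshold=4.0):
    # A frame with very low entropy (e.g. a blank wall) carries little
    # distinctive content and is a poor candidate for place recognition.
    return image_entropy(gray) >= entropy_threshold
```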

    On Consistent Mapping in Distributed Environments using Mobile Sensors

    The problem of robotic mapping, also known as simultaneous localization and mapping (SLAM), performed by a mobile agent in large distributed environments is addressed in this dissertation. This has sometimes been referred to as the holy grail in the robotics community, and is a stepping stone towards making a robot completely autonomous. A hybrid solution to the SLAM problem is proposed based on the "first localize, then map" principle. It is provably consistent and has great potential for real-time application. It provides significant improvements over state-of-the-art Bayesian approaches by reducing the computational complexity of the SLAM problem without sacrificing consistency. The localization is achieved using a feature-based extended Kalman filter (EKF) which utilizes a sparse set of reliable features. The common issues of data association, loop closure and computational cost of EKF-based methods are kept tractable owing to the sparsity of the feature set. A novel frequentist mapping technique is proposed for estimating the dense part of the environment using the sensor observations. Given the pose estimate of the robot, this technique can consistently map the surrounding environment. The technique has linear time complexity in the number of map components and, for the case of bounded sensor noise, it is shown to have constant time complexity, which makes it capable of estimating large distributed environments in real time. The frequentist mapping technique is a stochastic approximation algorithm and is shown to converge to the true map probabilities almost surely. The Hybrid SLAM software is developed in the C language and is capable of handling real experimental data as well as simulations. The Hybrid SLAM technique is shown to perform well in simulations, in experiments with an iRobot Create, and on standard datasets from the Robotics Data Set Repository, known as Radish. It is demonstrated that the Hybrid SLAM technique can successfully map large complex data sets in an order of magnitude less time than the time taken by the robot to acquire the data. It has low system requirements and has the potential to run on board a robot to estimate large distributed environments in real time.
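
    The abstract describes the frequentist map estimator only as a stochastic approximation that converges to the true map probabilities. As a minimal sketch of that flavour of update, the snippet below uses Robbins-Monro averaging of binary hit observations per grid cell; the grid layout, step size and observation model are assumptions for illustration, not the dissertation's exact estimator.

```python
# Minimal sketch of a stochastic-approximation style occupancy update,
# in the spirit of the frequentist mapping described above.
import numpy as np

class FrequentistGrid:
    def __init__(self, shape):
        self.p = np.full(shape, 0.5)          # running occupancy estimate
        self.n = np.zeros(shape, dtype=int)   # per-cell observation count

    def update(self, cell, hit):
        """cell: (row, col) index; hit: True if the beam ended in this cell."""
        y = 1.0 if hit else 0.0
        k = self.n[cell]
        # Robbins-Monro averaging: converges to the empirical hit frequency,
        # and each update touches only one cell, keeping the cost per
        # observation constant.
        self.p[cell] += (y - self.p[cell]) / (k + 1)
        self.n[cell] = k + 1
```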

    Cartographie hybride pour des environnements de grande taille (Hybrid mapping for large-scale environments)

    In this thesis, a novel vision-based hybrid mapping framework which exploits metric, topological and semantic information is presented. We aim to obtain better computational efficiency than purely metric mapping techniques, and better accuracy and usability for robot guidance than topological mapping. A crucial step of any mapping system is loop closure detection, the ability to recognise when the robot is revisiting a previously mapped area. Therefore, we first propose a hierarchical loop closure detection framework which also constructs the global topological structure of our hybrid map. Using this loop closure detection module, a hybrid mapping framework is proposed in two steps. The first step can be understood as a topo-metric map with nodes corresponding to certain regions in the environment. Each node in turn is made up of a set of images acquired in that region. These maps are further augmented with metric information at those nodes which correspond to image sub-sequences acquired while the robot is revisiting a previously mapped area. The second step augments this model by using road semantics. A Conditional Random Field (CRF) based classification on the metric reconstruction is used to semantically label the local robot path (the road in our case) as straight, curved or a junction. Metric information of regions with curved roads and junctions is retained, while that of other regions is discarded in the final map. Loop closure is performed only at junctions, thereby increasing both the efficiency and the accuracy of the map. By incorporating all of these new algorithms, the hybrid framework presented can perform as a robust, scalable SLAM approach, or act as the main part of a navigation tool which could be used on a mobile robot or an autonomous car in outdoor urban environments. Experimental results obtained on public datasets acquired in challenging urban environments are provided to demonstrate our approach.
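
    To make the pruning idea concrete, here is a minimal sketch of the hybrid map structure suggested by the abstract: topological nodes holding images, with metric reconstructions kept only where the road is labelled curved or a junction, and loop-closure candidates restricted to junctions. Class and field names are illustrative assumptions, not the thesis's data structures.

```python
# Minimal sketch of a topo-metric map with road-semantic pruning.
from dataclasses import dataclass, field

@dataclass
class MapNode:
    node_id: int
    images: list = field(default_factory=list)   # key-frames for this region
    road_label: str = "straight"                  # "straight" | "curved" | "junction"
    metric_model: object = None                   # local 3-D reconstruction, if kept

def prune_metric(nodes):
    # Keep metric data only at curved roads and junctions.
    for n in nodes:
        if n.road_label == "straight":
            n.metric_model = None
    return nodes

def loop_closure_candidates(nodes):
    # Loop closure is attempted only at junction nodes.
    return [n for n in nodes if n.road_label == "junction"]
```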

    Contributions to Localization, Mapping and Navigation in Mobile Robotics

    This thesis focuses on the problem of enabling mobile robots to autonomously build world models of their environments and to employ them as a reference for self-localization and navigation. For mobile robots to become truly autonomous and useful, they must be able to move reliably to the locations required by their tasks. This simple requirement gives rise to countless problems that have populated research in the mobile robotics community for the last two decades. Among these issues, two of the most relevant are: (i) secure autonomous navigation, that is, moving to a target while avoiding collisions, and (ii) the employment of an adequate world model for robot self-referencing within the environment and also for locating places of interest. The present thesis introduces several contributions to both research fields. Among the contributions of this thesis is a novel approach to extend SLAM to large-scale scenarios by means of a seamless integration of geometric and topological map building in a probabilistic framework that estimates the hybrid metric-topological (HMT) state space of the robot path. The proposed framework unifies the research areas of topological mapping, reasoning on topological maps and metric SLAM, also providing a natural integration of SLAM and the “robot awakening” problem. Other contributions of this thesis cover a wide variety of topics, such as optimal estimation in particle filters, a new probabilistic observation model for laser scanners based on consensus theory, a novel measure of the uncertainty in grid mapping, an efficient method for range-only SLAM, a grounded method for partitioning large maps into submaps, a multi-hypotheses approach to grid map matching, and a mathematical framework for extending simple obstacle avoidance methods to realistic robots.
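
    The hybrid metric-topological idea can be pictured as a graph of areas, each carrying a local metric submap, so that large-scale reasoning stays topological while metric detail is kept locally. The sketch below is only an illustrative data structure under that assumption; the names and fields are not taken from the thesis.

```python
# Minimal sketch of a hybrid metric-topological (HMT) map representation.
from dataclasses import dataclass, field

@dataclass
class Area:
    area_id: int
    submap: dict = field(default_factory=dict)    # local landmark id -> (x, y)
    neighbours: set = field(default_factory=set)  # ids of connected areas

@dataclass
class HMTMap:
    areas: dict = field(default_factory=dict)

    def add_area(self, area_id):
        self.areas[area_id] = Area(area_id)

    def connect(self, a, b):
        # Topological edge between two areas; a full system would also
        # store the relative metric transform along this edge.
        self.areas[a].neighbours.add(b)
        self.areas[b].neighbours.add(a)
```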

    VPR-Bench: An Open-Source Visual Place Recognition Evaluation Framework with Quantifiable Viewpoint and Appearance Change

    Visual place recognition (VPR) is the process of recognising a previously visited place using visual information, often under varying appearance conditions and viewpoint changes and with computational constraints. VPR is related to the concepts of localisation, loop closure and image retrieval, and is a critical component of many autonomous navigation systems ranging from autonomous vehicles to drones and computer vision systems. While the concept of place recognition has been around for many years, VPR research has grown rapidly as a field over the past decade due to improving camera hardware and the potential of deep learning-based techniques, and has become a widely studied topic in both the computer vision and robotics communities. This growth, however, has led to fragmentation and a lack of standardisation in the field, especially concerning performance evaluation. Moreover, the notion of viewpoint and illumination invariance of VPR techniques has largely been assessed qualitatively, and hence ambiguously, in the past. In this paper, we address these gaps through a new comprehensive open-source framework for assessing the performance of VPR techniques, dubbed “VPR-Bench”. VPR-Bench (open-sourced at: https://github.com/MubarizZaffar/VPR-Bench) introduces two much-needed capabilities for VPR researchers: firstly, it contains a benchmark of 12 fully-integrated datasets and 10 VPR techniques, and secondly, it integrates a comprehensive variation-quantified dataset for quantifying viewpoint and illumination invariance. We apply and analyse popular evaluation metrics for VPR from both the computer vision and robotics communities, and discuss how these different metrics complement and/or replace each other, depending upon the underlying applications and system requirements. Our analysis reveals that no universal state-of-the-art (SOTA) VPR technique exists, since: (a) SOTA performance is achieved by 8 out of the 10 techniques on at least one dataset, and (b) the SOTA technique in one community does not necessarily yield SOTA performance in the other, given the differences in datasets and metrics. Furthermore, we identify key open challenges, since: (c) all 10 techniques suffer greatly in perceptually-aliased and less-structured environments, (d) all techniques suffer from viewpoint variance, where lateral change has less effect than 3D change, and (e) directional illumination change has more adverse effects on matching confidence than uniform illumination change. We also present detailed meta-analyses regarding the roles of varying ground-truths, platforms, application requirements and technique parameters. Finally, VPR-Bench provides a unified implementation to deploy these VPR techniques, metrics and datasets, and is extensible through templates.
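
    Among the evaluation metrics discussed for VPR, precision-recall curves obtained by sweeping a matching-confidence threshold are a common choice. The snippet below is a generic illustration of that computation, not the VPR-Bench implementation; the input format (per-query confidence scores plus ground-truth correctness flags) is an assumption.

```python
# Minimal sketch of a precision-recall sweep over VPR matching scores.
import numpy as np

def precision_recall(scores, correct):
    """scores: matching confidence per query; correct: boolean array saying
    whether the top retrieved reference was a true match for that query."""
    scores = np.asarray(scores, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    order = np.argsort(-scores)                  # sweep threshold from high to low
    correct = correct[order]
    tp = np.cumsum(correct)                      # true positives accepted so far
    fp = np.cumsum(~correct)                     # false positives accepted so far
    precision = tp / (tp + fp)
    recall = tp / max(int(correct.sum()), 1)     # guard against zero true matches
    return precision, recall
```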