    Efficiently learning metric and topological maps with autonomous service robots

    Models of the environment are needed for a wide range of robotic applications, from search and rescue to automated vacuum cleaning. Learning maps has therefore been a major research focus in the robotics community over the last decades. In general, one distinguishes between metric and topological maps: metric maps model the environment based on grids or geometric representations, whereas topological maps model the structure of the environment using a graph. The contribution of this paper is an approach that learns both a metric and a topological map based on laser range data obtained with a mobile robot. Our approach consists of two steps. First, the robot solves the simultaneous localization and mapping problem using an efficient probabilistic filtering technique. Second, it acquires semantic information about the environment using machine learning techniques. This semantic information allows the robot to distinguish between different types of places, such as corridors or rooms, and thus to construct annotated metric as well as topological maps of the environment. All techniques have been implemented and thoroughly tested with a real mobile robot in a variety of environments.
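
    The two-step pipeline is easiest to picture in code. Below is a minimal sketch of the metric half: a log-odds occupancy grid updated from laser range readings taken at known poses, i.e., after the SLAM step has been solved. The grid size, resolution, and update weights are illustrative assumptions, not values from the paper.

```python
import numpy as np

def bresenham(x0, y0, x1, y1):
    """Integer grid cells on the line from (x0, y0) to (x1, y1)."""
    cells = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err = dx - dy
    x, y = x0, y0
    while True:
        cells.append((x, y))
        if (x, y) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return cells

class OccupancyGrid:
    """Log-odds occupancy grid updated from 2D laser range readings.
    Poses are assumed to lie in the grid's positive quadrant."""
    def __init__(self, size=200, resolution=0.05):
        self.logodds = np.zeros((size, size))
        self.res = resolution
        self.l_occ, self.l_free = 0.85, -0.4   # illustrative update weights

    def integrate_scan(self, pose, ranges, angles):
        px, py, ptheta = pose
        cx, cy = int(px / self.res), int(py / self.res)
        for r, a in zip(ranges, angles):
            ex = int((px + r * np.cos(ptheta + a)) / self.res)
            ey = int((py + r * np.sin(ptheta + a)) / self.res)
            ray = bresenham(cx, cy, ex, ey)
            for (x, y) in ray[:-1]:   # cells the beam passed through: more free
                self.logodds[x, y] += self.l_free
            hx, hy = ray[-1]          # cell at the measured range: more occupied
            self.logodds[hx, hy] += self.l_occ
```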

    Semantic labeling of places

    Indoor environments can typically be divided into places with different functionalities, such as corridors, kitchens, offices, or seminar rooms. We believe that such semantic information enables a mobile robot to accomplish a variety of tasks more efficiently, including human-robot interaction, path planning, and localization. In this paper, we propose an approach to classify places in indoor environments into different categories. Our approach uses AdaBoost to boost simple features extracted from vision and laser range data. Furthermore, we apply a hidden Markov model to take spatial dependencies between robot poses into account and to increase the robustness of the classification. Our technique has been implemented and tested on real robots as well as in simulation. Experiments presented in this paper demonstrate that our approach can be utilized to robustly classify places into semantic categories.
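
    As a rough sketch of how the two stages fit together: boost decision stumps over per-scan features, then smooth the per-frame class probabilities with a "sticky" HMM decoded by Viterbi. The placeholder features, the three-class toy data, the transition probability, and the use of scikit-learn's AdaBoostClassifier (the `estimator` argument assumes scikit-learn >= 1.2) are all assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Boost decision stumps over simple per-scan features (e.g., mean range,
# range variance, perimeter of the scan polygon); features are placeholders.
clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                         n_estimators=100)

def viterbi_smooth(frame_probs, stay=0.9):
    """Smooth per-frame class probabilities with a 'sticky' HMM: the place
    label rarely changes between consecutive robot poses."""
    frame_probs = np.clip(frame_probs, 1e-12, 1.0)
    n, k = frame_probs.shape
    trans = np.full((k, k), (1.0 - stay) / (k - 1))
    np.fill_diagonal(trans, stay)
    log_delta = np.log(frame_probs[0] / k)           # uniform prior over labels
    back = np.zeros((n, k), dtype=int)
    for t in range(1, n):
        scores = log_delta[:, None] + np.log(trans)  # scores[i, j]: prev i -> cur j
        back[t] = scores.argmax(axis=0)
        log_delta = scores.max(axis=0) + np.log(frame_probs[t])
    path = [int(log_delta.argmax())]                 # backtrack the best sequence
    for t in range(n - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 10)), rng.integers(0, 3, size=200)
clf.fit(X, y)                                        # toy data, 3 place classes
smoothed = viterbi_smooth(clf.predict_proba(X[:50]))
```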

    Efficient exploration of unknown indoor environments using a team of mobile robots

    Whenever multiple robots have to solve a common task, they need to coordinate their actions to carry out the task efficiently and to avoid interference between individual robots. This is especially the case when exploring an unknown environment with a team of mobile robots. To achieve efficient terrain coverage with the robots' sensors, one first needs to identify unknown areas in the environment. Second, one has to assign target locations to the individual robots so that they gather new and relevant information about the environment with their sensors. This assignment should distribute the robots over the environment such that they avoid redundant work and do not interfere with each other by, for example, blocking each other's paths. In this paper, we address the problem of efficiently coordinating a large team of mobile robots. To better distribute the robots over the environment and to avoid redundant work, we take into account the type of place a potential target is located in (e.g., a corridor or a room). This knowledge allows us to improve the distribution of robots over the environment compared to approaches lacking this capability. To autonomously determine the type of a place, we apply a classifier learned with the AdaBoost algorithm. The resulting classifier takes laser range data as input and classifies the current location with high accuracy. We additionally use a hidden Markov model to consider the spatial dependencies between nearby locations. Our approach to incorporating information about the type of places into the assignment process has been implemented and tested in different environments. The experiments illustrate that our system effectively distributes the robots over the environment and allows them to accomplish their mission faster than approaches that ignore the place labels.
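
    A hedged sketch of how place labels might enter the assignment step: travel costs to candidate targets are discounted when a target lies in a corridor, so corridors (which tend to open up more unexplored area) are claimed first. The greedy pairing, the discount factor, and the callback interfaces are assumptions, not the paper's exact cost function.

```python
def assign_targets(robots, targets, travel_cost, place_label, corridor_discount=0.8):
    """Greedy assignment: repeatedly commit the cheapest unassigned
    (robot, target) pair; corridor targets get their cost discounted."""
    assignment, free_robots, free_targets = {}, set(robots), set(targets)
    while free_robots and free_targets:
        r, t = min(
            ((r, t) for r in free_robots for t in free_targets),
            key=lambda rt: travel_cost(rt[0], rt[1])
            * (corridor_discount if place_label(rt[1]) == "corridor" else 1.0),
        )
        assignment[r] = t
        free_robots.remove(r)
        free_targets.remove(t)
    return assignment

# Toy usage: two robots and three frontier targets on a line.
positions = {"r1": 0.0, "r2": 10.0}
frontiers = {"t1": 2.0, "t2": 6.0, "t3": 9.0}
labels = {"t1": "room", "t2": "corridor", "t3": "room"}
plan = assign_targets(positions, frontiers,
                      travel_cost=lambda r, t: abs(positions[r] - frontiers[t]),
                      place_label=lambda t: labels[t])
```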

    Self-Motivated, Task-Independent Reinforcement Learning for Robots

    This paper describes a method for designing robots to learn self-motivated behaviors rather than externally specified behaviors. Self-motivation is viewed as an emergent property arising from two competing pressures: the need to accurately predict the environment while simultaneously seeking out novelty in the environment. The robot's internal prediction error is used to generate a reinforcement signal that pushes the robot to focus on areas of high error or novelty. A set of experiments is performed on a simulated robot to demonstrate the feasibility of this approach. The simulated robot is based directly on an existing platform and uses pixelated blob vision as its primary sensor.
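
    The core loop is compact enough to sketch. Below, a linear forward model predicts the next sensor vector, and the norm of its prediction error is used directly as the reinforcement signal. The toy dynamics stand in for the simulated robot, and the linear predictor, dimensions, and learning rate are assumptions rather than the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_environment(sensors, action):
    """Stand-in dynamics for the simulated robot's sensor response."""
    return np.tanh(sensors + 0.5 * action.sum()) + 0.01 * rng.normal(size=sensors.shape)

class SelfMotivatedAgent:
    """Intrinsic reward = norm of the forward model's prediction error."""
    def __init__(self, sensor_dim=8, action_dim=2, lr=0.05):
        # A linear predictor of the next sensor vector from (sensors, action).
        self.W = rng.normal(scale=0.01, size=(sensor_dim, sensor_dim + action_dim))
        self.lr = lr

    def step(self, sensors, action):
        x = np.concatenate([sensors, action])
        prediction = self.W @ x
        next_sensors = toy_environment(sensors, action)
        error = next_sensors - prediction
        self.W += self.lr * np.outer(error, x)   # online update of the predictor
        reward = float(np.linalg.norm(error))    # high error / novelty => reward
        return next_sensors, reward

agent = SelfMotivatedAgent()
s = rng.normal(size=8)
for _ in range(5):
    a = rng.normal(size=2)    # a real agent would pick actions to maximize reward
    s, r = agent.step(s, a)
```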

    Inference In The Space Of Topological Maps: An MCMC-based Approach

    Presented at the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 28 September-2 October 2004, Sendai, Japan. DOI: 10.1109/IROS.2004.1389611
    While probabilistic techniques have been considered extensively in the context of metric maps, no general-purpose probabilistic methods exist for topological maps. We present the concept of Probabilistic Topological Maps (PTMs), a sample-based representation that approximates the posterior distribution over topologies given the available sensor measurements. The PTM is obtained through MCMC-based Bayesian inference over the space of all possible topologies. We show that the space of all topologies is equivalent to the space of set partitions of all available measurements. While the space of possible topologies is intractably large, our use of Markov chain Monte Carlo sampling to infer approximate histograms overcomes the combinatorial nature of this space and provides a general solution to the correspondence problem in the context of topological mapping. We present experimental results that validate our technique and show that it generates good maps even when using only odometry as the sensor measurement.
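
    Because topologies correspond to set partitions of the measurements, a sampler over partitions is enough to convey the idea. The sketch below runs a Metropolis chain whose move reassigns one measurement to an existing block or to a fresh singleton (this particular move is symmetric, so no Hastings correction is needed); the similarity-based score is a stand-in for the real posterior over topologies, which is an assumption of this sketch.

```python
import math, random

def partition_score(partition, similarity):
    """Stand-in log-posterior: rewards grouping similar measurements.
    A real PTM would score a topology against odometry/appearance."""
    s = 0.0
    for block in partition:
        for i in block:
            for j in block:
                if i < j:
                    s += similarity[i][j]
    return s

def mcmc_partitions(n, similarity, iters=10000, seed=0):
    """Metropolis sampling over set partitions of n measurements."""
    rnd = random.Random(seed)
    partition = [{i} for i in range(n)]            # start: all singletons
    counts = {}
    score = partition_score(partition, similarity)
    for _ in range(iters):
        i = rnd.randrange(n)
        proposal = [set(b) - {i} for b in partition]
        proposal = [b for b in proposal if b]
        k = rnd.randrange(len(proposal) + 1)       # existing block or a new one
        if k == len(proposal):
            proposal.append({i})
        else:
            proposal[k].add(i)
        new_score = partition_score(proposal, similarity)
        if new_score >= score or rnd.random() < math.exp(new_score - score):
            partition, score = proposal, new_score
        key = frozenset(frozenset(b) for b in partition)
        counts[key] = counts.get(key, 0) + 1       # histogram over topologies
    return counts

sim = [[0, 2, -1], [2, 0, -1], [-1, -1, 0]]        # toy pairwise affinities
hist = mcmc_partitions(3, sim, iters=2000)
best = max(hist, key=hist.get)                     # most-visited partition
```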

    Lifelong topological visual navigation

    The ability for a robot to navigate using vision only is appealing due to its simplicity. Traditional vision-based navigation approaches require a prior map-building step that is arduous and prone to failure, or can only exactly follow previously executed trajectories. Newer learning-based visual navigation techniques reduce the reliance on a map and instead learn navigation policies directly from image inputs. There are currently two prevalent paradigms: end-to-end approaches, which forego an explicit map representation entirely, and topological approaches, which still preserve some loose connectivity of the space. However, while end-to-end methods tend to struggle in long-distance navigation tasks, topological map-based solutions are prone to failure due to spurious edges in the graph. In this work, we propose a learning-based topological visual navigation method with graph update strategies that improves lifelong navigation performance over time. We take inspiration from sampling-based planning algorithms to build image-based topological graphs, resulting in sparser graphs with higher navigation performance compared to baseline methods. Also, unlike controllers that learn from fixed training environments, we show that our model can be fine-tuned using a relatively small dataset from the real-world environment where the robot is deployed. Finally, we demonstrate strong system performance in real-world robot navigation experiments.
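
    A minimal sketch of the lifelong-maintenance idea: nodes are added only when no existing node is nearby in image-embedding space (echoing the sparsity of sampling-based planners), and edge confidences are blended with traversal outcomes so that spurious edges eventually fall below a pruning threshold. All thresholds, the embedding distance, and the update rule are illustrative assumptions, not the paper's method.

```python
import numpy as np

class TopoGraph:
    """Image-node topological graph with lifelong edge maintenance."""
    def __init__(self, node_dist=1.0, prune_below=0.2):
        self.nodes = {}                  # node id -> image embedding
        self.edges = {}                  # (u, v)  -> traversal confidence
        self.node_dist = node_dist
        self.prune_below = prune_below

    def maybe_add_node(self, embedding):
        """Only add a node if no existing node is close in feature space,
        which keeps the graph sparse."""
        for nid, e in self.nodes.items():
            if np.linalg.norm(e - embedding) < self.node_dist:
                return nid               # reuse the nearby node instead
        nid = len(self.nodes)
        self.nodes[nid] = embedding
        return nid

    def add_edge(self, u, v, confidence=0.5):
        self.edges[(u, v)] = confidence

    def report_traversal(self, u, v, success, step=0.2):
        """Blend the outcome of an attempted traversal into the edge score;
        edges that keep failing drop below the threshold and are pruned."""
        c = self.edges.get((u, v))
        if c is None:
            return
        c += step * ((1.0 if success else 0.0) - c)
        if c < self.prune_below:
            del self.edges[(u, v)]       # spurious edge removed for good
        else:
            self.edges[(u, v)] = c
```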

    Data Driven MCMC for Appearance-based Topological Mapping

    Probabilistic techniques have become the mainstay of robotic mapping, particularly for generating metric maps. In previous work, we presented a general-purpose probabilistic framework for topological mapping, which had hitherto been lacking. It involves the creation of Probabilistic Topological Maps (PTMs), a sample-based representation that approximates the posterior distribution over topologies given the available sensor measurements. The PTM is inferred using Markov chain Monte Carlo (MCMC) sampling, which overcomes the combinatorial nature of the problem. In this paper, we address the problem of integrating appearance measurements into the PTM framework. Specifically, we consider appearance measurements in the form of panoramic images obtained from a camera rig mounted on a robot. We also propose improvements to the efficiency of the MCMC algorithm through the use of an intelligent data-driven proposal distribution. We present experiments that illustrate the robustness and wide applicability of our algorithm.
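
    Relative to the uniform reassignment move sketched for the PTM entry above, a data-driven proposal could weight candidate blocks by the appearance similarity between the moved measurement and each block's members, so that merges of similar-looking places are proposed more often. This is an assumption-level sketch, not the paper's proposal: it assumes nonnegative similarities, and since the move is no longer symmetric, a full implementation must include the Hastings ratio in the acceptance step, which is omitted here.

```python
import random

def data_driven_move(partition, similarity, rnd):
    """Propose reassigning one measurement, preferring blocks whose members
    look similar to it (appearance-weighted rather than uniform)."""
    n = sum(len(b) for b in partition)
    i = rnd.randrange(n)
    blocks = [set(b) - {i} for b in partition]
    blocks = [b for b in blocks if b]
    # Average appearance similarity of element i to each candidate block.
    weights = [sum(similarity[i][j] for j in b) / len(b) for b in blocks]
    weights.append(min(weights, default=1.0))   # weight for opening a new block
    r, acc = rnd.random() * sum(weights), 0.0
    for k, w in enumerate(weights):
        acc += w
        if r <= acc:
            break
    if k == len(blocks):
        blocks.append({i})
    else:
        blocks[k].add(i)
    return blocks
```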

    Towards Robust Place Recognition for Robot Localization

    Localization and context interpretation are two key competences for mobile robot systems. Visual place recognition, as opposed to purely geometrical models, holds the promise of higher flexibility and of associating semantics with the model. Ideally, a place recognition algorithm should be robust to dynamic changes and should perform consistently when recognizing a room type (for instance a corridor) in different geographical locations. It should also be able to categorize places, a crucial capability for transfer of knowledge and continuous learning. In order to test the suitability of visual recognition algorithms for these tasks, this paper presents a new database, acquired in three different labs across Europe. It contains image sequences of several rooms under dynamic changes, acquired simultaneously with a perspective and an omnidirectional camera mounted on a socket. We assess this new database with an appearance-based algorithm that combines local features with support vector machines through an ad-hoc kernel. Results show the effectiveness of the approach and the value of the database.
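
    To make the classification scheme concrete, here is a hedged sketch of a local-feature match kernel fed to an SVM via scikit-learn's precomputed-kernel interface. The best-match-and-average kernel below is a common stand-in, not the paper's purpose-built kernel, and it is not guaranteed to be positive semidefinite; the descriptors are random placeholders for real local features.

```python
import numpy as np
from sklearn.svm import SVC

def match_kernel(A, B):
    """Similarity between two images represented as sets of local descriptors:
    for each descriptor in one image, take its best cosine similarity in the
    other, average, then symmetrize."""
    def one_way(da, db):
        sims = (da @ db.T) / (
            np.linalg.norm(da, axis=1)[:, None] * np.linalg.norm(db, axis=1)[None, :]
        )
        return sims.max(axis=1).mean()
    K = np.zeros((len(A), len(B)))
    for i, da in enumerate(A):
        for j, db in enumerate(B):
            K[i, j] = 0.5 * (one_way(da, db) + one_way(db, da))
    return K

# Each image is an (n_i x d) array of local descriptors; random placeholders here.
train = [np.random.default_rng(s).normal(size=(20, 16)) for s in range(6)]
labels = [0, 0, 1, 1, 2, 2]                          # three place categories
svm = SVC(kernel="precomputed").fit(match_kernel(train, train), labels)
test = [np.random.default_rng(99).normal(size=(20, 16))]
pred = svm.predict(match_kernel(test, train))        # (n_test x n_train) kernel
```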