59 research outputs found

    Visual based localization of a legged robot with a topological representation

    In this chapter we have presented the performance of a localization method for legged AIBO robots in non-engineered environments, using vision as an active input sensor. The method is based on the classic Markovian approach, but it has not previously been used with legged robots in indoor office environments. We have shown that the robot is able to localize itself in real time even in environments with noise produced by human activity in a real office. It deals with uncertainty in its actions and uses perceived natural landmarks of the environment as its main sensor input.
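    As a minimal sketch of the Markov localization idea only: the snippet below runs one predict/correct cycle of a discrete Bayes filter over topological nodes. The transition and observation values are invented placeholders for illustration, not the models of this work.

```python
# Minimal sketch of one predict/correct cycle of discrete Markov localization
# over topological nodes. The transition and observation values are invented
# placeholders, not the models used in this work.
import numpy as np

def markov_localization_step(belief, transition, likelihood):
    """One Bayes-filter step: belief is P(state), transition[i, j] is
    P(next=j | current=i, action), likelihood is P(observation | state)."""
    predicted = belief @ transition        # prediction: propagate belief through the motion model
    corrected = predicted * likelihood     # correction: weight by the observation model
    return corrected / corrected.sum()     # renormalise to keep a probability distribution

# Example with three topological nodes (two corridor segments and an office).
belief = np.array([1/3, 1/3, 1/3])                    # start fully uncertain
transition = np.array([[0.7, 0.3, 0.0],               # hypothetical motion model for "move forward"
                       [0.2, 0.6, 0.2],
                       [0.0, 0.3, 0.7]])
likelihood = np.array([0.1, 0.2, 0.9])                # a landmark typical of the office was perceived
belief = markov_localization_step(belief, transition, likelihood)
print(belief)                                         # probability mass shifts toward the office node
```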

    Targeting a Practical Approach for Robot Vision with Ensembles of Visual Features

    We approach the task of topological localization in mobile robotics without using the temporal continuity of image sequences. The available information about the environment is contained in images taken with a perspective colour camera mounted on a robot platform. The main contributions of this work are quantifiable examinations of a wide variety of global and local invariant features and of different distance measures, with the aim of finding the optimal feature set. The characteristics of the different features were analysed in depth using widely known dissimilarity measures and graphical views of their overall performance. The quality of the resulting configurations is also tested in the localization stage by means of location recognition in the Robot Vision task, through participation in the ImageCLEF International Evaluation Campaign. The long-term goal of this project is to develop integrated, stand-alone capabilities for real-time topological localization under varying illumination conditions and over longer routes.
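    Purely as an illustration of the appearance-based matching idea, the sketch below recognises a location by comparing a global colour histogram of a query image against stored reference histograms with a chi-square dissimilarity; the feature sets and distance measures actually evaluated in the paper are broader than this single pairing.

```python
# Illustrative sketch of location recognition with one global feature (an RGB
# histogram) and one dissimilarity measure (chi-square); assumptions only, not
# the configurations evaluated in the paper.
import numpy as np

def colour_histogram(image, bins=8):
    """Global RGB histogram, normalised to sum to one (image: H x W x 3, uint8)."""
    hist, _ = np.histogramdd(image.reshape(-1, 3), bins=(bins,) * 3, range=[(0, 256)] * 3)
    hist = hist.ravel()
    return hist / hist.sum()

def chi_square(p, q, eps=1e-10):
    """Chi-square dissimilarity between two normalised histograms."""
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

def recognise_location(query_image, reference_db):
    """Return the label of the reference histogram with the smallest dissimilarity."""
    q = colour_histogram(query_image)
    return min(reference_db, key=lambda label: chi_square(q, reference_db[label]))

# Usage with synthetic images standing in for camera frames.
rng = np.random.default_rng(0)
db = {"corridor": colour_histogram(rng.integers(0, 256, (120, 160, 3), dtype=np.uint8)),
      "office":   colour_histogram(rng.integers(0, 256, (120, 160, 3), dtype=np.uint8))}
frame = rng.integers(0, 256, (120, 160, 3), dtype=np.uint8)
print(recognise_location(frame, db))
```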

    A Real-Time Robust SLAM for Large-Scale Outdoor Environments

    The problem of simultaneous localization and mapping (SLAM) remains a challenging issue in large-scale, unstructured, dynamic environments. In this paper, we introduce a real-time, reliable SLAM solution capable of closing the loop using laser data exclusively. In our algorithm, a universal motion model provides the initial pose estimate. To further refine the robot pose, we propose a novel progressive refining strategy using a pyramid grid map within a maximum-likelihood mapping framework. We demonstrate the success of our algorithm experimentally by building, in real time, a consistent map along a 1.2 km loop trajectory (an area of about 100,000 m²) in an increasingly unstructured outdoor environment with people and other clutter.
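    By way of illustration only, the sketch below refines a pose estimate against an occupancy grid in a coarse-to-fine manner, which is the general spirit of a pyramid grid-map matcher; the grid resolutions, search windows, and scoring function are assumptions made for the example, not the algorithm of the paper.

```python
# Coarse-to-fine pose refinement against an occupancy grid (illustrative sketch).
import numpy as np

def score(grid, resolution, scan_xy, pose):
    """Sum of occupancy values hit by the scan endpoints transformed by pose."""
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    pts = scan_xy @ np.array([[c, s], [-s, c]]) + [x, y]          # robot frame -> world frame
    idx = np.floor(pts / resolution).astype(int)
    ok = (idx[:, 0] >= 0) & (idx[:, 0] < grid.shape[1]) & \
         (idx[:, 1] >= 0) & (idx[:, 1] < grid.shape[0])
    return grid[idx[ok, 1], idx[ok, 0]].sum()

def refine(grid, resolution, scan_xy, pose, window, step):
    """Exhaustive search for the best (x, y, theta) offset within a window."""
    best, best_s = pose, -np.inf
    for dx in np.arange(-window, window + 1e-9, step):
        for dy in np.arange(-window, window + 1e-9, step):
            for dth in np.arange(-0.05, 0.051, 0.025):
                cand = (pose[0] + dx, pose[1] + dy, pose[2] + dth)
                s = score(grid, resolution, scan_xy, cand)
                if s > best_s:
                    best, best_s = cand, s
    return best

def pyramid_match(fine_grid, resolution, scan_xy, initial_pose):
    """Match on a half-resolution (max-pooled) grid first, then on the full grid."""
    coarse = np.maximum.reduceat(
        np.maximum.reduceat(fine_grid, np.arange(0, fine_grid.shape[0], 2), axis=0),
        np.arange(0, fine_grid.shape[1], 2), axis=1)
    pose = refine(coarse, 2 * resolution, scan_xy, initial_pose, window=1.0, step=0.2)
    return refine(fine_grid, resolution, scan_xy, pose, window=0.2, step=0.05)

# Usage with a toy 20 m x 20 m map at 0.1 m resolution and a synthetic scan of a wall.
grid = np.zeros((200, 200)); grid[100, 80:120] = 1.0
scan = np.column_stack([np.linspace(-2.0, 2.0, 40), np.full(40, 5.0)])   # wall seen 5 m ahead
print(pyramid_match(grid, 0.1, scan, (10.0, 5.0, 0.0)))                  # pose near the initial guess
```

    Max-pooling the grid when building the coarse level keeps occupied cells visible at the lower resolution, so a match found coarsely is not lost when refining on the full-resolution grid.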

    Semantic labeling of places using information extracted from laser and vision sensor data

    Indoor environments can typically be divided into places with different functionalities, such as corridors, kitchens, offices, or seminar rooms. The ability to learn such semantic categories from sensor data enables a mobile robot to extend its representation of the environment, facilitating interaction with humans. For example, natural language terms like corridor or room can be used to communicate the position of the robot in a map in a more intuitive way. In this work, we first propose an approach based on supervised learning to classify the pose of a mobile robot into semantic classes. Our method uses AdaBoost to boost simple features extracted from range data and vision into a strong classifier. We present two main applications of this approach. First, we show how our approach can be used by a moving robot for online classification of the poses traversed along its path using a hidden Markov model. Second, we introduce an approach to learn topological maps from geometric maps by applying our semantic classification procedure in combination with a probabilistic relaxation procedure. We finally show how to apply associative Markov networks (AMNs) together with AdaBoost for classifying complete geometric maps. Experimental results obtained in simulation and with real robots demonstrate the effectiveness of our approach in various indoor environments.
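    As a small sketch of the online-smoothing idea only: per-frame class probabilities (such as those a boosted classifier over range and vision features might output) are filtered along the robot's path with an HMM forward pass. The place labels, transition values, and classifier outputs below are assumptions for the example, not those of the paper.

```python
# HMM forward filtering over per-frame place-classification probabilities
# (illustrative sketch; all numbers are assumptions).
import numpy as np

PLACES = ["corridor", "room", "doorway"]

# Hypothetical transition model: place labels tend to persist between consecutive poses.
TRANSITION = np.array([[0.90, 0.05, 0.05],
                       [0.05, 0.90, 0.05],
                       [0.15, 0.15, 0.70]])

def hmm_forward(per_frame_probs):
    """Filter a sequence of per-frame class probability vectors and return labels."""
    belief = np.full(len(PLACES), 1.0 / len(PLACES))
    filtered = []
    for obs in per_frame_probs:
        belief = (belief @ TRANSITION) * obs    # predict with the transition model, weight by the classifier
        belief /= belief.sum()
        filtered.append(PLACES[int(np.argmax(belief))])
    return filtered

# A noisy classifier sequence: the third frame in the corridor is misclassified as "room".
frames = np.array([[0.8, 0.1, 0.1], [0.7, 0.2, 0.1], [0.2, 0.7, 0.1], [0.8, 0.1, 0.1]])
print(hmm_forward(frames))   # the HMM keeps the outlier frame labelled "corridor"
```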

    Using a mobile robot to test a theory of cognitive mapping

    This paper describes using a mobile robot, equipped with sonar sensors and an odometer, to test navigation through the use of a cognitive map. The robot explores an office environment, computes a cognitive map, which is a network of ASRs [36, 35], and attempts to find its way home. Ten trials were conducted, and the robot found its way home each time. From four random positions in two trials, the robot estimated the home position relative to its current position reasonably accurately. Our robot does not solve the simultaneous localization and mapping problem, and the map it computes is fuzzy and inaccurate, with many details missing. On each homeward journey, it computes a new cognitive map of the same part of the environment, as seen from the perspective of the homeward journey. We show how the robot uses distance information from both maps to find its way home. © 2007 Springer-Verlag Berlin Heidelberg.
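    For context on what "home position relative to its current position" means geometrically, the small helper below expresses a home point, known in some map frame, in the robot's current frame. This is plain coordinate arithmetic given for illustration, not the ASR-based method of the paper.

```python
# Express a home point (map frame) in the robot's current frame; illustrative only.
import math

def home_in_robot_frame(robot_x, robot_y, robot_theta, home_x, home_y):
    """Return (forward, left) offsets to home and the straight-line distance."""
    dx, dy = home_x - robot_x, home_y - robot_y
    forward = math.cos(robot_theta) * dx + math.sin(robot_theta) * dy
    left = -math.sin(robot_theta) * dx + math.cos(robot_theta) * dy
    return forward, left, math.hypot(dx, dy)

# Robot at (4, 2) facing along +y; home at the origin.
print(home_in_robot_frame(4.0, 2.0, math.pi / 2, 0.0, 0.0))   # home lies behind and to the left, ~4.47 m away
```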