
    An adaptive appearance-based map for long-term topological localization of mobile robots

    This work considers a mobile service robot that uses an appearance-based representation of its workplace as a map, where the current view and the map are used to estimate the current position in the environment. Because the appearance of real-world environments such as houses and offices keeps changing, the internal representation may become out of date after some time. To solve this problem, the robot needs to adapt its internal representation continually to the changes in the environment. This paper presents a method for creating an adaptive map for long-term appearance-based localization of a mobile robot using long-term and short-term memory concepts, with omni-directional vision as the external sensor.
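    A minimal sketch of how the long-term/short-term memory idea could drive such map adaptation is given below. The class name, thresholds, cosine-similarity matching and promotion/forgetting rules are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class AdaptiveAppearanceMap:
    """Sketch of an adaptive appearance map with short-term (STM) and
    long-term (LTM) memory stores for reference views (assumed design)."""

    def __init__(self, promote_after=3, forget_after=10, match_thresh=0.8):
        self.stm = []                        # recently seen views: {"desc", "hits", "age"}
        self.ltm = []                        # stable views recalled repeatedly
        self.promote_after = promote_after   # recalls needed for STM -> LTM promotion
        self.forget_after = forget_after     # age at which unused STM views are dropped
        self.match_thresh = match_thresh     # cosine-similarity recall threshold (assumed)

    @staticmethod
    def similarity(a, b):
        # Cosine similarity between two global appearance descriptors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def update(self, descriptor):
        """Reinforce a recalled view or store the current descriptor as a new one."""
        # Age all short-term views and forget the ones that were never re-observed.
        for v in self.stm:
            v["age"] += 1
        self.stm = [v for v in self.stm if v["age"] <= self.forget_after]

        # Recall: find the most similar stored view in either memory store.
        best, best_sim, best_store = None, -1.0, None
        for store in (self.ltm, self.stm):
            for v in store:
                s = self.similarity(v["desc"], descriptor)
                if s > best_sim:
                    best, best_sim, best_store = v, s, store

        if best is not None and best_sim >= self.match_thresh:
            best["hits"] += 1
            best["age"] = 0
            # Promote frequently recalled short-term views to long-term memory.
            if best_store is self.stm and best["hits"] >= self.promote_after:
                self.stm = [v for v in self.stm if v is not best]
                self.ltm.append(best)
            return best

        # Unknown appearance: remember it as a new short-term view.
        new_view = {"desc": descriptor, "hits": 1, "age": 0}
        self.stm.append(new_view)
        return new_view
```

    Under this kind of scheme, views that reappear across visits migrate to long-term memory while transient appearances fade, which is the behaviour the abstract attributes to the memory-based map.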

    Map Building and Monte Carlo Localization Using Global Appearance of Omnidirectional Images

    In this paper we deal with the problem of map building and localization of a mobile robot in an environment, using the information provided by an omnidirectional vision sensor mounted on the robot. Our main objective is to study the feasibility of techniques based on the global appearance of a set of omnidirectional images captured by this vision sensor. First, we study how to describe the visual information globally so that it correctly represents locations and the geometrical relationships between them. Then, we integrate this information using an approach based on a spring-mass-damper model to create a topological map of the environment. Once the map is built, we propose a Monte Carlo localization approach to estimate the most probable pose of the vision system and its trajectory within the map. We compare the alternatives in terms of computational cost and localization error. The experimental results we present have been obtained with real indoor omnidirectional images.
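    One predict/update/resample step of such a Monte Carlo (particle filter) localization over a topological map could look roughly like the sketch below; the transition model, the Gaussian appearance likelihood and all names are assumptions made for illustration.

```python
import numpy as np

def mcl_step(graph, descriptors, observation, particles, rng, stay_prob=0.6):
    """One particle-filter step over a topological map (illustrative sketch).

    graph       -- dict: node -> list of neighbouring nodes
    descriptors -- dict: node -> stored global appearance descriptor (1-D array)
    observation -- descriptor computed from the current omnidirectional image
    particles   -- list of node ids representing the current pose hypotheses
    """
    # Prediction: each particle either stays at its node or hops to a neighbour.
    predicted = [n if (rng.random() < stay_prob or not graph[n]) else rng.choice(graph[n])
                 for n in particles]

    # Update: weight particles by how well the node's stored appearance matches
    # the current observation (Gaussian kernel on descriptor distance, assumed).
    weights = np.array([np.exp(-np.linalg.norm(descriptors[n] - observation) ** 2)
                        for n in predicted]) + 1e-12
    weights /= weights.sum()

    # Resampling: draw the next particle set in proportion to the weights.
    idx = rng.choice(len(predicted), size=len(predicted), p=weights)
    resampled = [predicted[i] for i in idx]

    # The most frequent node among the particles is the pose estimate.
    estimate = max(set(resampled), key=resampled.count)
    return resampled, estimate

# Usage: particles, estimate = mcl_step(graph, descriptors, obs, particles,
#                                       np.random.default_rng())
```

    Repeated along the trajectory, the particle set concentrates on the sequence of map nodes that best explains the observed global appearances.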

    Localization of a mobile autonomous robot based on image analysis

    This paper introduces an innovative method to solve the problem of self-localization of a mobile autonomous robot; in particular, a case study is carried out for robot localization in a RoboCup field environment. The approach described here is completely different from other methods currently used in RoboCup, since it is based only on images and does not involve techniques such as Monte Carlo localization or other probabilistic approaches. The method is simple, acceptably efficient for the purpose for which it was created, and requires relatively little computation time. Fundação para a Ciência e a Tecnologia (FCT) - project POSI/ROBO/43892/200

    Improving Omnidirectional Camera-Based Robot Localization Through Self-Supervised Learning

    Autonomous agents in any environment require accurate and reliable position and motion estimation to complete their tasks. Many different sensor modalities have been utilized for this purpose, such as GPS, ultra-wideband, visual simultaneous localization and mapping (SLAM), and light detection and ranging (LiDAR) SLAM. Many traditional positioning systems do not take advantage of recent advances in machine learning. In this work, an omnidirectional camera position estimation system relying primarily on a learned model is presented. The positioning system benefits from the wide field of view provided by an omnidirectional camera. Recent developments in self-supervised learning for generating useful features from unlabeled data are also assessed. A novel radial patch pretext task for omnidirectional images is presented in this work. The resulting implementation will be a robot localization and tracking algorithm that can be adapted to a variety of environments such as warehouses and college campuses. Further experiments with additional types of sensors, including 3D LiDAR, 60 GHz wireless, and ultra-wideband localization systems utilizing machine learning, are also explored. A fused learned localization model utilizing multiple sensor modalities is evaluated in comparison to individual sensor models.
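    The radial patch pretext task is only named in the abstract; the sketch below shows one plausible way such patches could be sampled from an omnidirectional image and labelled for self-supervised training. The geometry, patch counts and labelling scheme are assumptions, not the work's formulation.

```python
import numpy as np

def extract_radial_patches(image, num_angles=8, num_rings=3, patch_size=32):
    """Sample square patches along radial directions of an omnidirectional image.

    Assumes the optical centre coincides with the image centre. Returns the
    patches and an integer pretext label per patch (the radial direction it was
    taken from), which a network could be trained to predict from pixels alone.
    """
    h, w = image.shape[:2]
    cy, cx = h / 2.0, w / 2.0
    max_r = min(cy, cx) - patch_size          # keep every patch fully inside the image
    patches, labels = [], []
    for a in range(num_angles):
        theta = 2.0 * np.pi * a / num_angles
        for r in range(1, num_rings + 1):
            radius = max_r * r / num_rings
            py = int(cy + radius * np.sin(theta))
            px = int(cx + radius * np.cos(theta))
            half = patch_size // 2
            patches.append(image[py - half:py + half, px - half:px + half])
            labels.append(a)                  # pretext label: radial direction index
    return np.stack(patches), np.array(labels)
```

    One rationale for a radial pretext task of this kind is that predicting where on the image circle a patch came from exposes the rotational structure of omnidirectional views to the encoder, features a downstream localization model could reuse.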

    Object Position Estimation based on Dual Sight Perspective Configuration

    Development of a coordination system requires a dataset, because the dataset can provide information about the robots' surroundings that the coordination system uses to make decisions. Therefore, the capability to process and display the positions of objects around the robots is necessary. This paper provides a method to predict an object's position. The method is based on the Indoor Positioning System (IPS) idea and on object position estimation with a multi-camera system (i.e., stereo vision). It needs two inputs to estimate the ball position: the input image and the robot's relative position. The approach adopts simple and easy calculation techniques: trigonometry, angle rotations, and a linear function. The method was tested on a ROS and Gazebo simulation platform. The experimental results show that this configuration can estimate the object's position with a mean squared error of 0.383 meters. In addition, the R-squared value of the distance calibration is 0.9932, which implies that the system works very well at estimating an object's position.
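    A sketch of the kind of trigonometric estimate described above is shown below: a single camera bearing plus a linear pixel-to-distance calibration, rotated into the field frame using the robot's pose. The coefficients and the function signature are illustrative assumptions, and the fusion of the two viewpoints is not shown.

```python
import math

def estimate_object_position(robot_x, robot_y, robot_theta,
                             bearing, pixel_height, a=0.01, b=0.5):
    """Estimate an object's field position from one robot's observation (sketch).

    robot_x, robot_y, robot_theta -- robot pose in the field frame (m, m, rad)
    bearing      -- angle to the object in the robot/camera frame (rad)
    pixel_height -- image measurement fed to the linear distance calibration
    a, b         -- assumed calibration coefficients: distance = a * pixel_height + b
    """
    distance = a * pixel_height + b            # linear function from calibration
    angle = robot_theta + bearing              # rotate the bearing into the field frame
    obj_x = robot_x + distance * math.cos(angle)
    obj_y = robot_y + distance * math.sin(angle)
    return obj_x, obj_y
```

    With two viewpoints (the dual-sight configuration), the two single-view estimates could then be combined, for example by averaging or by intersecting the two bearing rays.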

    Long-term experiments with an adaptive spherical view representation for navigation in changing environments

    Real-world environments such as houses and offices change over time, meaning that a mobile robot’s map will become out of date. In this work, we introduce a method to update the reference views in a hybrid metric-topological map so that a mobile robot can continue to localize itself in a changing environment. The updating mechanism, based on the multi-store model of human memory, incorporates a spherical metric representation of the observed visual features for each node in the map, which enables the robot to estimate its heading and navigate using multi-view geometry, as well as representing the local 3D geometry of the environment. A series of experiments demonstrate the persistence performance of the proposed system in real changing environments, including an analysis of its long-term stability.
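    As an illustration of how a spherical, bearing-based view could support heading estimation, the sketch below correlates azimuth histograms of stored and observed features; this is an assumed usage, not the paper's multi-view-geometry method.

```python
import numpy as np

def estimate_heading_offset(stored_bearings, observed_bearings, resolution_deg=1.0):
    """Estimate the heading change between two feature bearing sets (sketch).

    Both inputs are 1-D arrays of feature azimuths in radians on the view sphere.
    The offset is found by brute-force circular correlation of azimuth histograms.
    """
    bins = int(round(360.0 / resolution_deg))
    hist_ref, _ = np.histogram(np.mod(stored_bearings, 2 * np.pi),
                               bins=bins, range=(0.0, 2 * np.pi))
    hist_obs, _ = np.histogram(np.mod(observed_bearings, 2 * np.pi),
                               bins=bins, range=(0.0, 2 * np.pi))
    # Circular cross-correlation: the shift with the highest score is the offset.
    scores = [np.dot(hist_ref, np.roll(hist_obs, s)) for s in range(bins)]
    return int(np.argmax(scores)) * 2 * np.pi / bins
```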

    Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping

    This work investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map, which shows open ground and indicates the probability of cells being occupied by the walls of buildings, is obtained by a mobile robot equipped with an omnidirectional camera, GPS and a laser range finder. This semantic information is used for local and global segmentation of an aerial image. The result is a map in which the semantic information has been extended beyond the range of the robot's sensors and which predicts where the mobile robot can find buildings and potentially drivable ground.
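    As a rough illustration of extending ground-level semantics across an aerial image, the sketch below seeds a nearest-class-mean colour classifier with the labels inside the robot's sensor range; this stand-in is an assumption and not the segmentation method used in the work.

```python
import numpy as np

def extend_semantic_map(aerial_rgb, seed_mask, labels):
    """Extend ground-level semantics (0 = ground, 1 = building) over an aerial image.

    aerial_rgb -- (H, W, 3) aerial image registered to the robot's map
    seed_mask  -- (H, W) bool array: cells observed by the robot's sensors
    labels     -- (H, W) int array of ground-level labels, valid where seed_mask is True
    Assumes both classes occur somewhere inside the seeded region.
    """
    aerial = aerial_rgb.astype(float)
    classes = [0, 1]
    # Mean colour of each class inside the robot-observed region.
    means = np.stack([aerial[seed_mask & (labels == c)].mean(axis=0) for c in classes])
    # Assign every aerial pixel to the class with the closest mean colour.
    dists = np.linalg.norm(aerial[..., None, :] - means, axis=-1)
    return np.argmin(dists, axis=-1)
```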