38 research outputs found

    Building, registrating and fusing noisy visual maps

    This paper deals with the problem of building three-dimensional descriptions (which we call visual maps) of the environment of a mobile robot using passive vision. These maps are local (i.e., attached to specific frames of reference). Since noise is present, they incorporate information both about the geometry of the environment and about the uncertainty of the parameters defining that geometry. This geometric uncertainty is directly related to its source (i.e., sensor uncertainty). We show how visual maps corresponding to different positions of the robot can be registered to compute a better estimate of its displacement between the various viewpoints, assuming an otherwise static environment. We use these estimates to fuse the different visual maps and locally reduce the uncertainty of the geometric primitives that have found correspondents in other maps. We propose to perform these three tasks (building, registrating, and fusing visual maps) within the general framework of extended Kalman filtering, which allows efficient combination of measurements in the presence of noise.
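The fusion step the abstract describes can be illustrated with a minimal sketch: when two maps provide two noisy estimates of the same geometric primitive, a Kalman update with an identity measurement model combines them into a single estimate with reduced covariance. This is a generic sketch of that idea, not the paper's actual formulation (which handles full visual maps and registration).

```python
import numpy as np

def fuse_points(x_a, P_a, x_b, P_b):
    """Fuse two noisy estimates of the same 3D point: a Kalman update
    treating (x_b, P_b) as a direct observation of the state (x_a, P_a)."""
    S = P_a + P_b                    # innovation covariance
    K = P_a @ np.linalg.inv(S)       # Kalman gain
    x = x_a + K @ (x_b - x_a)        # updated mean
    P = (np.eye(len(x_a)) - K) @ P_a # updated (smaller) covariance
    return x, P

# Two estimates of the same landmark seen from different viewpoints.
x1, P1 = np.array([1.0, 2.0, 3.0]), np.eye(3) * 0.04
x2, P2 = np.array([1.1, 2.1, 2.9]), np.eye(3) * 0.04
x, P = fuse_points(x1, P1, x2, P2)
# With equal covariances, the fused mean is the average and the
# uncertainty halves.
```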

    Performance improvement in VSLAM using stabilized feature points

    Simultaneous localization and mapping (SLAM) is the main prerequisite for the autonomy of a mobile robot. In this paper, we present a novel method that enhances the consistency of the map using stabilized corner features. The proposed method integrates template-matching-based video stabilization with the Harris corner detector. Extracting Harris corner features from stabilized video consistently increases the accuracy of the localization. Data from a video camera and odometry are fused in an extended Kalman filter (EKF) to determine the pose of the robot and build the map of the environment. Simulation results validate the performance improvement obtained by the proposed technique.
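The Harris detector the abstract relies on scores each pixel by the corner response R = det(M) - k·trace(M)², where M is the local structure tensor. A compact NumPy sketch of that response (with a simple 3x3 window and finite-difference gradients, for brevity) is shown below; it illustrates the detector family used in the paper, not the paper's own implementation or its stabilization stage.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Per-pixel Harris corner response R = det(M) - k * trace(M)^2,
    where M is the structure tensor summed over a 3x3 window."""
    Iy, Ix = np.gradient(img.astype(float))

    def box3(a):
        # Sum over a 3x3 neighbourhood via zero-padding and shifting.
        p = np.pad(a, 1)
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# A step "corner": bright quadrant in the lower-right of a dark image.
img = np.zeros((10, 10))
img[5:, 5:] = 1.0
R = harris_response(img)
# R is positive at the corner, negative along edges, zero on flat areas.
```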

    SLAM-based 3D outdoor reconstructions from lidar data

    The use of depth (RGBD) cameras to reconstruct large outdoor environments is not feasible due to lighting conditions and limited depth range; LIDAR sensors can be used instead. Most state-of-the-art SLAM methods are devoted to indoor environments and depth (RGBD) cameras. We have adapted two SLAM systems to work with LIDAR data and compared the systems on both LIDAR and RGBD data through quantitative evaluations. Results show that the best method for LIDAR data is RTAB-Map, by a clear margin. Additionally, RTAB-Map has been used to create 3D reconstructions with and without photometry from a visible color camera. This demonstrates the potential of LIDAR sensors for the reconstruction of outdoor environments for immersion or audiovisual production applications.
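Quantitative SLAM evaluations of the kind mentioned above commonly report the absolute trajectory error (ATE) as an RMSE over time-associated poses. The abstract does not state which metric was used, so the following is only an illustrative sketch of one standard choice; it assumes the trajectories are already time-associated and expressed in a common frame (no Umeyama alignment).

```python
import numpy as np

def ate_rmse(gt, est):
    """RMSE of per-pose position error between ground-truth and
    estimated trajectories (lists of 3D positions, same length)."""
    gt, est = np.asarray(gt, float), np.asarray(est, float)
    err = np.linalg.norm(gt - est, axis=1)   # per-pose Euclidean error
    return float(np.sqrt(np.mean(err ** 2)))

# Toy trajectories: estimate drifts slightly off a straight line.
gt  = [[0, 0, 0], [1, 0, 0], [2, 0, 0]]
est = [[0, 0, 0], [1, 0.3, 0], [2, 0, 0.4]]
rmse = ate_rmse(gt, est)
```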

    Integrating Multiple Uncertain Views of a Static Scene Acquired by an Agile Camera System

    This paper addresses the problem of merging multiple views of a static scene into a common coordinate frame, explicitly considering uncertainty. It assumes that a static world is observed by an agile vision system, whose movements are known with a limited precision, and whose observations are inaccurate and incomplete. It concentrates on acquiring uncertain three-dimensional information from multiple views, rather than on modeling or representing the information at higher levels of abstraction. Two particular problems receive attention: identifying the transformation between two viewing positions, and understanding how errors and uncertainties propagate as a result of applying the transformation. The first is solved by identifying the forward kinematics of the agile camera system. The second is solved by first treating a measurement of camera position and orientation as a uniformly distributed random vector whose component variances are related to the resolution of the encoding potentiometers, then treating an object position measurement as a normally distributed random vector whose component variances are experimentally derived, and finally determining the uncertainty of the merged points as functions of these variances.
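The uncertainty-propagation question the abstract raises has a standard first-order answer: pushing a point x with covariance Sx through a rigid transform y = R x + t yields Sy = R Sx Rᵀ, plus the covariance of t if the translation itself is uncertain. The sketch below shows this generic rule (rotation assumed exactly known, which the paper does not assume); it is not the paper's full treatment of encoder-derived uniform errors.

```python
import numpy as np

def transform_with_uncertainty(x, Sx, R, t, St):
    """First-order propagation of point uncertainty through a rigid
    transform y = R x + t with an uncertain translation t:
    Sy = R Sx R^T + St."""
    y = R @ x + t
    Sy = R @ Sx @ R.T + St
    return y, Sy

# 90-degree rotation about z applied to an anisotropic point covariance.
R  = np.array([[0., -1, 0], [1, 0, 0], [0, 0, 1]])
x  = np.array([1., 0, 0])
Sx = np.diag([0.09, 0.01, 0.01])   # uncertain mostly along x
St = np.eye(3) * 0.005             # small translation uncertainty
y, Sy = transform_with_uncertainty(x, Sx, R, t=np.zeros(3), St=St)
# The large variance rotates from the x-axis onto the y-axis.
```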

    Real-Time Vision-Based Robot Localization

    In this article we describe an algorithm for robot localization using visual landmarks. This algorithm determines both the correspondence between observed landmarks (in this case vertical edges in the environment) and a pre-loaded map, and the location of the robot from those correspondences. The primary advantages of this algorithm are its use of a single geometric tolerance to describe observation error, its ability to recognize ambiguous sets of correspondences, its ability to compute bounds on the error in localization, and its fast performance. The current version of the algorithm has been implemented and tested on a mobile robot system. In several hundred trials the algorithm has never failed, and it computes locations accurate to within a centimeter in less than half a second.
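The single-tolerance correspondence idea can be illustrated with a toy sketch: each observed value (e.g. the bearing to a vertical edge) is matched against map entries within one geometric tolerance, and a landmark matching more than one entry is flagged as ambiguous rather than forced into a single correspondence. This is only an illustration of the principle, not the paper's algorithm.

```python
def match_landmarks(observed, mapped, tol):
    """Tolerance-based data association: for each observed value, list
    the indices of map entries within `tol`. An empty list means the
    observation is unexplained; more than one entry flags ambiguity."""
    matches = {}
    for i, obs in enumerate(observed):
        matches[i] = [j for j, m in enumerate(mapped) if abs(obs - m) <= tol]
    return matches

# Observation 0 matches map entry 0 uniquely; observation 1 is
# ambiguous between entries 1 and 2.
matches = match_landmarks([0.10, 1.00], [0.12, 0.96, 1.03], tol=0.05)
```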

    Cooperative simultaneous localization and mapping framework

    This research work contributes to the development of a framework for cooperative simultaneous localization and mapping with multiple heterogeneous mobile robots. The work contributes in two respects. First, it provides a mathematical framework for cooperative localization and map building based on geometric features. Second, it proposes a software framework for controlling, configuring, and managing a team of heterogeneous mobile robots. Because mapping and pose estimation are closely related, two novel sensor data fusion techniques are also presented. In addition, various state-of-the-art localization and mapping techniques and mobile robot software frameworks are discussed to give an overview of current developments in this research area. The mathematical cooperative SLAM formulation probabilistically estimates the robots' states and the environment features using a Kalman filter. The software framework is an effort toward the ongoing standardization of cooperative mobile robot systems. To improve the efficiency of a cooperative mobile robot system, the proposed software framework addresses issues such as differing communication protocol structures, differing sensor suites, the organization of sensor data from different robots, and the monitoring and control of all robots from a single interface. The present work applies to a number of applications in domains where no a priori map of the environment is available and global positioning devices cannot provide an accurate position for the mobile robot; the robot must therefore build a map of its environment and use that same map to determine its position and orientation relative to the environment.
Exemplary application areas for the proposed SLAM technique include indoor environments such as warehouse management and factory assembly lines, mapping abandoned tunnels, disaster-struck environments for which no maps exist, undersea pipeline inspection, ocean surveying, military applications, planetary exploration, and many others.
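In joint Kalman-filter formulations of cooperative SLAM, the poses of all robots and the shared landmark map are typically stacked into one state vector, so that a single filter tracks the cross-correlations between robots and landmarks. The abstract does not spell out its state layout, so the following is only a hypothetical sketch of that common stacking convention.

```python
import numpy as np

def joint_state(robot_poses, landmarks):
    """Stack several robot poses (x, y, theta) and the shared landmark
    map (x, y) into one cooperative-SLAM state vector:
    x = [pose_1, ..., pose_n, l_1, ..., l_m]."""
    return np.concatenate([np.ravel(p) for p in robot_poses] +
                          [np.ravel(l) for l in landmarks])

# Two robots and two landmarks: 2*3 pose entries + 2*2 landmark entries.
x = joint_state([[0, 0, 0], [2, 1, 1.57]], [[4, 4], [5, 1]])
# The full EKF covariance would then be a len(x) x len(x) matrix.
```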