
    Positional estimation techniques for an autonomous mobile robot

    Techniques for positional estimation of a mobile robot navigating in an indoor environment are described. A comprehensive review of the positional estimation techniques studied in the literature is first presented; the techniques are divided into four types, each of which is discussed briefly. Two kinds of environments are considered for positional estimation: mountainous natural terrain and an urban, man-made environment with polyhedral buildings. In both cases, the robot is assumed to be equipped with a single visual camera that can be panned and tilted, and a 3-D description (world model) of the environment is assumed to be given. Such a description could be obtained from a stereo pair of aerial images or from the architectural plans of the buildings. Techniques for positional estimation using the camera input and the world model are presented.
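
    As a rough illustration of the general idea (estimating the camera pose from known 3-D world-model points and their observed image projections), the sketch below uses OpenCV's solvePnP. It is not the paper's method; the landmark coordinates, pixel measurements, and camera intrinsics are hypothetical.

```python
# Illustrative sketch: camera pose from 3-D world-model points and their
# 2-D image projections (generic perspective-n-point, not the paper's method).
import numpy as np
import cv2

# Hypothetical 3-D landmarks from the world model (e.g., building corners), in metres.
world_points = np.array([[0, 0, 0], [4, 0, 0], [4, 0, 3], [0, 0, 3],
                         [0, 2, 0], [4, 2, 0]], dtype=np.float64)
# Their observed pixel coordinates in the current camera image (hypothetical).
image_points = np.array([[320, 400], [520, 410], [515, 250], [325, 240],
                         [300, 430], [540, 440]], dtype=np.float64)
# Assumed pinhole intrinsics, no lens distortion.
K = np.array([[600, 0, 320], [0, 600, 240], [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(world_points, image_points, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)                # rotation: world frame -> camera frame
    camera_position = (-R.T @ tvec).ravel()   # camera centre expressed in the world frame
    print("Estimated camera position:", camera_position)
```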

    Positional Encoding by Robots with Non-Rigid Movements

    Consider a set of autonomous computational entities, called robots, that operate inside a polygonal enclosure (possibly with holes) and have to perform some collaborative tasks. The boundary of the polygon obstructs both the visibility and the mobility of a robot. Since the polygon is initially unknown to the robots, the natural approach is to first explore and construct a map of the polygon. For this, the robots need an unlimited amount of persistent memory to store the snapshots taken from different points inside the polygon. However, it has been shown by Di Luna et al. [DISC 2017] that map construction can be done even by oblivious robots by employing a positional encoding strategy, where a robot carefully positions itself inside the polygon to encode information in the binary representation of its distance from the closest polygon vertex. Of course, to execute this strategy, it is crucial for the robots to make accurate movements. In this paper, we address the question of whether this technique can be implemented even when the movements of the robots are unpredictable, in the sense that a robot can be stopped by the adversary during its movement before reaching its destination; however, there exists a constant δ > 0, unknown to the robot, such that the robot always reaches its destination if it has to move by no more than δ. This model is known in the literature as non-rigid movement. We give a partial answer to the question in the affirmative by presenting a map construction algorithm for robots with non-rigid movement but with O(1) bits of persistent memory and the ability to make circular moves.
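
    The positional-encoding idea itself is simple to state: the information a robot wants to remember is written into the fractional binary expansion of its distance from the nearest polygon vertex. The sketch below illustrates only that encoding/decoding step under idealized assumptions (exact arithmetic, rigid movement); the constructions in this paper and in Di Luna et al. are considerably more involved.

```python
# Minimal sketch of the positional-encoding idea: a bit string is stored in the
# fractional binary expansion of the robot's distance to the nearest vertex.
# Simplified illustration only (exact arithmetic, rigid movement assumed).

def encode_bits(bits: str, scale: float = 1.0) -> float:
    """Map a bit string to a distance d in (0, scale): d = scale * 0.1<bits> in binary."""
    frac = 0.5  # leading 1 marks the start of the payload
    for i, b in enumerate(bits, start=2):
        frac += int(b) / (2 ** i)
    return scale * frac

def decode_bits(distance: float, n_bits: int, scale: float = 1.0) -> str:
    """Recover the n_bits payload from the encoded distance."""
    frac = distance / scale - 0.5       # strip the leading marker bit
    out = []
    for i in range(2, n_bits + 2):
        bit = int(frac >= 1 / (2 ** i))
        out.append(str(bit))
        frac -= bit / (2 ** i)
    return "".join(out)

payload = "101101"
d = encode_bits(payload)       # the robot moves so that its vertex distance equals d
assert decode_bits(d, len(payload)) == payload
```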

    Topomap: Topological Mapping and Navigation Based on Visual SLAM Maps

    Visual robot navigation within large-scale, semi-structured environments deals with various challenges such as computationally intensive path planning algorithms or insufficient knowledge about traversable spaces. Moreover, many state-of-the-art navigation approaches only operate locally instead of gaining a more conceptual understanding of the planning objective. This limits the complexity of tasks a robot can accomplish and makes it harder to deal with the uncertainties that are present in real-time robotics applications. In this work, we present Topomap, a framework which simplifies the navigation task by providing the robot with a map that is tailored for path planning. This novel approach transforms a sparse feature-based map from a visual Simultaneous Localization And Mapping (SLAM) system into a three-dimensional topological map. This is done in two steps. First, we extract occupancy information directly from the noisy sparse point cloud. Then, we create a set of convex free-space clusters, which are the vertices of the topological map. We show that this representation improves the efficiency of global planning, and we provide a complete derivation of our algorithm. Planning experiments on real-world datasets demonstrate that we achieve performance similar to RRT* with significantly lower computation times and storage requirements. Finally, we test our algorithm on a mobile robotic platform to demonstrate its advantages.
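
    Once the convex free-space clusters are available, global planning reduces to a graph search over the topological map. The sketch below shows that reduction using NetworkX's shortest-path search on a hypothetical cluster adjacency; it is not the Topomap implementation, and the centroids, adjacency, and edge weights are made up for illustration.

```python
# Illustrative sketch: planning on a topological map whose vertices are convex
# free-space clusters (not the Topomap implementation).
import networkx as nx
import numpy as np

# Hypothetical cluster centroids derived from a sparse SLAM point cloud.
centroids = {
    "A": np.array([0.0, 0.0, 0.0]),
    "B": np.array([3.0, 0.5, 0.0]),
    "C": np.array([6.0, 0.0, 0.5]),
    "D": np.array([3.0, 4.0, 0.0]),
}
# Edges connect clusters whose convex volumes overlap (assumed known here),
# weighted by the Euclidean distance between centroids.
adjacency = [("A", "B"), ("B", "C"), ("B", "D"), ("D", "C")]

G = nx.Graph()
for u, v in adjacency:
    G.add_edge(u, v, weight=float(np.linalg.norm(centroids[u] - centroids[v])))

# Global plan: the shortest sequence of free-space clusters from start to goal.
path = nx.shortest_path(G, source="A", target="C", weight="weight")
print("Cluster sequence:", path)   # expected: ['A', 'B', 'C']
```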

    View-Invariant Regions and Mobile Robot Self-Localization

    This paper addresses the problem of mobile robot self-localization given a polygonal map and a set of observed edge segments. The standard approach to this problem uses interpretation tree search with pruning heuristics to match observed edges to map edges. Our approach introduces a preprocessing step in which the map is decomposed into 'view-invariant regions' (VIRs). The VIR decomposition captures information about map edge visibility, and can be used for a variety of robot navigation tasks. Basing self-localization search on VIRs greatly reduces the branching factor of the search tree and thereby simplifies the search task. In this paper, we define the VIR decomposition and give algorithms for its computation and for self-localization search. We present results of simulations comparing standard and VIR-based search, and discuss the application of the VIR decomposition to other problems in robot navigation.
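
    To make the role of the VIRs concrete, the sketch below shows a bare-bones interpretation-tree search that assigns observed edge segments to the map edges visible from one candidate region; restricting the candidate set to a region's visible edges is what shrinks the branching factor. The data structures and the consistency test are placeholders, not the paper's algorithm.

```python
# Bare-bones interpretation-tree search over the map edges visible from one
# view-invariant region (illustrative; the consistency test is a placeholder).
from typing import Dict, List, Optional, Tuple

def consistent(pairing: List[Tuple[str, str]], obs: Dict, map_edges: Dict) -> bool:
    """Placeholder geometric consistency test; a real test would compare pairwise
    angles and distances between observed edges with those between map edges."""
    return True  # assumption for illustration only

def match_edges(observed: List[str], visible_map_edges: List[str],
                obs: Dict, map_edges: Dict,
                pairing: Optional[List[Tuple[str, str]]] = None):
    """Depth-first search assigning each observed edge to one visible map edge."""
    pairing = pairing or []
    if len(pairing) == len(observed):
        return pairing                      # every observation explained
    o = observed[len(pairing)]
    for m in visible_map_edges:             # branching factor = number of visible edges
        candidate = pairing + [(o, m)]
        if consistent(candidate, obs, map_edges):
            result = match_edges(observed, visible_map_edges, obs, map_edges, candidate)
            if result is not None:
                return result
    return None                             # backtrack

# With a small per-region visible edge set, each level of the tree branches over
# only those edges rather than over all map edges.
print(match_edges(["o1", "o2"], ["m1", "m2", "m3"], {}, {}))
```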

    Sensory processing and world modeling for an active ranging device

    In this project, we studied world modeling and sensory processing for laser range data. World model data representations and operations were defined. Sensory processing algorithms for point processing and linear feature detection were designed and implemented. The interface between world modeling and sensory processing at the Servo and Primitive levels was investigated and implemented. At the Primitive level, linear feature detectors for edges were also implemented, analyzed, and compared. Existing world model representations are surveyed. Also presented is the design and implementation of the Y-frame model, a hierarchical world model. The interfaces between the world model module and the sensory processing module are discussed, as are the linear feature detectors that were designed and implemented.
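
    As an illustration of linear feature detection on range data, the sketch below implements the split phase of the classic split-and-merge extractor with a total-least-squares fit per segment; the detectors implemented in the project, and their handling of sensor noise, are not necessarily the same.

```python
# Illustrative sketch: split phase of the classic split-and-merge line extractor
# for 2-D laser range points (not the project's own detectors).
import numpy as np

def fit_line(points: np.ndarray):
    """Total-least-squares fit; returns (centroid, unit direction) of the segment."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[0]

def split_scan(points: np.ndarray, tol: float = 0.05):
    """Recursively split the scan at the point farthest from the endpoint chord."""
    p0, p1 = points[0], points[-1]
    chord = p1 - p0
    normal = np.array([-chord[1], chord[0]]) / (np.linalg.norm(chord) + 1e-12)
    dist = np.abs((points - p0) @ normal)          # perpendicular distances to the chord
    i = int(np.argmax(dist))
    if dist[i] < tol or len(points) < 3:
        return [fit_line(points)]                  # one linear feature
    return split_scan(points[: i + 1], tol) + split_scan(points[i:], tol)

# Hypothetical scan: two walls meeting in a corner.
wall1 = np.column_stack([np.linspace(0, 2, 20), np.zeros(20)])
wall2 = np.column_stack([np.full(20, 2.0), np.linspace(0, 1.5, 20)])
segments = split_scan(np.vstack([wall1, wall2]))
print(f"{len(segments)} linear features extracted")   # expected: 2
```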

    Simultaneous localization and map-building using active vision

    An active approach to sensing can provide focused measurement capability over a wide field of view, which allows correctly formulated Simultaneous Localization and Map-Building (SLAM) to be implemented with vision, permitting repeatable, long-term localization using only naturally occurring, automatically detected features. In this paper, we present the first example of a general system for autonomous localization using active vision, enabled here by a high-performance stereo head, addressing such issues as uncertainty-based measurement selection, automatic map maintenance, and goal-directed steering. We present varied real-time experiments in a complex environment.
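
    Uncertainty-based measurement selection can be illustrated in a generic EKF setting: among the currently visible features, measure the one whose predicted observation is most uncertain, since that measurement is expected to be most informative. The sketch below scores candidates by the determinant of the innovation covariance; the state layout, Jacobians, and noise values are hypothetical, and the paper's actual criterion and filter details may differ.

```python
# Illustrative sketch of uncertainty-based measurement selection in an EKF:
# observe the candidate feature whose predicted measurement is most uncertain.
import numpy as np

def innovation_covariance(P: np.ndarray, H: np.ndarray, R: np.ndarray) -> np.ndarray:
    """S = H P H^T + R for one candidate feature's measurement model."""
    return H @ P @ H.T + R

def select_feature(P: np.ndarray, jacobians: list, R: np.ndarray) -> int:
    """Return the index of the candidate with the largest det(S)."""
    scores = [np.linalg.det(innovation_covariance(P, H, R)) for H in jacobians]
    return int(np.argmax(scores))

# Hypothetical state covariance (robot pose plus one mapped feature), two candidate
# measurement Jacobians, and a fixed 2-D measurement noise covariance.
P = np.diag([0.02, 0.02, 0.01, 0.5, 0.5])
H1 = np.array([[1.0, 0, 0, -1.0, 0], [0, 1.0, 0, 0, -1.0]])
H2 = np.array([[1.0, 0, 0.2, 0, 0], [0, 1.0, 0.1, 0, 0]])
R = 0.01 * np.eye(2)
print("Measure candidate", select_feature(P, [H1, H2], R))   # expected: 0
```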

    A Robot Navigation Algorithm for Moving Obstacles

    In recent years, considerable progress has been made towards the development of intelligent autonomous mobile robots which can perform a wide variety of tasks. Although the capabilities of these robots vary significantly, each must have the ability to navigate within its environment from a starting location to a goal without colliding with obstacles in the process, a capability commonly referred to as "robot navigation". Numerous algorithms for robot navigation have been developed which allow the robot to operate in static environments. However, little work has been done on algorithms which allow the robot to navigate in a dynamic environment. This thesis presents a mathematically based navigation algorithm for a robot operating in a continuous-time environment inhabited by moving obstacles whose trajectories and velocities can be detected. In this methodology, the obstacles are represented as sheared cylinders that depict the areas swept out by the obstacles' disks of influence over time. The robot is represented by the cone of positions it can reach by traveling at a constant speed in any direction. The methodology utilizes a three-dimensional navigation planning approach in which the search points, or tangent points, are the points in time at which the robot tangentially meets the obstacles. These tangent points are determined by calculating the intersection curves between the robot and the obstacles, and then using the first derivative of the intersection curves to make the tangent selections. Paths are created as sequences of these tangent points leading from the robot's starting location to the goal, and are searched using the A* strategy with the Euclidean distance from the tangent point to the goal as the heuristic. The main contribution of this thesis is the development of a methodology which produces optimal tangent paths to the goal in a dynamic robot environment. This feature is significant, since no other algorithm found in the literature survey conducted for this thesis has been shown to produce paths with optimal properties.
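
    The path search itself is a standard A* over the tangent-point graph with the Euclidean distance to the goal as an admissible heuristic. The sketch below shows that search on a tiny hypothetical graph; generating the tangent points from the cone/cylinder intersection curves is the thesis's contribution and is not reproduced here.

```python
# Illustrative sketch: A* over a graph of tangent points with a Euclidean
# distance-to-goal heuristic (generic A*, hypothetical graph).
import heapq
import math

def astar(nodes, edges, start, goal):
    """nodes: name -> (x, y); edges: name -> [(neighbor, cost), ...]."""
    def h(n):  # admissible heuristic: straight-line distance to the goal
        return math.dist(nodes[n], nodes[goal])
    open_set = [(h(start), 0.0, start, [start])]
    best = {start: 0.0}
    while open_set:
        f, g, n, path = heapq.heappop(open_set)
        if n == goal:
            return path, g
        for m, cost in edges.get(n, []):
            g2 = g + cost
            if g2 < best.get(m, math.inf):
                best[m] = g2
                heapq.heappush(open_set, (g2 + h(m), g2, m, path + [m]))
    return None, math.inf

# Hypothetical tangent points (projected to the plane) and feasible transitions.
nodes = {"start": (0, 0), "t1": (2, 1), "t2": (2, -1), "goal": (5, 0)}
edges = {"start": [("t1", 2.3), ("t2", 2.3)], "t1": [("goal", 3.2)], "t2": [("goal", 3.3)]}
print(astar(nodes, edges, "start", "goal"))   # expected path: start -> t1 -> goal
```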

    Exploration and Modeling of an Unknown Polygonal Environment Based on Uncertain Range Data

    We consider the problem of exploring and mapping unknown indoor environments using a laser range finder. We assume a setup in which the localization problem is solved and the sensor uncertainty models are known. Most exploration algorithms are based on detecting the boundary between explored and unexplored regions; in practice, however, they are not efficient due to uncertainties in measurement, localization, and map building. An exploration and mapping algorithm is proposed that extends Ekman's exploration algorithm by removing rigid constraints on the range sensor and robot localization. The proposed algorithm includes the line extraction algorithm developed by Pfister, which incorporates noise models of the range sensor and the uncertainty of the robot's pose. A line representation of the range data is used to create a polygon that represents the explored region from each measurement pose. Polygon edges that do not correspond to real environmental features are candidates for a new measurement pose. A general polygon clipping algorithm is used to obtain the total explored region as the union of the polygons from different measurement poses. The proposed algorithm is tested and compared to Ekman's algorithm in simulations and experimentally on a Pioneer 3DX mobile robot equipped with a SICK LMS-200 laser range finder.
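
    The accumulation of the explored region can be illustrated with any general polygon clipping library: the per-pose visibility polygons are unioned, and boundary edges that do not coincide with real environmental features become candidates for the next measurement pose. The sketch below uses Shapely for the union with made-up rectangular polygons; the paper's clipping algorithm and frontier criteria differ.

```python
# Illustrative sketch: total explored region as the union of per-pose visibility
# polygons (Shapely used here as a stand-in general polygon clipping library).
from shapely.geometry import Polygon
from shapely.ops import unary_union

# Hypothetical visibility polygons computed from line-fitted scans at two poses.
poly_pose1 = Polygon([(0, 0), (4, 0), (4, 3), (0, 3)])
poly_pose2 = Polygon([(3, 1), (7, 1), (7, 4), (3, 4)])

explored = unary_union([poly_pose1, poly_pose2])   # total explored region
print("Explored area:", explored.area)             # 12 + 12 - 2 (overlap) = 22

# Boundary edges of `explored` that do not coincide with mapped wall segments
# would be the candidates for the next measurement pose (not computed here).
```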