36 research outputs found

    Real-Time Vision-Based Robot Localization

    In this article we describe an algorithm for robot localization using visual landmarks. This algorithm determines both the correspondence between observed landmarks (in this case vertical edges in the environment) and a pre-loaded map, and the location of the robot from those correspondences. The primary advantages of this algorithm are its use of a single geometric tolerance to describe observation error, its ability to recognize ambiguous sets of correspondences, its ability to compute bounds on the localization error, and its fast performance. The current version of the algorithm has been implemented and tested on a mobile robot system. In several hundred trials the algorithm has never failed, and it computes the location to within a centimeter in less than half a second.
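
    As a rough illustration of the correspondence step described above, the following sketch matches observed vertical-edge bearings against map landmarks using a single angular tolerance, and flags ambiguous correspondence sets when an observation matches more than one landmark. The bearing-only model, names, and tolerance value are our own assumptions, not the authors' implementation.

    ```python
    import math

    def predict_bearing(pose, landmark):
        """Bearing (rad) from a robot pose (x, y, heading) to a 2-D landmark."""
        x, y, heading = pose
        lx, ly = landmark
        return math.atan2(ly - y, lx - x) - heading

    def angle_diff(a, b):
        """Smallest signed difference between two angles."""
        return math.atan2(math.sin(a - b), math.cos(a - b))

    def match_landmarks(pose, observed_bearings, map_landmarks, tol=0.05):
        """Collect, for each observed vertical-edge bearing, every map
        landmark whose predicted bearing lies within the single geometric
        tolerance `tol`; any observation with != 1 candidate is ambiguous."""
        matches = []
        for obs in observed_bearings:
            candidates = [i for i, lm in enumerate(map_landmarks)
                          if abs(angle_diff(obs, predict_bearing(pose, lm))) < tol]
            matches.append(candidates)
        ambiguous = any(len(c) != 1 for c in matches)
        return matches, ambiguous

    landmarks = [(2.0, 1.0), (3.0, -1.0), (0.0, 4.0)]
    print(match_landmarks((0.0, 0.0, 0.0), [0.46, -0.32], landmarks))
    ```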

    Loc-NeRF: Monte Carlo Localization using Neural Radiance Fields

    We present Loc-NeRF, a real-time vision-based robot localization approach that combines Monte Carlo localization and Neural Radiance Fields (NeRF). Our system uses a pre-trained NeRF model as the map of an environment and can localize itself in real-time using an RGB camera as the only exteroceptive sensor onboard the robot. While neural radiance fields have seen significant applications for visual rendering in computer vision and graphics, they have found limited use in robotics. Existing approaches for NeRF-based localization require both a good initial pose guess and significant computation, making them impractical for real-time robotics applications. By using Monte Carlo localization as a workhorse to estimate poses using a NeRF map model, Loc-NeRF is able to perform localization faster than the state of the art and without relying on an initial pose estimate. In addition to testing on synthetic data, we also run our system using real data collected by a Clearpath Jackal UGV and demonstrate for the first time the ability to perform real-time global localization with neural radiance fields. We make our code publicly available at https://github.com/MIT-SPARK/Loc-NeRF.
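
    The particle-filter loop can be sketched as follows, assuming a stand-in `render_nerf` for the pre-trained NeRF renderer; the photometric weighting, noise model, and synthetic image are simplifications of ours, not the released Loc-NeRF code.

    ```python
    import numpy as np

    def render_nerf(pose):
        """Stand-in for the pre-trained NeRF map renderer (assumed interface):
        returns a small synthetic image that varies smoothly with the pose."""
        x, y, theta = pose
        u = np.linspace(0.0, 1.0, 32)
        return np.outer(np.sin(u + x) + np.cos(u + y), np.ones(32)) + theta

    def mcl_step(particles, weights, odom_delta, camera_image, motion_noise=0.05):
        """One Monte Carlo localization step: propagate particles with noisy
        odometry, weight them by photometric agreement between the camera
        image and a NeRF rendering at each particle pose, then resample."""
        particles = particles + odom_delta + np.random.normal(
            0.0, motion_noise, particles.shape)
        for i, pose in enumerate(particles):
            err = np.mean((render_nerf(pose) - camera_image) ** 2)
            weights[i] = np.exp(-err)  # lower photometric error -> higher weight
        weights = weights / weights.sum()
        idx = np.random.choice(len(particles), size=len(particles), p=weights)
        return particles[idx], np.full(len(particles), 1.0 / len(particles))

    # Global localization: particles start spread out, with no initial pose guess.
    particles = np.random.uniform(-1.0, 1.0, (200, 3))
    weights = np.full(200, 1.0 / 200)
    image = render_nerf(np.array([0.3, -0.2, 0.1]))
    particles, weights = mcl_step(particles, weights, np.zeros(3), image)
    ```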

    Vision-Based Localization Algorithm Based on Landmark Matching, Triangulation, Reconstruction, and Comparison

    Many generic position-estimation algorithms are vulnerable to the ambiguity introduced by non-unique landmarks. Moreover, the available high-dimensional image data is not fully used when these techniques are extended to vision-based localization. This paper presents the landmark matching, triangulation, reconstruction, and comparison (LTRC) global localization algorithm, which is reasonably immune to ambiguous landmark matches. It extracts natural landmarks for the (rough) matching stage before generating a list of possible position estimates through triangulation. Reconstruction and comparison then rank the possible estimates. The LTRC algorithm has been implemented in an interpreted language on a robot equipped with a panoramic vision system. Empirical data shows a marked improvement in accuracy compared with the established random sample consensus method. LTRC is also robust against inaccurate map data.
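
    The triangulation stage can be illustrated by intersecting back-projected rays from two matched landmarks; this is our own minimal sketch of that single step (the reconstruction and comparison stages, which rank multiple candidate fixes, are omitted).

    ```python
    import math

    def triangulate(lm1, lm2, bearing1, bearing2):
        """Estimate a 2-D position from absolute bearings (position -> landmark)
        to two known landmarks by intersecting the rays cast back from each
        landmark toward the observer."""
        d1 = (math.cos(bearing1 + math.pi), math.sin(bearing1 + math.pi))
        d2 = (math.cos(bearing2 + math.pi), math.sin(bearing2 + math.pi))
        # Solve lm1 + t1*d1 = lm2 + t2*d2 for t1 via Cramer's rule.
        det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
        if abs(det) < 1e-9:
            return None  # parallel rays: this landmark pair gives no fix
        bx, by = lm2[0] - lm1[0], lm2[1] - lm1[1]
        t1 = (bx * (-d2[1]) - (-d2[0]) * by) / det
        return (lm1[0] + t1 * d1[0], lm1[1] + t1 * d1[1])

    # Observer at (1, 1): landmark (3, 1) is due east (bearing 0),
    # landmark (1, 4) is due north (bearing pi/2).
    print(triangulate((3.0, 1.0), (1.0, 4.0), 0.0, math.pi / 2))  # ~(1.0, 1.0)
    ```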

    Path Planning for a Differential-Drive Mobile Robot on Random Terrain Using the A* Algorithm Combined with an Image-Blurring Technique

    The development of techniques to automate robot motion for real-world operation has become a major research topic in mobile robotics. To reach a desired position, a mobile robot needs a navigation system that can guide it to that position. This research addresses path planning on a BMP model that illustrates the robot's working area, together with trajectory generation. Path planning is performed with the A* algorithm combined with an image-blurring technique to obtain the fastest route for the mobile robot. The image blurring is used here to enlarge obstacles, so that a safe, collision-free route is obtained. Keywords: Mobile robot, A* algorithm, Image blurring, Path planning, Trajectory generation
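
    The obstacle-enlargement idea can be sketched as blurring an occupancy grid and treating every cell above a threshold as blocked before running A*, so the planned route keeps a safety margin. This is our own reconstruction of the idea; the grid layout, blur kernel, and threshold are illustrative assumptions, not the paper's code.

    ```python
    import heapq

    def blur(grid):
        """3x3 box blur: spreads obstacle mass into neighbouring cells."""
        h, w = len(grid), len(grid[0])
        out = [[0.0] * w for _ in range(h)]
        for r in range(h):
            for c in range(w):
                vals = [grid[rr][cc]
                        for rr in range(max(0, r - 1), min(h, r + 2))
                        for cc in range(max(0, c - 1), min(w, c + 2))]
                out[r][c] = sum(vals) / len(vals)
        return out

    def astar(grid, start, goal, threshold=0.1):
        """A* over the blurred grid; blurred cells above `threshold` count as
        obstacles, so paths avoid cells adjacent to real obstacles too."""
        h, w = len(grid), len(grid[0])
        blocked = blur(grid)
        open_set = [(0, start)]
        g = {start: 0}
        parent = {}
        while open_set:
            _, cur = heapq.heappop(open_set)
            if cur == goal:
                path = [cur]
                while cur in parent:
                    cur = parent[cur]
                    path.append(cur)
                return path[::-1]
            r, c = cur
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < h and 0 <= nc < w and blocked[nr][nc] < threshold:
                    ng = g[cur] + 1
                    if ng < g.get((nr, nc), float("inf")):
                        g[(nr, nc)] = ng
                        parent[(nr, nc)] = cur
                        f = ng + abs(nr - goal[0]) + abs(nc - goal[1])
                        heapq.heappush(open_set, (f, (nr, nc)))
        return None  # no collision-free route

    grid = [[0] * 6 for _ in range(6)]
    for r in range(3):
        grid[r][3] = 1  # a wall segment; blurring inflates it by one cell
    print(astar(grid, (0, 0), (0, 5)))  # detours below the inflated wall
    ```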

    Multi-Sensor Localization and Navigation for Remote Manipulation in Smoky Areas

    When localizing mobile sensors and actuators in indoor environments, laser meters, ultrasonic meters, or even image-processing techniques are usually used. In smoky conditions, however, due to a fire or building collapse, optical methods become ineffective once the smoke or dust density grows. In these scenarios other types of sensors must be used, such as sonar, radar, or radiofrequency signals. Indoor localization in low-visibility conditions due to smoke is one of the goals of the EU GUARDIANS [1] project. The developed method aims to position a robot in front of doors, fire extinguishers, and other points of interest with enough accuracy to allow a human operator to manipulate the robot's arm in order to actuate the element. For coarse-grained localization, a fingerprinting technique based on ZigBee and WiFi signals is used, allowing the robot to navigate inside the building and get near the point of interest that requires manipulation. For fine-grained localization, a remotely controlled programmable high-intensity LED panel is used, which acts as a reference for the system in smoky conditions. Smoke detection and visual fine-grained localization are then used to position the robot precisely at the manipulation point (e.g., doors, valves, etc.).
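
    The coarse-grained stage can be sketched as nearest-neighbour matching of live received-signal-strength readings against a pre-recorded radio map; the beacon names, values, and distance metric below are hypothetical, chosen only to illustrate fingerprinting.

    ```python
    import math

    # Hypothetical radio map: location label -> mean RSSI (dBm) per beacon,
    # recorded offline during a calibration walk through the building.
    radio_map = {
        "corridor_A": {"zigbee_1": -48, "zigbee_2": -71, "wifi_ap1": -55},
        "room_12":    {"zigbee_1": -63, "zigbee_2": -52, "wifi_ap1": -70},
        "stairwell":  {"zigbee_1": -80, "zigbee_2": -66, "wifi_ap1": -49},
    }

    def locate(measurement):
        """Return the radio-map location whose fingerprint is closest (in
        Euclidean distance over shared beacons) to the live RSSI reading."""
        def dist(fp):
            shared = set(fp) & set(measurement)
            return math.sqrt(sum((fp[b] - measurement[b]) ** 2 for b in shared))
        return min(radio_map, key=lambda loc: dist(radio_map[loc]))

    print(locate({"zigbee_1": -50, "zigbee_2": -69, "wifi_ap1": -57}))  # corridor_A
    ```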

    Curvature-Based Environment Description for Robot Navigation Using Laser Range Sensors

    This work proposes a new feature detection and description approach for mobile robot navigation using 2D laser range sensors. The whole process consists of two main modules: a sensor data segmentation module and a feature detection and characterization module. The segmentation module is divided into two consecutive stages: first, the segmentation stage divides the laser scan into clusters of consecutive range readings using a distance-based criterion; then, the second stage estimates the curvature function associated with each cluster and uses it to split the cluster into a set of straight-line and curve segments. The curvature is calculated using a triangle-area representation in which, contrary to previous approaches, the triangle side lengths at each range reading are adapted to the local variations of the laser scan, removing noise without missing relevant points. This representation is invariant to translation and rotation and robust against noise, so it provides the same segmentation results even when the scene is perceived from different viewpoints. The segmentation results are then used to characterize the environment using line and curve segments, real and virtual corners, and edges. Real scan data collected from different environments using different platforms are used in the experiments to evaluate the proposed environment-description algorithm.
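
    The triangle-area idea can be sketched as follows: at each range reading, the signed area of the triangle formed with neighbours on either side approximates the local curvature, vanishing on straight-line segments and peaking at corners. The paper adapts the triangle side lengths to local scan variation; this sketch of ours keeps a fixed neighbour offset for brevity.

    ```python
    def triangle_area_curvature(points, k=3):
        """Signed triangle area at each scan point, using neighbours k steps
        away on each side: ~0 on straight walls, large at corners, with the
        sign giving the turning direction."""
        areas = [0.0] * len(points)
        for i in range(k, len(points) - k):
            (x1, y1), (x2, y2), (x3, y3) = points[i - k], points[i], points[i + k]
            areas[i] = 0.5 * ((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
        return areas

    # A right-angle corner: the area peaks at the corner point (index 7).
    scan = [(float(x), 0.0) for x in range(8)] + [(7.0, float(y)) for y in range(1, 8)]
    areas = triangle_area_curvature(scan)
    print(max(range(len(areas)), key=lambda i: abs(areas[i])))  # -> 7
    ```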

    Qualitative localization using vision and odometry for path following in topo-metric maps

    We address the problem of navigation in topo-metric maps created using odometry data and visual loop-closure detection. Based on our previous work [6], we present an optimized version of our loop-closure detection algorithm that makes it possible to create consistent topo-metric maps in real time while the robot is teleoperated. Using such a map, the proposed navigation algorithm performs qualitative localization using the same loop-closure detection framework together with the odometry data. This qualitative position is used to guide the robot along a predicted path in the topo-metric map while compensating for odometry drift. Compared with purely visual servoing approaches for similar tasks, our path-following algorithm is real-time, lightweight (no more than two images per second are processed), and robust, since odometry remains available for navigation even if visual information is absent for a short time. The approach has been validated experimentally with a Pioneer P3DX robot in indoor environments, with both embedded and remote computation.
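
    The drift-compensation logic can be sketched as a simple rule: whenever the loop-closure detector matches the current image to a map node, the metric estimate snaps to that node's stored pose; otherwise odometry keeps dead reckoning. The node layout and the trivial exact-match detector below are stand-ins of ours, not the paper's loop-closure framework.

    ```python
    def detect_loop_closure(image, map_nodes):
        """Stand-in for the visual loop-closure detector: a trivial exact
        match here, image retrieval over map nodes in the real system."""
        for node_id, node in map_nodes.items():
            if node["image"] == image:
                return node_id
        return None

    def qualitative_localize(current_image, odometry_pose, map_nodes):
        """Snap the pose estimate to a matched map node, cancelling the
        accumulated odometry drift; otherwise keep integrating odometry."""
        node_id = detect_loop_closure(current_image, map_nodes)
        if node_id is not None:
            return map_nodes[node_id]["pose"], node_id
        return odometry_pose, None

    nodes = {0: {"image": "img_a", "pose": (0.0, 0.0, 0.00)},
             1: {"image": "img_b", "pose": (4.2, 1.0, 1.57)}}
    print(qualitative_localize("img_b", (4.5, 0.7, 1.60), nodes))  # snaps to node 1
    ```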