9 research outputs found

    Efficient 2D Graph SLAM for Sparse Sensing

    Simultaneous localization and mapping (SLAM) plays a vital role in mapping unknown spaces and aiding autonomous navigation. Virtually all state-of-the-art solutions for 2D SLAM today are designed for dense and accurate sensors such as laser range-finders (LiDARs). However, these sensors are not suitable for resource-limited nano robots, which are becoming increasingly capable and ubiquitous yet tend to carry economical, low-power sensors that can only provide sparse and noisy measurements. This introduces a challenging problem called SLAM with sparse sensing. This work addresses the problem by adopting the state-of-the-art graph-based SLAM pipeline with a novel frontend and an improved loop closing in the backend, both designed to work with sparse and uncertain range data. Experiments show that the maps constructed by our algorithm have superior quality compared to prior works on sparse sensing. Furthermore, our method runs in real-time on a modern PC, with an average processing time of 1/100th of the input interval. Comment: Accepted for the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
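    The backend of such a graph-based pipeline is, at its core, nonlinear least-squares over a pose graph. The sketch below is a generic, minimal illustration of that optimisation, not this paper's frontend or its loop-closing improvement: a hypothetical 1 m square trajectory with biased odometry and a single loop closure, solved with SciPy.

```python
# Minimal pose-graph optimisation sketch (toy data, not the paper's method).
import numpy as np
from scipy.optimize import least_squares

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def relative_pose(a, b):
    """Pose of b expressed in the frame of a, for 2D poses (x, y, theta)."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    c, s = np.cos(a[2]), np.sin(a[2])
    return np.array([c * dx + s * dy, -s * dx + c * dy, wrap(b[2] - a[2])])

# Edges: (i, j, measured pose of node j in the frame of node i).
# Odometry around a 1 m square with a consistent turn bias, plus one exact
# loop closure from the last node back to the start.
turn = np.pi / 2
odom = np.array([1.0, 0.0, turn + 0.08])
edges = [(0, 1, odom), (1, 2, odom), (2, 3, odom),
         (3, 0, np.array([1.0, 0.0, turn]))]          # loop closure

def residuals(flat):
    poses = flat.reshape(-1, 3)
    res = [poses[0]]                                   # anchor pose 0 at origin
    for i, j, meas in edges:
        err = relative_pose(poses[i], poses[j]) - meas
        err[2] = wrap(err[2])
        res.append(err)
    return np.concatenate(res)

# Initial guess: dead-reckon the (biased) odometry from the origin.
guess = [np.zeros(3)]
for _, _, meas in edges[:-1]:
    x, y, th = guess[-1]
    guess.append(np.array([x + np.cos(th) * meas[0] - np.sin(th) * meas[1],
                           y + np.sin(th) * meas[0] + np.cos(th) * meas[1],
                           th + meas[2]]))

sol = least_squares(residuals, np.concatenate(guess))
print(sol.x.reshape(-1, 3))   # drift is pulled back toward a consistent square
```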

    Tactile SLAM with a biomimetic whiskered robot

    Future robots may need to navigate where visual sensors fail. Touch sensors provide an alternative modality, largely unexplored in the context of robotic map building. We present the first results in grid-based simultaneous localisation and mapping (SLAM) with biomimetic whisker sensors, and show how multi-whisker features coupled with priors about straight edges in the world can boost its performance. Our results are from a simple, small environment but are intended as a first baseline against which to measure future algorithms.
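    For readers unfamiliar with grid-based mapping, the toy sketch below shows the log-odds occupancy update that a single sparse contact, reduced to a range/bearing measurement, would trigger; the grid size, resolution, and log-odds increments are assumed values, not parameters from this paper.

```python
# Toy log-odds occupancy-grid update for one whisker contact (assumed values).
import numpy as np

RES = 0.01                      # 1 cm cells
L_OCC, L_FREE = 0.85, -0.4      # log-odds increments (illustrative choices)

grid = np.zeros((200, 200))     # 2 m x 2 m log-odds map, 0 = unknown

def integrate_contact(grid, pose, bearing, rng):
    """Mark cells along the whisker as free and the contact cell as occupied."""
    x, y, th = pose
    end = np.array([x + rng * np.cos(th + bearing),
                    y + rng * np.sin(th + bearing)])
    n = int(rng / RES) + 1
    # sample points along the ray from the robot to just before the contact
    for t in np.linspace(0.0, 1.0, n, endpoint=False):
        px, py = (1 - t) * np.array([x, y]) + t * end
        grid[int(py / RES), int(px / RES)] += L_FREE
    grid[int(end[1] / RES), int(end[0] / RES)] += L_OCC

# one whisker contact 8 cm ahead-left of a robot at the grid centre
integrate_contact(grid, pose=(1.0, 1.0, 0.0), bearing=np.deg2rad(20), rng=0.08)
prob = 1.0 / (1.0 + np.exp(-grid))    # convert log-odds back to P(occupied)
print("occupied cells:", int((prob > 0.5).sum()))
```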

    Towards hierarchical blackboard mapping on a whiskered robot

    The paradigm case for robotic mapping assumes large quantities of sensory information, which allows the use of relatively weak priors. In contrast, the present study considers the mapping problem for a mobile robot, CrunchBot, where only sparse, local tactile information from whisker sensors is available. To compensate for such weak likelihood information, we make use of low-level signal processing and strong hierarchical object priors. Hierarchical models were popular in classical blackboard systems but are here applied in a Bayesian setting as a mapping algorithm. The hierarchical models require reports of whisker distance to contact and of surface orientation at contact, and we demonstrate that this information can be retrieved by classifiers from strain data collected by CrunchBot's physical whiskers. We then provide a demonstration in simulation of how this information can be used to build maps (but not yet full SLAM) in a zero-odometry-noise environment containing walls and table-like hierarchical objects.
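    To illustrate the flavour of combining weak tactile likelihoods with strong object priors, the toy sketch below runs a recursive Bayes update over two invented object hypotheses given noisy orientation-at-contact reports; the hypotheses, priors, and likelihood widths are made up and this is not CrunchBot's hierarchical blackboard model.

```python
# Toy recursive Bayes update over invented object hypotheses.
import numpy as np

def gauss(x, mu, sigma):
    """Gaussian likelihood of an orientation report given a hypothesis."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# hypotheses: (expected surface orientation in degrees, likelihood width,
# prior belief before any contact) -- the "strong object prior"
hypotheses = {"wall": (90.0, 5.0, 0.7), "table_edge": (45.0, 10.0, 0.3)}
posterior = {h: prior for h, (_, _, prior) in hypotheses.items()}

# three noisy orientation-at-contact reports from the whisker classifiers
for obs in [88.0, 93.0, 86.5]:
    for h, (mu, sigma, _) in hypotheses.items():
        posterior[h] *= gauss(obs, mu, sigma)
    z = sum(posterior.values())
    posterior = {h: p / z for h, p in posterior.items()}

print(posterior)   # probability mass concentrates on "wall"
```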

    Simultaneous localization and mapping with limited sensing using Extended Kalman Filter and Hough transform

    The problem of a robot building a map of an unknown environment while correcting its own position from that map and its sensor data is called the Simultaneous Localization and Mapping (SLAM) problem. As sensor accuracy and precision play an important role in this problem, most proposed systems rely on high-cost laser range sensors or, more recently, cheaper RGB-D cameras. Laser range sensors are too expensive for some applications, and RGB-D cameras impose high power, CPU, or communication requirements to process their data on-board or on a PC. To build a low-cost robot, it is more appropriate to use low-cost sensors such as infrared and sonar. This study aims to map an unknown environment using a low-cost robot, an Extended Kalman Filter, and linear features such as walls and furniture. A loop-closing approach is also proposed. Experiments are performed in the Webots simulation environment.
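    As a minimal illustration of the line-feature extraction named in the title, the sketch below runs a Hough transform over a handful of idealised range points and recovers the dominant wall; the accumulator resolution and the toy points are assumptions, and the EKF landmark update that would follow is omitted.

```python
# Toy Hough-transform line extraction from sparse 2D range points.
import numpy as np

# four idealised readings along a wall at y = 1 m, plus one stray point
points = np.array([[0.0, 1.0], [0.5, 1.0], [1.0, 1.0], [1.5, 1.0], [0.7, 0.3]])
thetas = np.deg2rad(np.arange(0, 180, 1))        # candidate line normal angles
rhos = np.arange(-2.0, 2.0, 0.02)                # candidate distances to origin
acc = np.zeros((len(rhos), len(thetas)), dtype=int)

for x, y in points:                               # vote: rho = x*cos(t) + y*sin(t)
    r = x * np.cos(thetas) + y * np.sin(thetas)
    acc[np.digitize(r, rhos) - 1, np.arange(len(thetas))] += 1

ri, ti = np.unravel_index(acc.argmax(), acc.shape)
print(f"wall: rho ~ {rhos[ri]:.2f} m, theta ~ {np.rad2deg(thetas[ti]):.0f} deg, "
      f"votes = {acc[ri, ti]}")                   # peak near rho ~ 1 m, theta ~ 90 deg
```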

    Geometry model for marker-based localisation

    This work presents a novel approach for position estimation from monocular vision. Vision systems have been shown to be capable of precise and accurate measurements and are becoming the state of the art in navigation. Navigation systems have only been integrated into commercial mobile robots since the early 2000s, and yet localisation in a dynamic environment, which forms the main building block of navigation, has no truly elegant solution. Solutions are many, and their strategies and methods differ depending on the application. For lack of a single accurate procedure, methods are combined that make use of fusion of different sensors. This thesis focuses on the use of a monocular vision sensor to develop an accurate marker-based positioning system that can be used outdoors, for example in agriculture, and in indoor applications. Several contributions arise in this context. A main contribution is in perspective distortion correction, in which distortions are modelled in all their forms together with a correction process; this is essential when dealing with measurements and shapes in images. Because of the lack of robustness in depth sensing with a monocular vision-based system, a second contribution is a novel spherical marker-based approach to position capture, designed and developed within the concept of relative pose estimation. In this model-based position estimation, the relative position can be extracted instantaneously without prior knowledge of the previous state of the camera, as it relies on a single monocular image. The model can also compensate for the unknown scale of the real world, for example in monocular Visual Simultaneous Localisation and Mapping (VSLAM). In addition to these contributions, experimental and simulation evidence presented in this work shows the feasibility of reading measurements such as distance and relative pose between the marker-based model and the observer, with reliability and high accuracy. The system has shown the ability to accurately track an object at the farthest possible position from low-resolution digital images and from a single viewpoint. While the main targeted application field is tracking mobile robots, other applications can profit from this concept, such as motion capture and applications in the field of topography.
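    As a rough illustration of why a marker of known size fixes depth from a single image, the sketch below applies a plain pinhole-camera model to an invented spherical-marker detection; it is not the thesis's geometry model and omits the perspective-distortion correction that the thesis contributes.

```python
# Pinhole-camera sketch: range and bearing from the apparent size of a marker.
import math

F_PX = 800.0              # focal length in pixels (assumed calibration)
CX, CY = 320.0, 240.0     # principal point of a 640x480 image (assumed)
MARKER_DIAM = 0.20        # real marker diameter in metres (assumed)

def marker_range_bearing(u, v, diam_px):
    """Range to the marker centre and its horizontal bearing from the optical axis."""
    z = F_PX * MARKER_DIAM / diam_px          # depth from apparent size
    x = (u - CX) * z / F_PX                   # lateral offset in metres
    y = (v - CY) * z / F_PX                   # vertical offset in metres
    rng = math.sqrt(x * x + y * y + z * z)
    bearing = math.atan2(x, z)                # radians, positive to the right
    return rng, bearing

# a detected marker blob: centre at (420, 250) px, 40 px across
rng, bearing = marker_range_bearing(420.0, 250.0, 40.0)
print(f"range = {rng:.2f} m, bearing = {math.degrees(bearing):.1f} deg")
```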