7 research outputs found

    Monte Carlo Localization in Hand-Drawn Maps

    Robot localization is one of the most important problems in robotics. Most existing approaches assume that the map of the environment is available beforehand and focus on accurate metrical localization. In this paper, we address the localization problem when no map of the environment is available beforehand and the robot relies on a hand-drawn map from a non-expert user. We address this problem by expressing the robot pose in pixel coordinates and simultaneously estimating a local deformation of the hand-drawn map. Experiments show that we are able to localize the robot in the correct room with a robustness of up to 80%.
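    To make the idea concrete, below is a minimal Monte Carlo localization loop in Python. The pixel-coordinate pose and the per-particle deformation factor are illustrative stand-ins for the paper's actual formulation, and likelihood_fn is a hypothetical placeholder for the map-matching model.

    import numpy as np

    # Minimal Monte Carlo localization sketch (assumed formulation, not the
    # paper's exact model): each particle carries a pose in map-pixel
    # coordinates plus a local map-deformation factor "scale".
    rng = np.random.default_rng(0)
    N = 500
    particles = np.zeros((N, 4))          # x_px, y_px, heading, scale
    particles[:, 3] = 1.0                 # start with no deformation

    def motion_update(particles, v, w, dt):
        """Propagate particles with a noisy unicycle model, converting
        metric motion to pixels through each particle's own scale."""
        noise = rng.normal(0.0, [0.05, 0.05, 0.02, 0.01], particles.shape)
        th = particles[:, 2]
        particles[:, 0] += particles[:, 3] * v * dt * np.cos(th)
        particles[:, 1] += particles[:, 3] * v * dt * np.sin(th)
        particles[:, 2] += w * dt
        return particles + noise

    def measurement_update(particles, likelihood_fn):
        """Weight particles by how well the scan matches the hand-drawn
        map at each hypothesized pose, then resample."""
        w = np.array([likelihood_fn(p) for p in particles])
        w /= w.sum()
        idx = rng.choice(len(particles), size=len(particles), p=w)
        return particles[idx]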

    GraphTinker: Outlier Rejection and Inlier Injection for Pose Graph SLAM

    In pose graph Simultaneous Localization and Mapping (SLAM) systems, incorrect loop closures can seriously hinder optimizers from converging to correct solutions, significantly degrading both localization accuracy and map consistency. It is therefore crucial to enhance their robustness in the presence of numerous false-positive loop closures. Existing approaches tend to fail when working with very unreliable front-end systems, where the majority of inferred loop closures are incorrect. In this paper, we propose a novel middle layer, seamlessly embedded between the front and back ends, to boost the robustness of the whole SLAM system. The main contributions of this paper are twofold: 1) the proposed middle layer offers a new mechanism to reliably detect and remove false-positive loop closures, even if they form the overwhelming majority; 2) artificial loop closures are automatically reconstructed and injected into pose graphs in the framework of an Extended Rauch-Tung-Striebel smoother, reinforcing reliable loop closures. The proposed algorithm alters the graph generated by the front end, and the resulting graph can then be optimized by any back-end system. Extensive experiments demonstrate significantly improved accuracy and robustness compared with state-of-the-art methods and various back ends, verifying the effectiveness of the proposed algorithm.
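    The abstract does not spell out the rejection mechanism, so as a generic point of reference, a standard chi-square gate on loop-closure residuals is sketched below; the function names, the residual_fn callback, and the threshold are illustrative assumptions, not GraphTinker's actual test.

    import numpy as np
    from scipy.stats import chi2

    def gate_loop_closures(closures, residual_fn, dof=3, alpha=0.05):
        """Generic chi-square gating of loop closures (illustrative only;
        GraphTinker's actual mechanism is more involved). A closure is
        kept if its squared Mahalanobis distance stays below the
        chi-square quantile for the given significance level."""
        threshold = chi2.ppf(1.0 - alpha, df=dof)
        kept = []
        for c in closures:
            r, cov = residual_fn(c)              # residual and covariance
            d2 = r @ np.linalg.solve(cov, r)     # squared Mahalanobis distance
            if d2 < threshold:
                kept.append(c)
        return kept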

    Pose-graph neural network classifier for global optimality prediction in 2D SLAM

    The ability to decide whether a solution to a pose-graph problem is globally optimal is of high significance for safety-critical applications. Converging to a local minimum may result in severe estimation errors along the estimated trajectory. In this paper, we propose a graph neural network based on a novel implementation of a graph convolutional-like layer, called PoseConv, to classify pose-graphs as optimal or sub-optimal. PoseConv requires incorporating a new node feature, referred to as cost, to hold the information that the nodes communicate. A training and testing dataset was generated from publicly available benchmarking pose-graphs. The neural classifier is trained and extensively tested on several subsets of the pose-graph samples in the dataset. Testing results prove the model's capability to perform classification with 92-98% accuracy for the different partitions of the training and testing dataset. In addition, the model was able to generalize to previously unseen variants of the pose-graphs in the training dataset. Our method trades a small amount of accuracy for a large improvement in processing time, making it faster than other existing methods by up to three orders of magnitude, which can be of paramount importance when using computationally limited robots overseen by human operators.
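    For orientation only, the sketch below shows a textbook graph-convolution update with a per-node feature vector that could include a scalar cost column; this is the generic GCN rule, not the published PoseConv layer.

    import numpy as np

    def gcn_layer(A, H, W):
        """One generic graph-convolution layer,
        H' = ReLU(D^-1/2 (A+I) D^-1/2 H W). Here H stacks per-node
        features such as the pose and a scalar 'cost' column; this is a
        textbook GCN update, not PoseConv itself."""
        A_hat = A + np.eye(A.shape[0])           # add self-loops
        d = A_hat.sum(axis=1)
        D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
        return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

    # Toy pose graph: 4 nodes, features = (x, y, theta, cost).
    A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
    H = np.random.rand(4, 4)
    W = np.random.rand(4, 8)
    print(gcn_layer(A, H, W).shape)              # (4, 8)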

    Robot Mapping and Navigation in Real-World Environments

    Robots can perform various tasks, such as mapping hazardous sites, taking part in search-and-rescue scenarios, or delivering goods and people. Robots operating in the real world face many challenges on the way to completing their missions. Essential capabilities required for the operation of such robots are mapping, localization and navigation. Solving all of these tasks robustly presents a substantial difficulty, as these components are usually interconnected: a robot that starts without any knowledge about the environment must simultaneously build a map, localize itself in it, analyze the surroundings and plan a path to efficiently explore an unknown environment. In addition to the interconnections between these tasks, they highly depend on the sensors used by the robot and on the type of environment in which the robot operates. For example, an RGB camera can be used in an outdoor scene to compute visual odometry or to detect dynamic objects, but becomes less useful in an environment that does not have enough light for cameras to operate. The software that controls the behavior of the robot must seamlessly process all the data coming from different sensors. This often leads to systems that are tailored to a particular robot and a particular set of sensors.

    In this thesis, we challenge this concept by developing and implementing methods for a typical robot navigation pipeline that work seamlessly with different types of sensors, in both indoor and outdoor environments. With the emergence of new range-sensing RGBD and LiDAR sensors, there is an opportunity to build a single system that operates robustly in indoor and outdoor environments equally well and thus extends the application areas of mobile robots. The techniques presented in this thesis aim to be used with both RGBD and LiDAR sensors without adaptations for individual sensor models, by using a range image representation, and aim to provide methods for navigation and scene interpretation in both static and dynamic environments.

    For a static world, we present a number of approaches that address the core components of a typical robot navigation pipeline. At the core of building a consistent map of the environment with a mobile robot lies point cloud matching. To this end, we present a method for photometric point cloud matching that treats RGBD and LiDAR sensors in a uniform fashion and is able to accurately register point clouds at the frame rate of the sensor. This method serves as a building block for the rest of the mapping pipeline. In addition to the matching algorithm, we present a method for traversability analysis of the currently observed terrain, in order to guide an autonomous robot to the safe parts of the surrounding environment. A source of danger when navigating difficult-to-access sites is the fact that the robot may fail to build a correct map of the environment. This dramatically impacts the ability of an autonomous robot to navigate towards its goal in a robust way; it is therefore important for the robot to be able to detect these situations and to find its way home without relying on any kind of map. To address this challenge, we present a method for analyzing the quality of the map that the robot has built to date, and for safely returning the robot to the starting point in case the map is found to be in an inconsistent state.

    Scenes in dynamic environments are vastly different from those experienced in static ones. In a dynamic setting, objects can be moving, making static traversability estimates insufficient. With the approaches developed in this thesis, we aim at identifying distinct objects and tracking them to aid navigation and scene understanding. We target these challenges by providing a method for clustering a scene taken with a LiDAR scanner, together with a measure of similarity between two clustered objects that can aid tracking performance. All methods presented in this thesis are capable of supporting real-time robot operation, rely on RGBD or LiDAR sensors, and have been tested on real robots in real-world environments and on real-world datasets. All approaches have been published in peer-reviewed conference papers and journal articles. In addition, most of the presented contributions have been publicly released as open source software.
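    As an illustration of the range image representation mentioned above, here is a minimal spherical projection of a LiDAR point cloud in Python/numpy; the resolution and field-of-view values are arbitrary example numbers, not tied to the thesis or any particular sensor.

    import numpy as np

    def to_range_image(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
        """Project an (N, 3) point cloud into an h x w range image via
        spherical coordinates (a common LiDAR convention; the FOV values
        here are example numbers, not a specific sensor model)."""
        r = np.linalg.norm(points, axis=1)
        yaw = np.arctan2(points[:, 1], points[:, 0])
        pitch = np.arcsin(points[:, 2] / np.maximum(r, 1e-9))
        fu, fd = np.radians(fov_up), np.radians(fov_down)
        u = ((1.0 - (yaw / np.pi + 1.0) / 2.0) * w).astype(int) % w
        v = ((fu - pitch) / (fu - fd) * h).clip(0, h - 1).astype(int)
        img = np.full((h, w), -1.0)              # -1 marks empty cells
        img[v, u] = r                            # keep the last point per cell
        return img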

    Meta Information in Graph-based Simultaneous Localisation And Mapping

    Establishing the spatial and temporal relationships between a robot and its environment serves as a basis for scene understanding. The established approach in the literature to simultaneously build a representation of the environment and to localise the robot spatially and temporally within it is Simultaneous Localisation And Mapping (SLAM). SLAM algorithms in general, and visual SLAM in particular, where the primary sensors are cameras, have gained a great amount of attention in the robotics and computer vision communities over the last few decades due to their wide range of applications. Advances in sensing technologies and image-based learning techniques provide an opportunity to introduce additional understanding of the environment and improve the performance of SLAM algorithms. In this thesis, I utilise meta information in a SLAM framework to achieve a robust and consistent representation of the environment and to challenge some of the most limiting assumptions in the literature.

    I exploit structural information associated with geometric primitives, making use of the significant amount of structure present in the real-world scenes where SLAM algorithms are normally deployed. In particular, I exploit the planarity of groups of points and introduce higher-level information associated with the orthogonality and parallelism of planes to achieve structural consistency of the returned map. Separately, I challenge the static-world assumption that severely limits the deployment of autonomous mobile robotic systems in a wide range of important real-world applications involving highly dynamic and unstructured environments, by utilising the semantic and dynamic information in the scene. Most existing techniques try to simplify the problem by ignoring dynamics, relying on a pre-collected database of objects' 3D models, or imposing motion constraints, or they fail to estimate the full SE(3) motions of objects in the scene, which makes them infeasible to deploy in real-life scenarios of unknown and highly dynamic environments. Exploiting semantic and dynamic information in the environment allows me to introduce a model-free, object-aware SLAM system that achieves robust moving-object tracking, accurate estimation of dynamic objects' full SE(3) motion, and extraction of velocity information for moving objects in the scene, resulting in accurate robot localisation and spatio-temporal map estimation.
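    One common way to encode such orthogonality and parallelism information is as residuals over unit plane normals; the Python sketch below is a generic illustration under that assumption, not the thesis's actual factor formulation.

    import numpy as np

    def parallel_residual(n1, n2):
        """Residual that vanishes when two unit plane normals are
        parallel (or anti-parallel): the norm of their cross product."""
        return np.linalg.norm(np.cross(n1, n2))

    def orthogonal_residual(n1, n2):
        """Residual that vanishes when two unit plane normals are
        orthogonal: the absolute value of their dot product."""
        return abs(np.dot(n1, n2))

    # Example: a floor normal and a wall normal should be orthogonal.
    floor = np.array([0.0, 0.0, 1.0])
    wall = np.array([1.0, 0.0, 0.02])
    print(orthogonal_residual(floor, wall / np.linalg.norm(wall)))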

    A Statistical Measure for Map Consistency in SLAM

    Map consistency is an important requirement for applications in which mobile robots need to effectively perform autonomous navigation tasks. While recent SLAM techniques provide increased robustness even in the context of bad initializations or data-association outliers, the question of how to determine whether or not the resulting map is consistent is still an open problem. In this paper, we introduce a novel measure for map consistency. We compute this measure by taking into account the discrepancies in the sensor data, and we leverage it to address two important problems in SLAM. First, we derive a statistical test for assessing whether a map is consistent or not. Second, we employ it to automatically set the free parameter of dynamic covariance scaling, a robust SLAM back-end. We present an evaluation of our approach on over 50 maps sourced from 16 publicly available datasets and illustrate its capability for inconsistency detection and for tuning the parameter of the back-end.
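    The abstract does not give the form of the statistical test; purely as an illustration, a one-sided z-test over per-scan-pair discrepancy scores might look as follows, where mu0 and sigma0 are hypothetical calibration values for a consistent map.

    import numpy as np
    from scipy.stats import norm

    def map_consistent(discrepancies, mu0, sigma0, alpha=0.01):
        """Illustrative one-sided z-test (not the paper's exact
        statistic): given per-scan-pair discrepancy scores, test whether
        their mean exceeds the level mu0 expected for a consistent map,
        with mu0 and sigma0 as placeholder calibration values."""
        z = (np.mean(discrepancies) - mu0) / (sigma0 / np.sqrt(len(discrepancies)))
        return z < norm.ppf(1.0 - alpha)         # True -> map deemed consistent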
