174 research outputs found

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved?
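
    The de-facto standard formulation mentioned above is maximum a posteriori (MAP) estimation over a factor graph. In commonly used notation (not necessarily the survey's exact symbols), with variables $\mathcal{X}$ collecting robot poses and landmarks, measurements $z_k$, measurement models $h_k$, and covariances $\Sigma_k$, and assuming Gaussian noise:

        \mathcal{X}^{\star}
          = \arg\max_{\mathcal{X}} \, p(\mathcal{X} \mid \mathcal{Z})
          = \arg\min_{\mathcal{X}} \sum_{k} \big\lVert h_k(\mathcal{X}_k) - z_k \big\rVert_{\Sigma_k}^{2}

    That is, a nonlinear least-squares problem over squared Mahalanobis residuals, whose sparsity modern solvers exploit through the factor graph structure.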

    Long-Term Simultaneous Localization and Mapping in Dynamic Environments.

    One of the core competencies required for autonomous mobile robotics is the ability to use sensors to perceive the environment. From this noisy sensor data, the robot must build a representation of the environment and localize itself within this representation. This process, known as simultaneous localization and mapping (SLAM), is a prerequisite for almost all higher-level autonomous behavior in mobile robotics. By associating the robot's sensory observations as it moves through the environment, and by observing the robot's ego-motion through proprioceptive sensors, constraints are placed on the trajectory of the robot and the configuration of the environment. This results in a probabilistic optimization problem to find the most likely robot trajectory and environment configuration given all of the robot's previous sensory experience. SLAM has been well studied under the assumptions that the robot operates for a relatively short time period and that the environment is essentially static during operation. However, performing SLAM over long time periods while modeling the dynamic changes in the environment remains a challenge. The goal of this thesis is to extend the capabilities of SLAM to enable long-term autonomous operation in dynamic environments. The contribution of this thesis has three main components. First, we propose a framework for controlling the computational complexity of the SLAM optimization problem so that it does not grow unbounded with exploration time. Second, we present a method to learn visual feature descriptors that are more robust to changes in lighting, allowing for improved data association in dynamic environments. Finally, we use the proposed tools in a SLAM system that explicitly models the dynamics of the environment in the map by representing each location as a set of example views that capture how the location changes with time. We experimentally demonstrate that the proposed methods enable long-term SLAM in dynamic environments using a large, real-world vision and LIDAR dataset collected over the course of more than a year. This dataset captures a wide variety of dynamics: from short-term scene changes including moving people, cars, changing lighting, and weather conditions; to long-term dynamics including seasonal conditions and structural changes caused by construction. (PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies; http://deepblue.lib.umich.edu/bitstream/2027.42/111538/1/carlevar_1.pd)
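
    The probabilistic optimization the abstract describes reduces, in its simplest form, to least squares over relative-pose constraints. The toy sketch below uses 1-D poses and values of my choosing (the thesis' actual system fuses vision and LIDAR over full 3-D poses) to show how odometry and a loop-closure constraint are fused:

        import numpy as np

        def solve_pose_chain(n_poses, constraints):
            # constraints: (i, j, z, w) meaning x[j] - x[i] should equal z, with weight w
            H = np.zeros((n_poses, n_poses))   # information matrix (J^T W J)
            g = np.zeros(n_poses)              # gradient (J^T W r) at x = 0
            H[0, 0] = 1e9                      # strong prior pins pose 0 (gauge freedom)
            for i, j, z, w in constraints:
                r = 0.0 - z                    # residual at the all-zero linearization point
                H[i, i] += w; H[j, j] += w
                H[i, j] -= w; H[j, i] -= w
                g[i] -= w * r; g[j] += w * r
            return np.linalg.solve(H, -g)      # one Gauss-Newton step solves this linear case

        # Odometry claims each of 4 steps moves +1.0; a loop closure insists pose 4
        # is only 3.8 from pose 0. Least squares spreads the disagreement evenly.
        odometry = [(k, k + 1, 1.0, 1.0) for k in range(4)]
        loop_closure = [(0, 4, 3.8, 4.0)]
        print(solve_pose_chain(5, odometry + loop_closure))

    Bounding the computational complexity, as the thesis proposes, then amounts to keeping the size of this optimization problem from growing without limit as the robot keeps exploring.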

    Theory, Design, and Implementation of Landmark Promotion Cooperative Simultaneous Localization and Mapping

    Simultaneous Localization and Mapping (SLAM) is a challenging problem in practice, and the use of multiple robots and inexpensive sensors poses even more demands on the designer. Cooperative SLAM poses specific challenges in the areas of computational efficiency, software/network performance, and robustness to errors. New methods in image processing, recursive filtering, and SLAM have been developed to implement practical algorithms for cooperative SLAM on a set of inexpensive robots. The Consolidated Unscented Mixed Recursive Filter (CUMRF) is designed to handle non-linear systems with non-Gaussian noise. This is accomplished using the Unscented Transform combined with Gaussian Mixture Models. The Robust Kalman Filter is an extension of the Kalman Filter algorithm that improves the ability to remove erroneous observations using Principal Component Analysis (PCA) and the X84 outlier rejection rule. Forgetful SLAM is a local SLAM technique that runs in nearly constant time relative to the number of visible landmarks and improves poorly performing sensors through sensor fusion and outlier rejection. Forgetful SLAM correlates all measured observations but stops the state from growing over time. Hierarchical Active Ripple SLAM (HAR-SLAM) is a new SLAM architecture that breaks the traditional state space of SLAM into a chain of smaller state spaces, allowing multiple robots, multiple sensors, and multiple updates to occur in linear time with linear storage with respect to the number of robots, landmarks, and robot poses. This dissertation presents explicit methods for closing the loop, joining multiple robots, and active updates. Landmark Promotion SLAM is a hierarchy of new SLAM methods built on the Robust Kalman Filter, Forgetful SLAM, and HAR-SLAM. Practical aspects of SLAM are a focus of this dissertation. LK-SURF is a new image processing technique that combines Lucas-Kanade feature tracking with Speeded-Up Robust Features to perform spatial and temporal tracking; typical stereo correspondence techniques fail either at providing descriptors for features or at temporal tracking. Several calibration and modeling techniques are also covered, including calibrating stereo cameras, aligning stereo cameras to an inertial system, and building neural-net system models. These methods are important for improving the quality of the data and images acquired for the SLAM process.
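
    The X84 rule referenced above is a standard robust-statistics rejection rule: discard observations whose residual lies more than k median absolute deviations (MADs) from the median, with k = 5.2 corresponding to roughly 3.5 standard deviations for Gaussian data. A minimal sketch of the rule itself (the sample values are illustrative; the dissertation's integration into the Kalman filter is not reproduced here):

        import numpy as np

        def x84_inliers(residuals, k=5.2):
            # Keep residuals within k MADs of the median; MAD is far less
            # sensitive to outliers than the sample standard deviation.
            med = np.median(residuals)
            mad = np.median(np.abs(residuals - med))
            if mad == 0.0:
                return np.ones_like(residuals, dtype=bool)
            return np.abs(residuals - med) <= k * mad

        # Innovation residuals from a batch of observations; the two gross
        # outliers (e.g. mismatched features) are flagged before the update.
        res = np.array([0.1, -0.2, 0.05, 0.15, -0.1, 4.8, -6.3])
        print(x84_inliers(res))   # -> [ True  True  True  True  True False False]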

    Robust state estimation methods for robotics applications

    State estimation is an integral component of any autonomous robotic system. Finding the correct position, velocity, and orientation of an agent in its environment enables it to perform other tasks such as mapping, interacting with the environment, and collaborating with other agents. State estimation is achieved by using data obtained from multiple sensors and fusing them in a probabilistic framework. These include inertial data from an Inertial Measurement Unit (IMU), images from cameras, range data from lidars, and positioning data from Global Navigation Satellite System (GNSS) receivers. The main challenge faced in sensor-based state estimation is the presence of noisy or erroneous data, or even a lack of informative data. Common examples of such situations include wrong feature matching between images or point clouds, false loop closures due to perceptual aliasing (different places that look similar can confuse the robot), the presence of dynamic objects in the environment (odometry algorithms assume a static environment), and multipath errors for GNSS (satellite signals reflecting off tall structures like buildings before reaching the receiver). This work studies existing and new ways of making standard estimation algorithms like the Kalman filter and factor graphs robust to such adverse conditions without losing performance in ideal, outlier-free conditions. The first part of this work demonstrates the importance of robust Kalman filters for wheel-inertial odometry on high-slip terrain. Next, inertial data is integrated into GNSS factor graphs to improve their accuracy and robustness. Lastly, a combined framework for improving the robustness of non-linear least squares and estimating the inlier noise threshold is proposed and tested with point cloud registration and lidar-inertial odometry algorithms, followed by an algorithmic analysis of optimizing generalized robust cost functions with factor graphs for the GNSS positioning problem.
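
    A common way to robustify the non-linear least squares at the core of factor graphs is to replace the quadratic cost with a robust one such as Huber's, optimized by iteratively reweighted least squares (IRLS). The sketch below uses a linear toy problem of my construction, not the thesis' framework, to show the mechanism:

        import numpy as np

        def huber_weights(r, delta=1.0):
            # IRLS weights for the Huber cost: 1 near zero, delta/|r| in the
            # tails, so gross outliers lose influence instead of dominating.
            a = np.maximum(np.abs(r), 1e-12)   # guard against division by zero
            return np.minimum(1.0, delta / a)

        def irls(A, b, delta=1.0, iters=10):
            # Each pass solves the weighted normal equations A^T W A x = A^T W b
            # with weights recomputed from the current residuals.
            x = np.linalg.lstsq(A, b, rcond=None)[0]   # plain least-squares start
            for _ in range(iters):
                w = huber_weights(A @ x - b, delta)
                Aw = A * w[:, None]
                x = np.linalg.solve(A.T @ Aw, Aw.T @ b)
            return x

        # Line fit with two gross outliers standing in for multipath-corrupted
        # GNSS pseudoranges; IRLS recovers slope ~2.0 and intercept ~1.0.
        t = np.arange(10.0)
        y = 2.0 * t + 1.0
        y[3] += 15.0
        y[7] -= 20.0
        A = np.column_stack([t, np.ones_like(t)])
        print(irls(A, y))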

    Simultaneous Localization and Mapping Systems Robust to Perceptual Aliasing

    Autonomous robotics is growing fast in popularity and has a large range of potential new applications. While the current market is dominated by human-controlled robots, many companies aim to revolutionize our daily lives by focusing on autonomous robotic platforms such as self-driving cars. Indeed, companies around the world regularly promise ground-breaking innovations and show very impressive demonstrations of autonomous robots. However, to gain the public acceptance they need to prosper, these autonomous systems have to be as safe and as reliable as possible. Unfortunately, current implementations are not yet sufficiently robust, so academic and industrial researchers need to investigate better and more trustworthy solutions to the many challenges of autonomous navigation and behavior. In particular, one of the most crucial components of most autonomous systems is the self-localization mechanism. This component is essential for the deployment of robots in GPS-denied environments (e.g. indoors, underground, underwater), since there a robot needs to estimate its own position in its environment based on the measurements acquired by its onboard sensors. In that regard, one of the most popular techniques is simultaneous localization and mapping (SLAM), in which the robot builds a map of its surrounding environment to track and estimate its own movements and position. This technique has proven to be very efficient, but it is also known to be quite vulnerable to data association errors and the presence of spurious measurements. Engineers often circumvent these problems through very precise, yet cumbersome, parameter tuning. Such environment-specific parameter tuning is appropriate for the controlled environments found in research laboratories, but it is by no means a sufficient solution for consumer robots deployed in the wild and sold to untrained customers. One of the main causes of errors in SLAM is the perceptual aliasing phenomenon, in which two different places are confused as the same by the robot. This phenomenon leads to the addition of spurious measurements in the estimation mechanism, which in turn leads to the failure of the whole system. To address these robustness challenges in SLAM systems, this thesis proposes two contributions to the scientific literature. The first introduces a new robust formulation of the core optimization problem in SLAM that explicitly models the perceptual aliasing phenomenon to efficiently reject spurious measurements. The second presents a distributed, online, and robust solution for multi-robot SLAM in robotic teams. This contribution is particularly important since multi-robot systems are more vulnerable to perceptual aliasing than single-robot systems. Extensive experimental results in simulation, on real-world datasets, and in the field show that the proposed techniques produce accurate localization estimates in the presence of spurious measurements.
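
    The thesis' explicit aliasing model is not reproduced here, but a widely used family of defenses against aliased loop closures checks candidate closures for mutual consistency and keeps only the largest mutually consistent subset. A toy 1-D sketch of that idea (function names, tolerance, and the greedy search are illustrative stand-ins for the max-clique formulations used in practice):

        import numpy as np

        def consistent(lc_a, lc_b, odom, tol=0.5):
            # 1-D pairwise check: composing closure a, odometry, reversed closure b,
            # and odometry back should traverse a cycle summing to ~0. Real systems
            # do this over full poses with a covariance-aware (Mahalanobis) test.
            (ia, ja, za), (ib, jb, zb) = lc_a, lc_b
            cycle = za + (odom[jb] - odom[ja]) - zb + (odom[ia] - odom[ib])
            return abs(cycle) < tol

        def max_consistent_set(closures, odom):
            # Greedy stand-in for the max-clique search over the consistency graph.
            best = []
            for seed in closures:
                group = [seed]
                for c in closures:
                    if c is not seed and all(consistent(c, g, odom) for g in group):
                        group.append(c)
                if len(group) > len(best):
                    best = group
            return best

        odom = np.arange(6.0)                 # odometry estimates of 6 poses
        good = [(0, 5, 5.1), (1, 4, 2.9)]     # mutually consistent loop closures
        aliased = [(0, 4, 1.0)]               # spurious closure from perceptual aliasing
        print(max_consistent_set(good + aliased, odom))   # keeps only the two good ones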

    SEM-GAT: Explainable Semantic Pose Estimation using Learned Graph Attention

    This paper proposes a Graph Neural Network (GNN)-based method for exploiting semantics and local geometry to guide the identification of reliable pointcloud registration candidates. Semantic and morphological features of the environment serve as key reference points for registration, enabling accurate lidar-based pose estimation. Our novel lightweight static graph structure informs our attention-based node aggregation network by identifying semantic-instance relationships, acting as an inductive bias to significantly reduce the computational burden of pointcloud registration. By connecting candidate nodes and exploiting cross-graph attention, we identify confidence scores for all potential registration correspondences and estimate the displacement between pointcloud scans. Our pipeline enables introspective analysis of the model's performance by correlating it with the individual contributions of local structures in the environment, providing valuable insights into the system's behaviour. We test our method on the KITTI odometry dataset, achieving competitive accuracy compared to benchmark methods and higher track smoothness while relying on significantly fewer network parameters. (Comment: International Conference on Advanced Robotics, ICAR 2023.)
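
    Once per-correspondence confidence scores are available, a standard way (not necessarily SEM-GAT's exact estimation head) to turn weighted correspondences into a displacement estimate is the weighted Kabsch algorithm; a minimal sketch:

        import numpy as np

        def weighted_rigid_transform(P, Q, w):
            # Weighted Kabsch: rotation R and translation t minimizing
            # sum_i w_i * ||R @ P[i] + t - Q[i]||^2.  P, Q: (N, 3); w: (N,).
            w = w / w.sum()
            mp = (w[:, None] * P).sum(axis=0)          # weighted centroids
            mq = (w[:, None] * Q).sum(axis=0)
            H = (P - mp).T @ (w[:, None] * (Q - mq))   # weighted cross-covariance
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))     # avoid reflection solutions
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = mq - R @ mp
            return R, t

        # A low-confidence (w = 0.05) bad match barely perturbs the recovered motion.
        rng = np.random.default_rng(0)
        P = rng.normal(size=(20, 3))
        R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
        Q = P @ R_true.T + np.array([0.5, -0.2, 0.1])
        Q[0] += 5.0                                    # one wrong correspondence...
        w = np.ones(20); w[0] = 0.05                   # ...down-weighted by confidence
        R, t = weighted_rigid_transform(P, Q, w)
        print(np.round(R, 2), np.round(t, 2))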
