6 research outputs found

    Complexity Analysis and Efficient Measurement Selection Primitives for High-Rate Graph SLAM

    Sparsity has been widely recognized as crucial for efficient optimization in graph-based SLAM. Because the sparsity and structure of the SLAM graph reflect the set of incorporated measurements, many methods for sparsification have been proposed in hopes of reducing computation. These methods often focus narrowly on reducing edge count without regard for structure at a global level. Such structurally-naive techniques can fail to produce significant computational savings, even after aggressive pruning. In contrast, simple heuristics such as measurement decimation and keyframing are known empirically to produce significant computation reductions. To demonstrate why, we propose a quantitative metric called elimination complexity (EC) that bridges the existing analytic gap between graph structure and computation. EC quantifies the complexity of the primary computational bottleneck: the factorization step of a Gauss-Newton iteration. Using this metric, we show rigorously that decimation and keyframing impose favorable global structures and therefore achieve computation reductions on the order of r^2/9 and r^3, respectively, where r is the pruning rate. We additionally present numerical results showing EC provides a good approximation of computation in both batch and incremental (iSAM2) optimization and demonstrate that pruning methods promoting globally-efficient structure outperform those that do not.
    Comment: Pre-print accepted to ICRA 201
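    As a rough illustration of the two heuristics the abstract compares (not the paper's implementation), the Python sketch below applies measurement decimation and keyframing to a set of hypothetical loop-closure edges on a pose chain; the pose count, pruning rate, and random edge generator are assumptions made for illustration only.

```python
# Illustrative sketch (not the paper's code): decimation vs. keyframing
# as measurement-selection heuristics on a simple pose chain.
import random

def decimate(measurements, r):
    """Keep every r-th measurement and discard the rest."""
    return [m for i, m in enumerate(measurements) if i % r == 0]

def keyframe(measurements, r):
    """Keep only measurements whose endpoints are both keyframes
    (every r-th pose); other poses attach to the graph through
    odometry alone."""
    return [(i, j) for (i, j) in measurements if i % r == 0 and j % r == 0]

if __name__ == "__main__":
    random.seed(0)
    n_poses, r = 1000, 5
    # Hypothetical loop-closure edges between random pose pairs.
    loops = [tuple(sorted(random.sample(range(n_poses), 2))) for _ in range(500)]
    print("original edges:  ", len(loops))
    print("after decimation:", len(decimate(loops, r)))   # roughly 1/r survive
    print("after keyframing:", len(keyframe(loops, r)))   # roughly 1/r^2 survive
```

    In this toy setting, decimation thins edges uniformly while keyframing restricts edges to a small subset of poses, which hints at why the two heuristics impose different global structures on the graph.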

    Location utility-based map reduction

    Maps used for navigation often include a database of location descriptions for place recognition (loop closing), which permits bounded-error performance. A standard pose-graph SLAM system adds a new entry for every new pose into the location database, which grows linearly and unbounded in time and thus becomes unsustainable. To address this issue, in this paper we propose a new map-reduction approach that pre-constructs a fixed-size place-recognition database amenable to the limited storage and processing resources of the vehicle by exploiting the high-level structure of the environment as well as the vehicle motion. In particular, we introduce the concept of location utility - which encapsulates the visitation probability of a location and its spatial distribution relative to nearby locations in the database - as a measure of the value of potential loop-closure events occurring at that location. While finding the optimal reduced location database is NP-hard, we develop an efficient greedy algorithm to sort all the locations in a map based on their relative utility without access to sensor measurements or the vehicle trajectory. This enables pre-determination of a generic, limited-size place-recognition database containing the N best locations in the environment. To validate the proposed approach, we develop an open-source street-map simulator using real city-map data and show that an accurate map (pose-graph) can be attained even when using a place-recognition database with only 1% of the entries of the corresponding full database.
    Charles Stark Draper Laboratory (Fellowship)
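    A minimal sketch of the greedy ranking idea follows. The utility model below is a stand-in: the paper's location utility combines visitation probability with the spatial distribution of nearby database locations, whereas the score, weights, and coordinates here are assumptions for illustration.

```python
# Minimal sketch of greedy utility-based location selection.
# Placeholder utility: reward locations with high visitation probability
# that are far from locations already placed in the database.
import math

def select_locations(locations, visit_prob, n_keep, min_sep_weight=1.0):
    """locations: list of (x, y); visit_prob: list of probabilities.
    Greedily pick n_keep locations by a simple utility score."""
    selected = []
    remaining = list(range(len(locations)))
    while remaining and len(selected) < n_keep:
        best, best_score = None, -math.inf
        for idx in remaining:
            # Distance to the nearest already-selected location (inf if none).
            d = min((math.dist(locations[idx], locations[s]) for s in selected),
                    default=math.inf)
            spread = d if math.isfinite(d) else 1.0
            score = visit_prob[idx] * (1.0 + min_sep_weight * spread)
            if score > best_score:
                best, best_score = idx, score
        selected.append(best)
        remaining.remove(best)
    return selected

if __name__ == "__main__":
    locs = [(0, 0), (1, 0), (5, 5), (5, 6), (10, 0)]
    probs = [0.9, 0.8, 0.6, 0.5, 0.3]
    print(select_locations(locs, probs, n_keep=3))  # indices of the kept locations
```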

    Information-Driven Direct RGB-D Odometry

    This paper presents an information-theoretic approach to point selection for direct RGB-D odometry. The aim is to select only the most informative measurements, in order to reduce the optimization problem with minimal impact on accuracy. It is common practice in visual odometry/SLAM to track several hundred points, achieving real-time performance on high-end desktop PCs. Reducing their computational footprint will facilitate the implementation of odometry and SLAM on low-end platforms such as small robots and AR/VR glasses. Our experimental results show that our novel information-based selection criteria allow us to reduce the number of tracked points by an order of magnitude (down to only 24 of them), achieving accuracy similar to the state of the art (sometimes outperforming it) while reducing the computational demand by 10×.
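    A hedged sketch of the general approach (not the paper's exact criterion): score each candidate point by how much information its residual Jacobian adds to the camera-pose estimate and keep only the top-k. The log-determinant score, the 6-DoF parameterization, and the random Jacobians below are assumptions for illustration, not a real photometric model.

```python
# Illustrative sketch of information-driven point selection: score each
# candidate point by the log-determinant increase of the 6x6 pose
# information matrix contributed by its residual Jacobian, keep the top-k.
import numpy as np

def information_gain(H_prior, J, sigma=1.0):
    """Log-det increase of the pose information matrix from one point."""
    H_point = J.T @ J / sigma**2                      # 6x6 contribution
    _, logdet_new = np.linalg.slogdet(H_prior + H_point)
    _, logdet_old = np.linalg.slogdet(H_prior)
    return logdet_new - logdet_old

def select_points(jacobians, k, prior_weight=1e-3):
    H_prior = prior_weight * np.eye(6)
    scores = [information_gain(H_prior, J) for J in jacobians]
    return np.argsort(scores)[::-1][:k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical 1x6 residual Jacobians for 200 candidate points.
    jacobians = [rng.normal(size=(1, 6)) for _ in range(200)]
    print(select_points(jacobians, k=24))             # indices of kept points
```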

    Kullback-Leibler divergence based graph pruning in robotic feature mapping

    In pose-feature graph simultaneous localization and mapping (SLAM), the robot poses and feature positions are treated as graph nodes, and the odometry and observations are treated as edges. The size of the graph exerts an important influence on the efficiency of graph optimization. Conventionally, the size of the graph is kept small by discarding the current frame if it is not spatially far enough from the previous one or not informative enough. However, these approaches cannot discard already-preserved frames when the robot re-visits a previously explored area. We propose a measure derived from the Kullback-Leibler divergence to decide whether a frame should be discarded, achieving an online implementation of the graph pruning algorithm for feature mapping in which the pruned frame can be any of the preserved frames. Experimental results using real-world datasets show that the proposed pruning algorithm can effectively reduce the size of the graph while maintaining map accuracy. © 2013 IEEE
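    A minimal sketch of the decision rule, assuming Gaussian marginals for the state estimate with and without the candidate frame; the means, covariances, and threshold below are placeholders, not the paper's derivation.

```python
# Minimal sketch: decide whether to prune a frame by the KL divergence
# between the Gaussian state estimate with and without that frame.
# The means and covariances here stand in for the graph's marginals.
import numpy as np

def kl_gaussian(mu0, S0, mu1, S1):
    """KL( N(mu0, S0) || N(mu1, S1) ) for multivariate Gaussians."""
    d = mu0.shape[0]
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0)
                  + diff @ S1_inv @ diff
                  - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def should_discard(mu_full, S_full, mu_pruned, S_pruned, threshold=0.05):
    """Discard the frame if the pruned estimate stays close to the full one."""
    return kl_gaussian(mu_full, S_full, mu_pruned, S_pruned) < threshold

if __name__ == "__main__":
    mu_full   = np.array([1.0, 2.0, 0.1])
    S_full    = np.diag([0.010, 0.010, 0.005])
    mu_pruned = np.array([1.01, 2.0, 0.1])
    S_pruned  = np.diag([0.012, 0.011, 0.005])
    print(should_discard(mu_full, S_full, mu_pruned, S_pruned))
```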

    Generic Node Removal for Factor-Graph SLAM


    Long-Term Simultaneous Localization and Mapping in Dynamic Environments.

    One of the core competencies required for autonomous mobile robotics is the ability to use sensors to perceive the environment. From this noisy sensor data, the robot must build a representation of the environment and localize itself within this representation. This process, known as simultaneous localization and mapping (SLAM), is a prerequisite for almost all higher-level autonomous behavior in mobile robotics. By associating the robot's sensory observations as it moves through the environment, and by observing the robot's ego-motion through proprioceptive sensors, constraints are placed on the trajectory of the robot and the configuration of the environment. This results in a probabilistic optimization problem to find the most likely robot trajectory and environment configuration given all of the robot's previous sensory experience. SLAM has been well studied under the assumptions that the robot operates for a relatively short time period and that the environment is essentially static during operation. However, performing SLAM over long time periods while modeling the dynamic changes in the environment remains a challenge. The goal of this thesis is to extend the capabilities of SLAM to enable long-term autonomous operation in dynamic environments. This thesis makes three main contributions: First, we propose a framework for controlling the computational complexity of the SLAM optimization problem so that it does not grow unbounded with exploration time. Second, we present a method to learn visual feature descriptors that are more robust to changes in lighting, allowing for improved data association in dynamic environments. Finally, we use the proposed tools in SLAM systems that explicitly model the dynamics of the environment in the map by representing each location as a set of example views that capture how the location changes with time. We experimentally demonstrate that the proposed methods enable long-term SLAM in dynamic environments using a large, real-world vision and LIDAR dataset collected over the course of more than a year. This dataset captures a wide variety of dynamics: from short-term scene changes including moving people, cars, changing lighting, and weather conditions; to long-term dynamics including seasonal conditions and structural changes caused by construction.
    PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/111538/1/carlevar_1.pd
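    For reference, the probabilistic optimization the abstract describes can be sketched as a small least-squares problem on a 1-D pose chain; the odometry values, loop-closure measurement, and noise levels below are invented for illustration and are not from the thesis.

```python
# Rough sketch of SLAM as maximum-likelihood estimation: find the most
# likely trajectory given odometry and loop-closure constraints, here as
# linear least squares on a 1-D pose chain with made-up measurements.
import numpy as np

odometry = [1.0, 1.1, 0.9, 1.0]          # measured relative motion x_{i+1} - x_i
loop_closures = [(0, 4, 4.05)]           # (i, j, measured x_j - x_i)
sigma_odo, sigma_loop = 0.1, 0.05        # assumed measurement noise

n = len(odometry) + 1
A, b = [], []
A.append(np.eye(n)[0]); b.append(0.0)    # anchor the first pose at 0
for i, d in enumerate(odometry):
    row = np.zeros(n); row[i + 1], row[i] = 1, -1
    A.append(row / sigma_odo); b.append(d / sigma_odo)
for i, j, d in loop_closures:
    row = np.zeros(n); row[j], row[i] = 1, -1
    A.append(row / sigma_loop); b.append(d / sigma_loop)

x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
print(x)                                  # maximum-likelihood pose estimates
```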