
    Pose-graph SLAM sparsification using factor descent

    Since state-of-the-art simultaneous localization and mapping (SLAM) algorithms are not constant time, it is often necessary to reduce the problem size while retaining as much of the original graph's information content as possible. In graph SLAM, the problem is reduced by removing nodes and rearranging factors. This is normally addressed locally: after selecting a node to be removed, its Markov blanket sub-graph is isolated, the node is marginalized, and the dense result is sparsified. The aim of sparsification is to approximate the dense, non-relinearizable result of node marginalization with a new set of factors. Sparsification consists of two processes: building the topology of the new factors, and finding the optimal parameters that best approximate the original dense distribution. This best approximation can be obtained by minimizing the Kullback-Leibler divergence between the two distributions. For simple topologies such as Chow-Liu trees, the optimal solution has a closed form. However, a tree is often too sparse and produces poor approximations of the distribution. More populated topologies, in contrast, require nonlinear iterative optimization. In the present paper, the particularities of pose-graph SLAM are exploited to design new informative topologies and to apply the novel factor descent iterative optimization method for sparsification. Several experiments compare the proposed topology methods and factor descent optimization with state-of-the-art methods on synthetic and real datasets, with regard to approximation accuracy and computational cost.
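
    The objective above has a well-known closed form when both distributions are Gaussian, as they are after linearization. The NumPy sketch below evaluates that objective; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def gaussian_kl(mu_p, cov_p, mu_q, cov_q):
    """KL(p || q) between two multivariate Gaussians.

    Here p stands for the dense result of node marginalization and
    q for the sparse approximation built from the new factors, so
    minimizing this quantity over q's parameters is the
    sparsification objective described in the abstract.
    """
    k = mu_p.shape[0]
    cov_q_inv = np.linalg.inv(cov_q)
    diff = mu_q - mu_p
    _, logdet_p = np.linalg.slogdet(cov_p)  # log-determinants, for stability
    _, logdet_q = np.linalg.slogdet(cov_q)
    return 0.5 * (np.trace(cov_q_inv @ cov_p)
                  + diff @ cov_q_inv @ diff
                  - k
                  + logdet_q - logdet_p)
```

    For tree topologies such as Chow-Liu trees this minimization has a closed-form solution; for the denser topologies considered in the paper it must be driven down iteratively, which is where factor descent comes in.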

    Complexity Analysis and Efficient Measurement Selection Primitives for High-Rate Graph SLAM

    Sparsity has been widely recognized as crucial for efficient optimization in graph-based SLAM. Because the sparsity and structure of the SLAM graph reflect the set of incorporated measurements, many methods for sparsification have been proposed in hopes of reducing computation. These methods often focus narrowly on reducing edge count without regard for structure at a global level. Such structurally naive techniques can fail to produce significant computational savings, even after aggressive pruning. In contrast, simple heuristics such as measurement decimation and keyframing are known empirically to produce significant computation reductions. To demonstrate why, we propose a quantitative metric called elimination complexity (EC) that bridges the existing analytic gap between graph structure and computation. EC quantifies the complexity of the primary computational bottleneck: the factorization step of a Gauss-Newton iteration. Using this metric, we show rigorously that decimation and keyframing impose favorable global structures and therefore achieve computation reductions on the order of r^2/9 and r^3, respectively, where r is the pruning rate. We additionally present numerical results showing that EC provides a good approximation of computation in both batch and incremental (iSAM2) optimization, and demonstrate that pruning methods promoting globally efficient structure outperform those that do not.
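
    The link between graph structure and factorization cost can be made concrete with the classical elimination game, which simulates the fill-in produced by sparse Cholesky factorization. The sketch below is a generic illustration of the kind of quantity EC formalizes, not the paper's exact definition; the d^2 per-node cost model and all names are assumptions.

```python
def elimination_cost(adj, order):
    """Rough proxy for factorization work on a SLAM graph.

    adj:   dict mapping each node to the set of its neighbors
           (the sparsity pattern of the information matrix).
    order: an elimination ordering covering every node.

    Eliminating a node connects its remaining neighbors into a
    clique (fill-in) and is charged d^2 work for a front of size d,
    mimicking one column elimination in sparse Cholesky.
    """
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    total = 0
    for v in order:
        front = adj.pop(v)
        total += len(front) ** 2
        for u in front:
            adj[u].discard(v)
            adj[u] |= front - {u}  # add fill-in edges among the front
    return total

# A chain, the kind of global structure decimation and keyframing
# tend to preserve, stays cheap to eliminate:
chain = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(elimination_cost(chain, [0, 1, 2, 3]))  # 3: linear total work
```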

    Gradient descent for sparse rank-one matrix completion for crowd-sourced aggregation of sparsely interacting workers

    We consider worker skill estimation for the single-coin Dawid-Skene crowdsourcing model. In practice, skill estimation is challenging because worker assignments are sparse and irregular due to the arbitrary and uncontrolled availability of workers. We formulate skill estimation as a rank-one correlation-matrix completion problem, where the observed components correspond to the observed label correlations between workers. We show that the correlation matrix can be successfully recovered, and the skills are identifiable, if and only if the sampling matrix (of observed components) is irreducible and aperiodic. We then propose an efficient gradient descent scheme and show that the skill estimates converge to the desired global optimum for such sampling matrices. Our proof is original, and the results are surprising in light of the fact that even the weighted rank-one matrix factorization problem is NP-hard in general. Next, we derive sample complexity bounds for the noisy case in terms of spectral properties of the signless Laplacian of the sampling matrix. Our proposed scheme achieves state-of-the-art performance on a number of real-world datasets.
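
    A minimal NumPy sketch of gradient descent on the masked rank-one objective is below. It assumes observed correlations in C, a symmetric 0/1 sampling mask M with zero diagonal, and illustrative choices of step size, iteration count, and initialization; it is a sketch of the objective, not the authors' implementation.

```python
import numpy as np

def estimate_skills(C, M, lr=0.05, iters=2000, seed=0):
    """Gradient descent for sparse rank-one matrix completion.

    Minimizes || M * (s s^T - C) ||_F^2 over the skill vector s,
    where C holds observed pairwise label correlations between
    workers and M marks which worker pairs were actually observed.
    """
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    s = rng.uniform(0.1, 0.9, size=n)      # illustrative initialization
    for _ in range(iters):
        R = M * (np.outer(s, s) - C)       # residual on observed entries only
        s -= lr * 4.0 * (R @ s)            # gradient of the masked squared loss
    return s

# Toy usage: recover skills from a sparse mask; identifiability requires
# the mask to be irreducible and aperiodic, as stated in the abstract.
rng = np.random.default_rng(1)
true_s = rng.uniform(0.3, 0.9, size=6)
M = (rng.random((6, 6)) < 0.6).astype(float)
M = np.triu(M, 1); M = M + M.T             # symmetric, zero diagonal
C = np.outer(true_s, true_s)               # noiseless correlations
print(np.round(estimate_skills(C, M), 3), np.round(true_s, 3))
```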