
    Complexity Analysis and Efficient Measurement Selection Primitives for High-Rate Graph SLAM

    Sparsity has been widely recognized as crucial for efficient optimization in graph-based SLAM. Because the sparsity and structure of the SLAM graph reflect the set of incorporated measurements, many methods for sparsification have been proposed in hopes of reducing computation. These methods often focus narrowly on reducing edge count without regard for structure at a global level. Such structurally-naive techniques can fail to produce significant computational savings, even after aggressive pruning. In contrast, simple heuristics such as measurement decimation and keyframing are known empirically to produce significant computation reductions. To demonstrate why, we propose a quantitative metric called elimination complexity (EC) that bridges the existing analytic gap between graph structure and computation. EC quantifies the complexity of the primary computational bottleneck: the factorization step of a Gauss-Newton iteration. Using this metric, we show rigorously that decimation and keyframing impose favorable global structures and therefore achieve computation reductions on the order of $r^2/9$ and $r^3$, respectively, where $r$ is the pruning rate. We additionally present numerical results showing EC provides a good approximation of computation in both batch and incremental (iSAM2) optimization and demonstrate that pruning methods promoting globally-efficient structure outperform those that do not.
    Comment: Pre-print accepted to ICRA 201
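
    As a rough illustration of the idea behind EC, the sketch below (a toy cost proxy on an assumed pose graph, not the paper's formal metric) symbolically eliminates a graph's vertices and charges degree-squared work per elimination, showing why the same edge count can yield very different factorization costs depending on global structure.

```python
# Toy proxy for elimination cost on a SLAM-style pose graph (illustration only).
# Eliminating a vertex of degree d connects its neighbors (fill-in) and costs
# roughly d^2 work, so total cost depends on global structure, not edge count.

def elimination_cost(adj, order):
    """Symbolically eliminate vertices in `order`; return sum of degree^2."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    cost = 0
    for v in order:
        nbrs = adj.pop(v)
        cost += len(nbrs) ** 2
        for u in nbrs:                           # fill-in among v's neighbors
            adj[u].discard(v)
            adj[u] |= nbrs - {u}
    return cost

# Assumed example: an odometry chain of 12 poses plus three loop closures.
n = 12
adj = {i: set() for i in range(n)}
edges = [(i, i + 1) for i in range(n - 1)] + [(0, 5), (3, 10), (6, 11)]
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

print("cost, sequential order:", elimination_cost(adj, list(range(n))))
```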

    Incremental Sparse GP Regression for Continuous-time Trajectory Estimation & Mapping

    Recent work on simultaneous trajectory estimation and mapping (STEAM) for mobile robots has found success by representing the trajectory as a Gaussian process. Gaussian processes can represent a continuous-time trajectory, elegantly handle asynchronous and sparse measurements, and allow the robot to query the trajectory to recover its estimated position at any time of interest. A major drawback of this approach is that STEAM is formulated as a batch estimation problem. In this paper we provide the critical extensions necessary to transform the existing batch algorithm into an extremely efficient incremental algorithm. In particular, we are able to vastly speed up the solution time through efficient variable reordering and incremental sparse updates, which we believe will greatly increase the practicality of Gaussian process methods for robot mapping and localization. Finally, we demonstrate the approach and its advantages on both synthetic and real datasets.
    Comment: 10 pages, 10 figures
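
    To make the "query the trajectory at any time" property concrete, here is a minimal GP regression sketch; the squared-exponential kernel, noise level, and timestamps are illustrative assumptions, not the paper's exact STEAM formulation or its incremental machinery.

```python
# Minimal GP trajectory sketch (assumed kernel/data, not the paper's model):
# asynchronous measurements in, posterior estimate at any query time out.
import numpy as np

def sq_exp(t1, t2, ell=1.0, sigma=1.0):
    """Squared-exponential kernel between two sets of timestamps."""
    d = t1[:, None] - t2[None, :]
    return sigma**2 * np.exp(-0.5 * (d / ell) ** 2)

t_meas = np.array([0.0, 0.7, 1.1, 2.5, 3.0])     # asynchronous, sparse times
y_meas = np.sin(t_meas)                          # assumed 1-D "position" data
t_query = np.linspace(0.0, 3.0, 7)               # query the trajectory anywhere

K = sq_exp(t_meas, t_meas) + 0.05**2 * np.eye(t_meas.size)  # + measurement noise
alpha = np.linalg.solve(K, y_meas)
mean = sq_exp(t_query, t_meas) @ alpha           # posterior mean at query times
print(mean)
```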

    Kernel solver design of FPGA-based real-time simulator for active distribution networks

    The field-programmable gate array (FPGA)-based real-time simulator takes advantage of many of the FPGA's merits, such as small time-steps, high simulation precision, rich I/O interface resources, and low cost. The sparse linear equations formed by the node conductance matrix must be solved repeatedly within each time-step, which places great demands on the performance of the real-time simulator. In this paper, a fine-grained solver for the FPGA-based real-time simulator for active distribution networks is designed to meet this computational demand. The framework of the solver, the offline process design on the PC, and the online process design on the FPGA are presented in detail. A modified IEEE 33-node system with photovoltaics is simulated on a 4-FPGA-based real-time simulator, and the simulation results are compared with PSCAD/EMTDC under the same conditions to validate the solver design.
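
    The repeated-solve structure such a simulator exploits can be sketched in a few lines: the node conductance matrix is fixed between topology changes, so it can be factorized once (offline) while each simulation time-step performs only cheap triangular solves (online). The 4-node conductance matrix below is an assumed toy example, not a model from the paper.

```python
# Offline/online split (toy 4-node conductance matrix, for illustration only).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

G = sp.csc_matrix([[ 3.0, -1.0,  0.0, -1.0],
                   [-1.0,  3.0, -1.0,  0.0],
                   [ 0.0, -1.0,  3.0, -1.0],
                   [-1.0,  0.0, -1.0,  3.0]])

lu = splu(G)                   # offline: factorize the conductance matrix once

for step in range(5):          # online: one solve per simulation time-step
    i_inj = np.ones(4) * step  # nodal current injections for this step
    v = lu.solve(i_inj)        # forward/backward substitution only
```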

    Stochastic Bundle Adjustment for Efficient and Scalable 3D Reconstruction

    Current bundle adjustment solvers such as the Levenberg-Marquardt (LM) algorithm are limited by the bottleneck of solving the Reduced Camera System (RCS), whose dimension is proportional to the number of cameras. When the problem is scaled up, this step becomes neither computationally efficient nor manageable on a single compute node. In this work, we propose a stochastic bundle adjustment algorithm that seeks to decompose the RCS approximately inside the LM iterations to improve efficiency and scalability. It first reformulates the quadratic programming problem of an LM iteration based on a clustering of the visibility graph, introducing equality constraints across clusters. Then, we propose to relax it into a chance-constrained problem and solve it through a sampled convex program. The relaxation is intended to eliminate the interdependence between clusters embodied by the constraints, so that a large RCS can be decomposed into independent linear sub-problems. Numerical experiments on unordered Internet image sets and sequential SLAM image sets, as well as distributed experiments on large-scale datasets, demonstrate the high efficiency and scalability of the proposed approach. Code is released at https://github.com/zlthinker/STBA.
    Comment: Accepted by ECCV 202
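
    For readers unfamiliar with the RCS, the sketch below builds it for a toy dense problem: eliminating the point block of the Gauss-Newton normal equations via a Schur complement leaves a system whose size scales with the number of cameras, which is the step the paper decomposes. All sizes and matrices here are assumed for illustration.

```python
# Toy Reduced Camera System via Schur complement (dense, for illustration;
# real BA solvers exploit the block-diagonal point block and sparsity).
import numpy as np

rng = np.random.default_rng(0)
nc, npt = 12, 30                        # assumed camera/point parameter counts
J = rng.standard_normal((60, nc + npt)) # stand-in Jacobian of reprojection errors
H = J.T @ J + 1e-3 * np.eye(nc + npt)   # damped Gauss-Newton normal matrix
g = J.T @ rng.standard_normal(60)

B, E, C = H[:nc, :nc], H[:nc, nc:], H[nc:, nc:]
S = B - E @ np.linalg.solve(C, E.T)     # RCS: its size grows with the cameras
rhs = g[:nc] - E @ np.linalg.solve(C, g[nc:])
dx_cam = np.linalg.solve(S, rhs)        # this solve is the scaling bottleneck
```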

    Reliability-based G1 Continuous Arc Spline Approximation

    In this paper, we present an algorithm to approximate a set of data points with G1 continuous arcs, using the points' covariance data. To the best of our knowledge, previous arc spline approximation approaches assumed that all data points contribute equally (i.e., have the same weights) during the approximation process. However, this assumption can cause serious instability in the algorithm if the collected data contains outliers. To resolve this issue, a robust method for arc spline approximation is proposed in this work, assuming that the 2D covariance of each data point is given. Starting with the definition of models and parameters for single-arc approximation, the framework is extended to multiple-arc approximation for general use. The proposed algorithm is then verified using generated noisy data and real-world data collected via a vehicle experiment in Sejong City, South Korea.
    Comment: 42 pages, 19 figures, submitted to Computer Aided Geometric Design
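
    The core idea, down-weighting uncertain points during fitting, can be sketched with a covariance-weighted algebraic circle fit (an arc lies on a circle). The isotropic per-point noise and the algebraic parameterization below are illustrative simplifications of the paper's full-2D-covariance, G1 multi-arc framework.

```python
# Covariance-weighted algebraic circle fit (isotropic noise, for illustration):
# x^2 + y^2 + a*x + b*y + c = 0, each residual scaled by 1/sigma_i.
import numpy as np

rng = np.random.default_rng(0)
theta = np.linspace(0.0, np.pi / 2, 20)
sigmas = rng.uniform(0.01, 0.3, theta.size)  # assumed per-point std devs
pts = np.c_[np.cos(theta), np.sin(theta)]    # quarter arc of the unit circle
pts += sigmas[:, None] * rng.standard_normal(pts.shape)

A = np.c_[pts, np.ones(theta.size)]          # columns for a, b, c
b = -(pts ** 2).sum(axis=1)
w = 1.0 / sigmas                             # sqrt of information weights
abc = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)[0]
center = -0.5 * abc[:2]
radius = np.sqrt(center @ center - abc[2])   # noisy points barely move the fit
```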

    Hypergraph-based Unsymmetric Nested Dissection Ordering for Sparse LU Factorization

    In this paper we present HUND, a hypergraph-based unsymmetric nested dissection ordering algorithm for reducing the fill-in incurred during Gaussian elimination. HUND has several important properties. It takes a global perspective of the entire matrix, as opposed to local heuristics. It takes into account the asymmetry of the input matrix by using a hypergraph to represent its structure. It is suitable for performing Gaussian elimination in parallel, with partial pivoting. This is possible because the row permutations performed due to partial pivoting do not destroy the column separators identified by the nested dissection approach. Experimental results on 27 medium- and large-size, highly unsymmetric matrices compare HUND to four other well-known reordering algorithms. The results show that HUND is a robust reordering algorithm, in the sense that it is the best or close to the best (often within $10\%$) of all the other methods.
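
    To see why a good ordering matters for sparse LU, the snippet below compares fill-in under the natural ordering and SciPy's built-in COLAMD on an assumed random sparse matrix; it illustrates the fill-reduction goal, not the HUND algorithm itself.

```python
# Fill-in under two column orderings (COLAMD stands in here; not HUND itself).
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 200
A = sp.random(n, n, density=0.02, random_state=1, format="csc")
A = A + n * sp.eye(n, format="csc")   # shift the diagonal: safely nonsingular

for order in ("NATURAL", "COLAMD"):
    lu = splu(A, permc_spec=order)    # fill-reducing column permutation
    print(order, "nnz(L)+nnz(U):", lu.L.nnz + lu.U.nnz)
```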