3,796 research outputs found

    An incremental trust-region method for Robust online sparse least-squares estimation

    Many online inference problems in computer vision and robotics are characterized by probability distributions whose factor graph representations are sparse and whose factors are all Gaussian functions of error residuals. Under these conditions, maximum likelihood estimation corresponds to solving a sequence of sparse least-squares minimization problems in which additional summands are added to the objective function over time. In this paper we present Robust Incremental least-Squares Estimation (RISE), an incrementalized version of Powell's Dog-Leg trust-region method suitable for use in online sparse least-squares minimization. As a trust-region method, Powell's Dog-Leg enjoys excellent global convergence properties, and is known to be considerably faster than both Gauss-Newton and Levenberg-Marquardt when applied to sparse least-squares problems. Consequently, RISE maintains the speed of current state-of-the-art incremental sparse least-squares methods while providing superior robustness to objective function nonlinearities.
    United States. Office of Naval Research (Grants N00014-06-1-0043 and N00014-10-1-0936); United States. Air Force Research Laboratory (Contract FA8650-11-C-7137)

    RISE: An Incremental Trust-Region Method for Robust Online Sparse Least-Squares Estimation

    Many point estimation problems in robotics, computer vision, and machine learning can be formulated as instances of the general problem of minimizing a sparse nonlinear sum-of-squares objective function. For inference problems of this type, each input datum gives rise to a summand in the objective function, and therefore performing online inference corresponds to solving a sequence of sparse nonlinear least-squares minimization problems in which additional summands are added to the objective function over time. In this paper, we present Robust Incremental least-Squares Estimation (RISE), an incrementalized version of the Powell's Dog-Leg numerical optimization method suitable for use in online sequential sparse least-squares minimization. As a trust-region method, RISE is naturally robust to objective function nonlinearity and numerical ill-conditioning, and is provably globally convergent for a broad class of inferential cost functions (twice-continuously differentiable functions with bounded sublevel sets). Consequently, RISE maintains the speed of current state-of-the-art online sparse least-squares methods while providing superior reliability.
    United States. Office of Naval Research (Grants N00014-12-1-0093, N00014-11-1-0688, N00014-06-1-0043, and N00014-10-1-0936); United States. Air Force Research Laboratory (Contract FA8650-11-C-7137)
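    As an illustrative sketch of the trust-region machinery the two RISE abstracts above build on (a generic batch Powell's Dog-Leg step, not the incremental RISE algorithm itself), a single dog-leg step can be written as follows; the function name and the use of a dense Jacobian are assumptions for the toy setting:

```python
import numpy as np

def dogleg_step(J, r, delta):
    """One Powell's Dog-Leg step for minimizing 0.5 * ||r(x)||^2.

    J: Jacobian of the residual r at the current iterate.
    delta: trust-region radius.
    Returns a step combining the Gauss-Newton and Cauchy directions.
    """
    g = J.T @ r                                   # gradient of the objective
    h_gn = -np.linalg.lstsq(J, r, rcond=None)[0]  # Gauss-Newton step
    if np.linalg.norm(h_gn) <= delta:             # GN step fits in the region
        return h_gn
    alpha = (g @ g) / (np.linalg.norm(J @ g) ** 2)
    h_sd = -alpha * g                             # Cauchy (steepest-descent) step
    if np.linalg.norm(h_sd) >= delta:             # even the Cauchy step is too long:
        return (delta / np.linalg.norm(h_sd)) * h_sd  # truncate to the boundary
    # Otherwise walk from h_sd toward h_gn until hitting the boundary.
    d = h_gn - h_sd
    a, b = d @ d, 2 * (h_sd @ d)
    c = h_sd @ h_sd - delta ** 2
    beta = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return h_sd + beta * d
```

    The three branches are what give the method its trust-region robustness: when the Gauss-Newton step is untrustworthy (large relative to delta), the step falls back toward gradient descent.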

    Robust Incremental Smoothing and Mapping (riSAM)

    This paper presents a method for robust optimization for online incremental Simultaneous Localization and Mapping (SLAM). Due to the NP-hardness of data association in the presence of perceptual aliasing, tractable (approximate) approaches to data association will produce erroneous measurements. We require SLAM back-ends that can converge to accurate solutions in the presence of outlier measurements while meeting online efficiency constraints. Existing robust SLAM methods either remain sensitive to outliers, become increasingly sensitive to initialization, or fail to provide online efficiency. We present the robust incremental Smoothing and Mapping (riSAM) algorithm, a robust back-end optimizer for incremental SLAM based on Graduated Non-Convexity. We demonstrate on benchmarking datasets that our algorithm achieves online efficiency, outperforms existing online approaches, and matches or improves the performance of existing offline methods.
    Comment: Accepted to ICRA 202
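    The Graduated Non-Convexity idea underlying riSAM can be illustrated with a self-contained toy sketch (not the riSAM algorithm itself): a robust Geman-McClure cost is minimized by iteratively reweighted least squares while a control parameter mu is annealed from a strongly convex surrogate toward the original non-convex cost. The function name, the 1-D estimation problem, and the annealing schedule below are illustrative assumptions:

```python
import numpy as np

def gnc_geman_mcclure(measurements, c=1.0, mu0=64.0, iters=30):
    """Estimate a scalar location from measurements containing outliers
    using GNC with the Geman-McClure robust cost (toy 1-D illustration).

    c: inlier scale; mu0: initial convexity parameter, annealed to 1.
    Returns the robust estimate and the final per-measurement weights.
    """
    z = np.asarray(measurements, dtype=float)
    x = z.mean()                  # non-robust initialization
    mu = mu0
    w = np.ones_like(z)
    for _ in range(iters):
        r = z - x
        # GNC-GM weights: large residuals are down-weighted gently at
        # large mu (convex surrogate), sharply as mu -> 1 (true cost).
        w = (mu * c**2 / (r**2 + mu * c**2)) ** 2
        x = np.sum(w * z) / np.sum(w)   # weighted least-squares update
        mu = max(1.0, mu / 1.4)         # anneal toward the original cost
    return x, w

# Usage: four inliers near 2.0 plus one gross outlier at 50.0; the
# outlier's final weight is driven close to zero.
est, w = gnc_geman_mcclure([1.9, 2.0, 2.1, 2.05, 50.0])
```

    The graduation is what removes the initialization sensitivity mentioned in the abstract: at large mu the surrogate cost is nearly quadratic, so the non-robust starting point is harmless.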

    Robust Incremental SLAM under Constrained Optimization Formulation

    © 2016 IEEE. In this letter, we propose a constrained optimization formulation and a robust incremental framework for the simultaneous localization and mapping (SLAM) problem. The new SLAM formulation is derived from the nonlinear least squares (NLS) formulation by mathematically formulating loop-closure cycles as constraints. Under the constrained SLAM formulation, we study the robustness of an incremental SLAM algorithm against local minima and outliers as a constraint/loop-closure cycle selection problem. We find a constraint metric that can predict the objective function growth after including the constraint. By virtue of the constraint metric, we select constraints into the incremental SLAM according to a least-objective-function-growth principle to increase robustness against local minima, and perform a χ² difference test on the constraint metric to increase robustness against outliers. Finally, using sequential quadratic programming (SQP) as the solver, an incremental SLAM algorithm (iSQP) is proposed. Experimental validations are provided to illustrate the accuracy of the constraint metric and the robustness of the proposed incremental SLAM algorithm. Nonetheless, the proposed approach is currently confined to datasets with sparse loop-closures due to its computational cost.
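    The χ² difference test used above for outlier rejection can be sketched generically (this is an illustration of the statistical test, not the iSQP implementation; the function name is an assumption, and `scipy.stats.chi2` supplies the threshold):

```python
from scipy.stats import chi2

def passes_chi2_difference_test(obj_before, obj_after, dof, alpha=0.05):
    """Accept a candidate loop closure only if the growth of the
    weighted least-squares objective after including it is consistent
    with measurement noise: growth must not exceed the chi-squared
    quantile at level alpha with dof degrees of freedom (the
    dimension of the constraint, e.g. 3 for a planar x, y, theta pose).
    """
    growth = obj_after - obj_before
    return growth <= chi2.ppf(1.0 - alpha, dof)

# Usage: a 3-DoF loop closure whose inclusion raises the objective by
# 2.1 is plausible under noise and is accepted; one raising it by 40.0
# is flagged as an outlier and rejected.
```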

    A survey of localization in wireless sensor network

    Localization is one of the key techniques in wireless sensor networks. Location estimation methods can be classified into target/source localization and node self-localization. For target localization, we mainly introduce the energy-based method; we then investigate node self-localization methods. Because wireless sensor networks are deployed in widely varying applications, the appropriate localization method differs from one application to another, and several special scenarios pose additional challenges. In this paper, we present a comprehensive survey of these challenges: localization in non-line-of-sight conditions, node selection criteria for localization in energy-constrained networks, scheduling sensor nodes to optimize the trade-off between localization performance and energy consumption, cooperative node localization, and localization algorithms in heterogeneous networks. Finally, we introduce the evaluation criteria for localization in wireless sensor networks.
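    As a concrete example of the range-based node self-localization techniques such a survey covers (a standard linearized trilateration sketch, not taken from the survey itself; the function name is an assumption), subtracting the first anchor's range equation from the others turns the nonconvex equations ||x − a_i||² = d_i² into a linear least-squares system:

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Linear least-squares position estimate from range measurements
    to anchor nodes at known positions.

    Expanding ||x - a_i||^2 = d_i^2 and subtracting the i = 0 equation
    cancels the quadratic term ||x||^2, leaving the linear system
    2 (a_i - a_0) . x = d_0^2 - d_i^2 + ||a_i||^2 - ||a_0||^2.
    """
    a = np.asarray(anchors, dtype=float)
    d = np.asarray(ranges, dtype=float)
    A = 2.0 * (a[1:] - a[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(a[1:] ** 2, axis=1) - np.sum(a[0] ** 2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Usage: three anchors and noise-free ranges from a node at (1, 2).
```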

    A convex relaxation for approximate global optimization in simultaneous localization and mapping

    Modern approaches to simultaneous localization and mapping (SLAM) formulate the inference problem as a high-dimensional but sparse nonconvex M-estimation, and then apply general first- or second-order smooth optimization methods to recover a local minimizer of the objective function. The performance of any such approach depends crucially upon initializing the optimization algorithm near a good solution for the inference problem, a condition that is often difficult or impossible to guarantee in practice. To address this limitation, in this paper we present a formulation of the SLAM M-estimation with the property that, by expanding the feasible set of the estimation program, we obtain a convex relaxation whose solution approximates the globally optimal solution of the SLAM inference problem and can be recovered using a smooth optimization method initialized at any feasible point. Our formulation thus provides a means to obtain a high-quality solution to the SLAM problem without requiring high-quality initialization.
    Google (Firm) (Software Engineering Internship); United States. Office of Naval Research (Grants N00014-10-1-0936, N00014-11-1-0688, and N00014-13-1-0588); National Science Foundation (U.S.) (Award IIS-1318392)
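    The relax-then-solve mechanism described above can be illustrated on a toy problem (this sketch is generic, not the paper's SLAM relaxation; the function name and problem are assumptions): a nonconvex unit-norm constraint, analogous to parametrizing a rotation as a unit vector, is relaxed by expanding the feasible set to all of R^n, yielding an ordinary convex least-squares problem solvable from any starting point, whose minimizer is then projected back onto the original feasible set:

```python
import numpy as np

def relax_and_project(A, b):
    """Toy convex relaxation: approximately minimize ||A x - b||^2 over
    the nonconvex set {x : ||x|| = 1} by (1) expanding the feasible set
    to all of R^n, which makes the problem convex, then (2) projecting
    the relaxed minimizer back onto the unit sphere.

    When the relaxed minimizer lies near the sphere, the projected
    point is a good approximation of the nonconvex global optimum.
    """
    x_relaxed = np.linalg.lstsq(A, b, rcond=None)[0]  # convex step
    return x_relaxed / np.linalg.norm(x_relaxed)      # projection step

# Usage: if b is generated from a unit vector, the relaxation recovers
# that vector exactly, regardless of initialization.
```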