
    Autonomous navigation with constrained consistency for C-Ranger

    Autonomous underwater vehicles (AUVs) have become the most widely used tools for undertaking complex exploration tasks in marine environments. Their ability to localize autonomously while concurrently building a map of the environment, in other words, simultaneous localization and mapping (SLAM), is considered a pivotal requirement for truly autonomous navigation. However, the consistency of the SLAM system has been largely ignored over the past decades. In this paper, a consistency-constrained extended Kalman filter (EKF) SLAM algorithm, applying the idea of local consistency, is proposed and applied to the autonomous navigation of the C-Ranger AUV, which was developed as our experimental platform. The concept of local consistency (LC) is introduced after an explicit theoretical derivation of the EKF-SLAM system. Then, we present a locally consistency-constrained EKF-SLAM design, LC-EKF, in which the landmark estimates used for linearization are fixed at the beginning of each local time period, rather than evaluated at the latest landmark estimates. Finally, the proposed LC-EKF algorithm is verified experimentally, in both simulations and sea trials. The results show that the LC-EKF performs well with regard to consistency, accuracy and computational efficiency.
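
    The local-consistency idea described in this abstract can be illustrated with a minimal sketch: inside each local time period the landmark values used to evaluate the measurement Jacobian are frozen, while the residual still uses the latest estimates. The sketch below assumes a hypothetical planar EKF-SLAM state [x, y, theta, l1x, l1y, ...] with range-bearing measurements; names such as LCEKF and local_period are illustrative assumptions, not taken from the paper.

    import numpy as np

    class LCEKF:
        """EKF-SLAM update with landmark linearization points fixed per local period (illustrative)."""

        def __init__(self, x0, P0, local_period=50):
            self.x, self.P = np.asarray(x0, float), np.asarray(P0, float)
            self.local_period = local_period   # steps per local time period (assumed parameter)
            self.step = 0
            self.lin_points = {}               # landmark id -> fixed linearization point

        def _lin_point(self, lid, idx):
            # (Re)fix the landmark linearization point at the start of each local
            # period; reuse the stored value inside the period.
            if lid not in self.lin_points or self.step % self.local_period == 0:
                self.lin_points[lid] = self.x[idx:idx + 2].copy()
            return self.lin_points[lid]

        def update_range_bearing(self, lid, idx, z, R):
            px, py, th = self.x[0], self.x[1], self.x[2]
            # The residual uses the latest estimates ...
            dx, dy = self.x[idx] - px, self.x[idx + 1] - py
            z_hat = np.array([np.hypot(dx, dy), np.arctan2(dy, dx) - th])
            # ... while the Jacobian is evaluated at the fixed linearization point.
            lx0, ly0 = self._lin_point(lid, idx)
            dx0, dy0 = lx0 - px, ly0 - py
            q0 = dx0**2 + dy0**2
            r0 = np.sqrt(q0)
            H = np.zeros((2, self.x.size))
            H[:, 0:3] = [[-dx0 / r0, -dy0 / r0, 0.0],
                         [ dy0 / q0, -dx0 / q0, -1.0]]
            H[:, idx:idx + 2] = [[ dx0 / r0,  dy0 / r0],
                                 [-dy0 / q0,  dx0 / q0]]
            y = z - z_hat
            y[1] = np.arctan2(np.sin(y[1]), np.cos(y[1]))  # wrap bearing residual
            S = H @ self.P @ H.T + R
            K = self.P @ H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(self.x.size) - K @ H) @ self.P
            self.step += 1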

    Towards consistent visual-inertial navigation

    Visual-inertial navigation systems (VINS) have prevailed in various applications, in part because of their complementary sensing capabilities and decreasing cost and size. While many current VINS algorithms suffer from inconsistent estimation, in this paper we introduce a new extended Kalman filter (EKF)-based approach towards consistent estimates. To this end, we impose both state-transition and observability constraints when computing the EKF Jacobians so that the resulting linearized system best approximates the underlying nonlinear system. Specifically, we enforce the propagation Jacobian to obey the semigroup property, thus being an appropriate state-transition matrix. This is achieved by parametrizing the orientation error state in the global, instead of local, frame of reference, and then evaluating the Jacobian at the propagated, instead of the updated, state estimates. Moreover, the EKF linearized system ensures correct observability by projecting the most accurate measurement Jacobian onto the observable subspace so that no spurious information is gained. The proposed algorithm is validated by both Monte Carlo simulations and real-world experimental tests. Funding: United States Office of Naval Research (N00014-12-1-0093, N00014-10-1-0936, N00014-11-1-0688, N00014-13-1-0588); National Science Foundation (Grant IIS-1318392).
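
    As a rough illustration of the observability-constraint step mentioned in this abstract, the hedged sketch below projects a measurement Jacobian onto the observable subspace so that no information is gained along the unobservable directions. The nullspace basis N (for VINS, spanning global position and yaw) is assumed to be given; constructing it is system-specific and omitted here.

    import numpy as np

    def project_onto_observable_subspace(H, N):
        """Return H' with H' @ N = 0: the filter gains no information along the
        unobservable directions spanned by the columns of N."""
        # Orthogonal projector onto the complement of span(N): I - N (N^T N)^-1 N^T
        P_perp = np.eye(N.shape[0]) - N @ np.linalg.inv(N.T @ N) @ N.T
        return H @ P_perp

    # Tiny usage check with an arbitrary Jacobian and a 1-D unobservable direction.
    H = np.random.randn(2, 6)
    N = np.random.randn(6, 1)
    print(np.allclose(project_onto_observable_subspace(H, N) @ N, 0.0))  # True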

    KD-EKF: A Consistent Cooperative Localization Estimator Based on Kalman Decomposition

    In this paper, we revisit the inconsistency problem of EKF-based cooperative localization (CL) from the perspective of system decomposition. By transforming the linearized system used by the standard EKF into its Kalman observable canonical form, the observable and unobservable components of the system are separated. Consequently, the factors causing the dimension reduction of the unobservable subspace are explicitly isolated in the state-propagation and measurement Jacobians of the Kalman observable canonical form. Motivated by these insights, we propose a new CL algorithm, called KD-EKF, which aims to enhance consistency. The key idea behind the KD-EKF algorithm is to perform state estimation in the transformed coordinates so as to eliminate the factors that affect observability in the Kalman observable canonical form. As a result, the KD-EKF algorithm ensures correct observability properties and consistency. We extensively verify the effectiveness of the KD-EKF algorithm through both Monte Carlo simulations and real-world experiments. The results demonstrate that the KD-EKF outperforms state-of-the-art algorithms in terms of accuracy and consistency.
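
    For intuition about the Kalman observable canonical form referenced in this abstract, the sketch below builds the observability matrix of a linearized pair (A, H), takes an orthonormal basis that separates observable and unobservable coordinates, and transforms the system accordingly. The SVD-based construction is an illustrative choice on my part, not necessarily the paper's exact transformation; in the transformed coordinates, the columns of H_t over the unobservable block are numerically zero.

    import numpy as np

    def kalman_observable_form(A, H, tol=1e-10):
        """Decompose a linear(ized) pair (A, H) into a Kalman observable form (illustrative)."""
        n = A.shape[0]
        # Observability matrix O = [H; HA; ...; HA^(n-1)]
        O = np.vstack([H @ np.linalg.matrix_power(A, k) for k in range(n)])
        _, s, Vt = np.linalg.svd(O)
        r = int(np.sum(s > tol))   # dimension of the observable subspace
        T = Vt                     # first r rows: observable basis; remaining rows: unobservable basis
        A_t = T @ A @ T.T          # T is orthogonal, so T^-1 = T^T
        H_t = H @ T.T
        return A_t, H_t, T, r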

    On State Estimation in Multi-Sensor Fusion Navigation: Optimization and Filtering

    The essence of navigation, perception, and decision-making, which are basic tasks for intelligent robots, is to estimate the necessary system states. Among them, navigation is fundamental for the other, higher-level applications, providing precise position and orientation by integrating measurements from multiple sensors. With the observations of each sensor appropriately modelled, multi-sensor fusion for navigation reduces to a state estimation problem, which can be solved by two approaches: optimization and filtering. Recent research has shown that optimization-based frameworks outperform filtering-based ones in terms of accuracy. However, both methods are based on maximum likelihood estimation (MLE) and should be theoretically equivalent given the same linearization points, observation model, measurements, and Gaussian noise assumption. In this paper, we dig deeply into the theories and existing strategies used in both optimization-based and filtering-based approaches. It is demonstrated that the two methods are theoretically equal, but that this equivalence breaks down due to the different strategies applied in real-time operation. By adjusting the existing strategies of the filtering-based approaches, Monte Carlo simulations and vehicular ablation experiments based on visual odometry (VO) indicate that the strategy-adjusted filtering is strictly equivalent to optimization. Therefore, future research on sensor-fusion problems should concentrate on their own algorithms and strategies rather than on the choice of state estimation approach.
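
    The claimed equivalence can be checked numerically in the linear-Gaussian case: solving the MLE cost ||x - x_prior||^2_{P^-1} + ||z - H x||^2_{R^-1} by its normal equations gives the same posterior mean as a Kalman filter update. The sketch below uses arbitrary illustrative values and is only a small sanity check, not the paper's experiment.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 3, 2
    x_prior = rng.standard_normal(n)
    P = np.diag([0.5, 0.8, 1.2])
    H = rng.standard_normal((m, n))
    R = np.diag([0.1, 0.2])
    z = rng.standard_normal(m)

    # Optimization view: normal equations of the MLE cost
    A = np.linalg.inv(P) + H.T @ np.linalg.inv(R) @ H
    b = np.linalg.inv(P) @ x_prior + H.T @ np.linalg.inv(R) @ z
    x_opt = np.linalg.solve(A, b)

    # Filtering view: standard Kalman update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_kf = x_prior + K @ (z - H @ x_prior)

    print(np.allclose(x_opt, x_kf))   # True: identical under identical linearization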

    Know What You Don't Know: Consistency in Sliding Window Filtering with Unobservable States Applied to Visual-Inertial SLAM (Extended Version)

    Estimation algorithms, such as the sliding window filter, produce an estimate and uncertainty of the desired states. This task becomes challenging when the problem involves unobservable states. In these situations, it is critical for the algorithm to ``know what it doesn't know'', meaning that it must keep the unobservable states unobservable during algorithm deployment. This letter presents general requirements for maintaining consistency in sliding window filters involving unobservable states. The value of these requirements when designing a navigation solution is shown experimentally in the context of visual-inertial SLAM using IMU preintegration. Comment: Main paper submitted to RA-L; the main paper has 8 pages and 4 figures, and the supplemental materials (6 pages, 0 figures) follow the main paper.
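
    One such requirement can be stated operationally: across the window, the measurement Jacobians must annihilate the nullspace (unobservable directions) that the state-transition Jacobians propagate, otherwise spurious information is gained. The small check below is a hedged sketch under that reading; shapes and names are illustrative, not the letter's exact formulation.

    import numpy as np

    def window_preserves_nullspace(Phis, Hs, N0, tol=1e-9):
        """Phis: state-transition Jacobians, Hs: measurement Jacobians (one per step),
        N0: nullspace basis at the start of the window."""
        N = N0
        for Phi, H in zip(Phis, Hs):
            if np.max(np.abs(H @ N)) > tol:   # spurious information would be gained here
                return False
            N = Phi @ N                       # propagate the unobservable directions
        return True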

    Iterated D-SLAM map joining: Evaluating its performance in terms of consistency, accuracy and efficiency

    This paper presents a new map joining algorithm and a set of metrics for evaluating the performance of mapping techniques. The input to the new map joining algorithm is a sequence of local maps containing the feature positions and the final robot pose in a local frame of reference. The output is a global map containing the global positions of all the features, but without any robot poses. The algorithm is built on the D-SLAM mapping algorithm (Wang et al. in Int. J. Robot. Res. 26(2):187-204, 2007) and uses iterations to improve the estimates in the map joining step; it is therefore called Iterated D-SLAM Map Joining (I-DMJ). When joining maps, I-DMJ ignores the odometry information connecting successive maps. This is the key to I-DMJ's efficiency, because it makes the information matrix exactly sparse and the size of the state vector bounded by the number of features. The paper proposes metrics for quantifying the performance of different mapping algorithms, focusing on evaluating their consistency, accuracy and efficiency. The I-DMJ algorithm and a number of existing SLAM algorithms are evaluated using the proposed metrics. The simulation data sets and a preprocessed Victoria Park data set used in this paper are made available to enable interested researchers to compare their mapping algorithms with I-DMJ. © 2009 Springer Science+Business Media, LLC
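
    The geometric core of feature-based map joining, as described in this abstract, is to express each local map's features in the global frame using the estimated pose of that local frame's origin (the previous map's final robot pose). The sketch below shows only this planar transformation; the statistical fusion and iteration that define I-DMJ are not reproduced here.

    import numpy as np

    def transform_local_map(features_local, origin_pose_global):
        """features_local: (N, 2) feature positions in the local frame.
        origin_pose_global: (x, y, theta) of the local frame expressed in the global frame."""
        x, y, th = origin_pose_global
        R = np.array([[np.cos(th), -np.sin(th)],
                      [np.sin(th),  np.cos(th)]])
        return features_local @ R.T + np.array([x, y])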

    Keyframe-based visual–inertial odometry using nonlinear optimization

    Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual–inertial odometry or simultaneous localization and mapping (SLAM). While historically the problem has been addressed with filtering, advances in visual estimation suggest that nonlinear optimization offers superior accuracy while remaining tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable, thus ensuring real-time operation, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals, while still being related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual–inertial hardware, which accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both a stereo and a monocular version of our algorithm, with and without online extrinsics estimation, is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic cloning sliding-window filter. This competitive reference implementation performs tightly coupled filtering-based visual–inertial odometry. While our approach admittedly demands more computation, we show its superior performance in terms of accuracy.
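
    The marginalization step mentioned in this abstract is commonly realized, in general sliding-window least-squares terms, as a Schur complement on the Hessian and gradient of the linearized problem: old states are dropped while their information is retained as a prior on the remaining states. The sketch below assumes this generic formulation, with the states to marginalize placed in the leading block; it is not the paper's specific implementation.

    import numpy as np

    def marginalize(H, b, m):
        """H: information (Hessian) matrix, b: gradient vector, m: number of leading
        states to marginalize. Returns the reduced (H, b) acting on the remaining states."""
        H_mm, H_mr = H[:m, :m], H[:m, m:]
        H_rm, H_rr = H[m:, :m], H[m:, m:]
        b_m, b_r = b[:m], b[m:]
        H_mm_inv = np.linalg.inv(H_mm)            # assumes the marginalized block is well conditioned
        H_red = H_rr - H_rm @ H_mm_inv @ H_mr     # Schur complement
        b_red = b_r - H_rm @ H_mm_inv @ b_m
        return H_red, b_red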

    Improving SLAM with Drift Integration

    Localization without prior knowledge can be a difficult task for a vehicle. An answer to this problem lies in the Simultaneous Localization And Mapping (SLAM) approach, where a map of the surroundings is built while simultaneously being used for localization. However, SLAM algorithms tend to drift over time, making the localization inconsistent. In this paper, we propose to model the drift as a localization bias and to integrate it into a general architecture. The latter allows any feature-based SLAM algorithm to be used while taking advantage of the drift integration. Based on previous work, we extend the bias concept and propose a new architecture which drastically improves the performance of our method, both in terms of computational power and memory required. We validate this framework on real data under different scenarios. We show that taking the drift into account allows us to maintain consistency and improve the localization accuracy at almost no additional cost.
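
    One simple way to read the "drift as a localization bias" idea in this abstract is to augment the estimated state with a slowly varying bias: the SLAM output then observes pose plus bias, while an occasional drift-free measurement (for example a loop closure or an absolute fix) observes the pose alone, letting a filter separate the two. The sketch below is an illustrative model under that assumption, not the paper's architecture; H_slam, H_fix and kf_update are hypothetical names.

    import numpy as np

    # State: [px, py, bx, by]; SLAM measures (p + b), a drift-free fix measures p.
    H_slam = np.hstack([np.eye(2), np.eye(2)])          # z_slam = p + b
    H_fix  = np.hstack([np.eye(2), np.zeros((2, 2))])   # z_fix  = p

    def kf_update(x, P, z, H, R):
        """Standard Kalman update, reused for both measurement types."""
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P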