
    Robust Legged Robot State Estimation Using Factor Graph Optimization

    Legged robots, specifically quadrupeds, are becoming increasingly attractive for industrial applications such as inspection. However, leaving the laboratory and becoming useful to an end user requires reliability in harsh conditions. From the perspective of state estimation, it is essential to accurately estimate the robot's state despite challenges such as uneven or slippery terrain, textureless and reflective scenes, and dynamic camera occlusions. We are motivated to reduce the dependency on foot contact classifications, which fail when slipping, and to reduce position drift during dynamic motions such as trotting. To this end, we present a factor graph optimization method for state estimation which tightly fuses and smooths inertial navigation, leg odometry and visual odometry. The effectiveness of the approach is demonstrated using the ANYmal quadruped robot navigating in a realistic outdoor industrial environment. This experiment included trotting, walking, crossing obstacles and ascending a staircase. The proposed approach decreased the relative position error by up to 55% and the absolute position error by 76% compared to kinematic-inertial odometry. (8 pages, 12 figures. Accepted to RA-L + IROS 2019, July 2019.)
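
    The fusion this abstract describes can be pictured as a small pose graph. The sketch below, using GTSAM's Python bindings as a stand-in solver (an assumption; the paper does not name its implementation here), constrains each pair of consecutive robot states with both a leg-odometry and a visual-odometry relative-pose factor. Step sizes and noise levels are illustrative, and the preintegrated inertial factors of the real system are omitted for brevity.

        import numpy as np
        import gtsam

        X = lambda i: gtsam.symbol('x', i)          # keys for the robot poses

        graph = gtsam.NonlinearFactorGraph()
        prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-3))
        graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(), prior_noise))  # anchor the first pose

        # Hypothetical relative-pose measurements per modality (leg odometry noisier than VO).
        leg_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.05))
        vo_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.01))
        step = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(0.5, 0.0, 0.0))        # nominal 0.5 m forward step
        for i in range(4):
            graph.add(gtsam.BetweenFactorPose3(X(i), X(i + 1), step, leg_noise))
            graph.add(gtsam.BetweenFactorPose3(X(i), X(i + 1), step, vo_noise))

        initial = gtsam.Values()                     # deliberately perturbed initial guess
        for i in range(5):
            initial.insert(X(i), gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(0.4 * i, 0.1, 0.0)))

        result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
        for i in range(5):
            print(result.atPose3(X(i)).translation())  # smoothed trajectory along the x-axis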

    Keyframe-Based Visual-Inertial SLAM Using Nonlinear Optimization

    The fusion of visual and inertial cues has become popular in robotics due to the complementary nature of the two sensing modalities. While most fusion strategies to date rely on filtering schemes, the visual robotics community has recently turned to non-linear optimization approaches for tasks such as visual Simultaneous Localization And Mapping (SLAM), following the discovery that this comes with significant advantages in quality of performance and computational complexity. Following this trend, we present a novel approach to tightly integrate visual measurements with readings from an Inertial Measurement Unit (IMU) in SLAM. An IMU error term is integrated with the landmark reprojection error in a fully probabilistic manner, resulting in a joint non-linear cost function to be optimized. Employing the powerful concept of ‘keyframes’, we partially marginalize old states to maintain a bounded-size optimization window, ensuring real-time operation. Comparing against both vision-only and loosely-coupled visual-inertial algorithms, our experiments confirm the benefits of tight fusion in terms of accuracy and robustness.
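
    To make the joint cost function concrete, here is a toy 1-D analogue (not the paper's actual reprojection/IMU formulation): three "keyframe" positions and one landmark are estimated together by minimizing weighted visual and inertial residuals in a single nonlinear least-squares problem. All measurement values and weights below are invented.

        import numpy as np
        from scipy.optimize import least_squares

        # Toy joint cost: three "keyframe" positions p0..p2 and one landmark l.
        obs_visual = np.array([5.0, 4.1, 3.0])   # hypothetical range-like visual measurements of l - p_i
        obs_inertial = np.array([1.0, 1.05])     # hypothetical integrated-IMU displacements p_{i+1} - p_i
        w_vis, w_imu = 1.0 / 0.2, 1.0 / 0.05     # inverse standard deviations (information weights)

        def residuals(x):
            p, l = x[:3], x[3]
            r_vis = w_vis * ((l - p) - obs_visual)         # visual (reprojection-like) terms
            r_imu = w_imu * (np.diff(p) - obs_inertial)    # inertial terms between consecutive states
            return np.concatenate([r_vis, r_imu, [p[0]]])  # anchor p0 = 0 to fix the gauge freedom

        sol = least_squares(residuals, x0=np.zeros(4))
        print("keyframes:", sol.x[:3], "landmark:", sol.x[3])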

    C-blox: A Scalable and Consistent TSDF-based Dense Mapping Approach

    In many applications, maintaining a consistent dense map of the environment is key to enabling robotic platforms to perform higher-level decision making. Several works have addressed the challenge of creating precise dense 3D maps from visual sensors providing depth information. However, during operation over longer missions, reconstructions can easily become inconsistent due to accumulated camera tracking error and delayed loop closure. Without explicitly addressing the problem of map consistency, recovery from such distortions tends to be difficult. We present a novel system for dense 3D mapping which addresses the challenge of building consistent maps while dealing with scalability. Central to our approach is the representation of the environment as a collection of overlapping TSDF subvolumes. These subvolumes are localized through feature-based camera tracking and bundle adjustment. Our main contribution is a pipeline for identifying stable regions in the map and fusing the contributing subvolumes, which allows us to reduce map growth while still maintaining consistency. We evaluate the proposed system on a publicly available dataset and a simulation engine, demonstrating its efficacy for building consistent and scalable maps. Finally, we demonstrate our approach running in real time on board a lightweight MAV. (8 pages, 5 figures, conference.)
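
    The subvolume idea can be sketched in a few lines. This toy version assumes subvolumes already aligned on a common grid (real c-blox subvolumes carry their own estimated poses and are integrated projectively from depth images), and the class and function names are hypothetical.

        import numpy as np

        class TsdfSubvolume:
            # Minimal aligned TSDF block: a stand-in for a camera-tracked subvolume.
            def __init__(self, shape=(8, 8, 8)):
                self.d = np.zeros(shape)   # truncated signed distance per voxel
                self.w = np.zeros(shape)   # accumulated integration weight per voxel

            def integrate(self, dist, weight):
                # Standard weighted running average used in TSDF fusion.
                new_w = self.w + weight
                self.d = (self.d * self.w + dist * weight) / np.maximum(new_w, 1e-9)
                self.w = new_w

        def fuse(a, b):
            # Merge two overlapping, already-aligned subvolumes into one stable region.
            out = TsdfSubvolume(a.d.shape)
            out.integrate(a.d, a.w)
            out.integrate(b.d, b.w)
            return out

        a, b = TsdfSubvolume(), TsdfSubvolume()
        a.integrate(np.full(a.d.shape, 0.2), np.ones(a.d.shape))
        b.integrate(np.full(b.d.shape, 0.1), 3 * np.ones(b.d.shape))
        print(fuse(a, b).d[0, 0, 0])   # -> 0.125, the weight-averaged distance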

    Monocular Parallel Tracking and Mapping with Odometry Fusion for MAV Navigation in Feature-Lacking Environments

    Presented at the IEEE/RSJ International Workshop on Vision-based Closed-Loop Control and Navigation of Micro Helicopters in GPS-denied Environments (IROS 2013), November 7, 2013, Tokyo, Japan. Despite recent progress, autonomous navigation on Micro Aerial Vehicles with a single frontal camera is still a challenging problem, especially in feature-lacking environments. On a mobile robot with a frontal camera, monoSLAM can fail when there are not enough visual features in the scene, or when the robot, with rotationally dominant motions, yaws away from a known map toward unknown regions. To overcome such limitations and increase responsiveness, we present a novel parallel tracking and mapping framework that is suitable for robot navigation by fusing visual data with odometry measurements in a principled manner. Our framework can cope with a lack of visual features in the scene and maintain robustness during pure camera rotations. We demonstrate our results on a dataset captured from the frontal camera of a quadrotor flying in a typical feature-lacking indoor environment.
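
    A minimal sketch of the tracking-side behaviour, assuming a toy planar pose and made-up thresholds: every pose is predicted with the odometry increment, and the visual correction is applied only when enough features are tracked, so the tracker coasts through feature-lacking scenes and rotation-dominant motions.

        import numpy as np

        def fuse_pose(odom_delta, visual_pose, prev_pose, n_features,
                      min_features=20, w_vis=0.8):
            # Toy planar pose [x, y, yaw]; additive composition and linear blending
            # stand in for proper SE(3) composition and probabilistic fusion.
            prediction = prev_pose + odom_delta
            if visual_pose is None or n_features < min_features:
                return prediction                    # feature-lacking or lost: trust odometry
            return (1.0 - w_vis) * prediction + w_vis * visual_pose

        prev = np.array([0.0, 0.0, 0.0])
        odom = np.array([0.10, 0.00, 0.05])          # hypothetical odometry increment
        vis = np.array([0.12, 0.01, 0.04])           # hypothetical pose from visual tracking
        print(fuse_pose(odom, vis, prev, n_features=50))   # vision available: blended estimate
        print(fuse_pose(odom, None, prev, n_features=3))   # pure rotation / few features: odometry only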

    Keyframe-based visual–inertial odometry using nonlinear optimization

    Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual–inertial odometry or simultaneous localization and mapping (SLAM). While historically the problem has been addressed with filtering, advancements in visual estimation suggest that nonlinear optimization offers superior accuracy while remaining tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable, ensuring real-time operation, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals, while still being related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual–inertial hardware, which accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both a stereo and a monocular version of our algorithm, with and without online extrinsics estimation, is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic cloning sliding-window filter. This competitive reference implementation performs tightly coupled filtering-based visual–inertial odometry. While our approach admittedly demands more computation, we show its superior performance in terms of accuracy.
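
    The partial marginalization used to bound the window is, at its core, a Schur complement on the information form of the problem. A minimal sketch, assuming the linearized states have already been stacked into an information matrix H and vector b (the `marginalize` helper below is hypothetical, not the paper's code):

        import numpy as np

        def marginalize(H, b, keep, drop):
            # Schur complement: eliminate the 'drop' states from the Gaussian in
            # information form (H, b), leaving an equivalent dense prior on 'keep'.
            Hkk, Hkd = H[np.ix_(keep, keep)], H[np.ix_(keep, drop)]
            Hdd_inv = np.linalg.inv(H[np.ix_(drop, drop)])
            H_new = Hkk - Hkd @ Hdd_inv @ Hkd.T
            b_new = b[keep] - Hkd @ Hdd_inv @ b[drop]
            return H_new, b_new

        H = np.array([[4.0, 1.0, 0.0],
                      [1.0, 3.0, 1.0],
                      [0.0, 1.0, 2.0]])
        b = np.array([1.0, 0.0, 0.5])
        H_new, b_new = marginalize(H, b, keep=[1, 2], drop=[0])
        # The kept states' solution is unchanged by the elimination:
        print(np.linalg.solve(H, b)[1:], np.linalg.solve(H_new, b_new))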

    A Large-Scale Inertial-Aided Visual Simultaneous Localization and Mapping (SLAM) System for Small Mobile Platforms

    In this dissertation we present a robust simultaneous localization and mapping scheme that can be deployed on a computationally limited, small unmanned aerial system. This is achieved by developing a keyframe-based algorithm that leverages the multiprocessing capacity of modern low-power mobile processors. The novelty of the algorithm lies in its design, which makes it robust against rapid exploration while keeping the computational time to a minimum: the time-critical components of the localization and mapping system are computed in parallel, utilizing the multiple cores of the processor. The algorithm uses a scale- and rotation-invariant, state-of-the-art binary descriptor for landmark description, making it suitable for compact large-scale map representation and robust tracking. This descriptor is also used in loop closure detection, making the algorithm efficient by eliminating any need for separate descriptors in a Bag of Words scheme. The effectiveness of the algorithm is demonstrated by performance evaluation on indoor and large-scale outdoor datasets. We demonstrate its efficiency and robustness through successful six-degree-of-freedom (6 DOF) pose estimation in challenging indoor and outdoor environments. The performance of the algorithm is validated on a quadcopter with onboard computation.
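
    The shared-descriptor design reduces loop closure to Hamming-distance matching on the same binary descriptors used for tracking. A toy sketch, assuming 256-bit descriptors packed as 32 bytes and made-up thresholds; the dissertation's actual matching and voting scheme is not reproduced here.

        import numpy as np

        def hamming(a, b):
            # Hamming distance between packed binary descriptors (uint8 arrays).
            return int(np.unpackbits(a ^ b).sum())

        def detect_loop(query_descs, keyframe_db, max_dist=40, min_votes=8):
            # Vote for the stored keyframe with the most close descriptor matches;
            # report it as a loop closure only if enough descriptors agree.
            best_kf, best_votes = None, 0
            for kf_id, descs in keyframe_db.items():
                votes = sum(min(hamming(q, d) for d in descs) <= max_dist
                            for q in query_descs)
                if votes > best_votes:
                    best_kf, best_votes = kf_id, votes
            return best_kf if best_votes >= min_votes else None

        rng = np.random.default_rng(0)
        db = {k: [rng.integers(0, 256, 32, dtype=np.uint8) for _ in range(30)]
              for k in range(3)}
        query = [d.copy() for d in db[1][:10]]   # revisit the place seen in keyframe 1
        print(detect_loop(query, db))            # -> 1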