
    LOAM: Lidar Odometry and Mapping in Real-time

    Abstract — We propose a real-time method for odometry and mapping using range measurements from a 2-axis lidar moving in 6-DOF. The problem is hard because the range measurements are received at different times, and errors in motion estimation can cause mis-registration of the resulting point cloud. To date, coherent 3D maps can be built by off-line batch methods, often using loop closure to correct for drift over time. Our method achieves both low drift and low computational complexity without the need for high-accuracy ranging or inertial measurements. The key idea in obtaining this level of performance is the division of the complex problem of simultaneous localization and mapping, which seeks to optimize a large number of variables simultaneously, by two algorithms. One algorithm performs odometry at a high frequency but low fidelity to estimate velocity of the lidar. Another algorithm runs at a frequency an order of magnitude lower for fine matching and registration of the point cloud. Combination of the two algorithms allows the method to map in real-time. The method has been evaluated by a large set of experiments as well as on the KITTI odometry benchmark. The results indicate that the method can achieve accuracy at the level of state-of-the-art offline batch methods.
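    The abstract's central idea is a two-rate decomposition: a fast, coarse odometry loop and a slow, fine mapping loop. Below is a minimal sketch of that scheduling structure, assuming one point-cloud sweep per iteration; the function names and stubbed bodies are placeholders for illustration, not the authors' implementation of feature matching or registration.

    ```python
    from collections import deque

    MAP_EVERY = 10  # mapping runs an order of magnitude slower than odometry

    def estimate_motion(sweep, prev_sweep):
        # High-frequency, low-fidelity step: LOAM matches edge and planar
        # feature points between consecutive sweeps to estimate the lidar's
        # 6-DOF motion. Stubbed here with a zero transform.
        return (0.0, 0.0, 0.0, 0.0, 0.0, 0.0)

    def refine_and_register(sweeps, coarse_poses, world_map):
        # Low-frequency, high-fidelity step: fine matching and registration
        # of the accumulated, motion-compensated sweeps against the map,
        # correcting the drift of the coarse odometry. Stubbed.
        world_map.extend(sweeps)

    def run(sweep_stream):
        world_map, pending, poses = [], deque(), []
        prev = None
        for i, sweep in enumerate(sweep_stream):
            if prev is not None:
                poses.append(estimate_motion(sweep, prev))  # every sweep
            pending.append(sweep)
            prev = sweep
            if (i + 1) % MAP_EVERY == 0:  # an order of magnitude less often
                refine_and_register(list(pending), poses, world_map)
                pending.clear()
        return world_map
    ```

    Splitting the work this way is what lets the cheap loop keep up with the sensor rate while the expensive loop amortizes its cost over many sweeps, which is how the method maps in real time without batch optimization.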

    Simultaneous monocular visual odometry and depth reconstruction with scale recovery

    In this paper, we propose a deep neural network that can estimate camera poses and reconstruct the full-resolution depths of the environment simultaneously, using only monocular consecutive images. In contrast to traditional monocular visual odometry methods, which cannot estimate scaled depths, we here demonstrate the recovery of the scale information using a sparse depth image as a supervision signal in the training step. In addition, based on the scaled depth, the relative poses between consecutive images can be estimated using the proposed deep neural network. Another novelty lies in the deployment of view synthesis, which can synthesize a new image of the scene from a different view (camera pose) given an input image. View synthesis is the core technique used for constructing a loss function for the proposed neural network; it requires knowledge of the predicted depths and relative poses, such that the proposed method couples visual odometry and depth prediction together. In this way, both the estimated poses and the predicted depths from the neural network are scaled using the sparse depth image as the supervision signal during training. The experimental results on the KITTI dataset show the competitive performance of our method in handling challenging environments.
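    The coupling the abstract describes — depth and pose jointly constrained by a view-synthesis loss, with sparse depth injecting metric scale — can be sketched as follows. This is a generic inverse-warp formulation in PyTorch under standard pinhole-camera assumptions; the names (`inverse_warp`, `vo_depth_loss`), the loss weighting, and the network outputs feeding it are illustrative assumptions, not the paper's exact architecture.

    ```python
    import torch
    import torch.nn.functional as F

    def inverse_warp(src_img, depth, T, K):
        # src_img: (1,3,H,W) source frame; depth: (1,1,H,W) predicted target
        # depth; T: (4,4) relative pose target->source; K: (3,3) intrinsics.
        _, _, H, W = src_img.shape
        ys, xs = torch.meshgrid(
            torch.arange(H, dtype=src_img.dtype),
            torch.arange(W, dtype=src_img.dtype),
            indexing="ij",
        )
        pix = torch.stack([xs, ys, torch.ones_like(xs)], 0).reshape(3, -1)
        cam = (torch.inverse(K) @ pix) * depth.reshape(1, -1)  # back-project
        cam = torch.cat([cam, torch.ones(1, H * W)], 0)        # homogeneous
        proj = K @ (T @ cam)[:3]                               # into source view
        z = proj[2].clamp(min=1e-6)
        u, v = proj[0] / z, proj[1] / z
        # Normalize pixel coordinates to [-1, 1] for grid_sample.
        grid = torch.stack([2 * u / (W - 1) - 1, 2 * v / (H - 1) - 1], -1)
        return F.grid_sample(src_img, grid.reshape(1, H, W, 2),
                             align_corners=True)

    def vo_depth_loss(tgt_img, src_img, depth, T, K, sparse_depth, valid_mask):
        # Photometric view-synthesis loss: the warped source matches the
        # target only if both the predicted depth and relative pose are
        # consistent, which couples the two predictions.
        photo = (inverse_warp(src_img, depth, T, K) - tgt_img).abs().mean()
        # Sparse depth supervision on valid pixels: this term injects metric
        # scale into the depth and, through the shared warp, into the pose.
        diff = (depth - sparse_depth).abs() * valid_mask
        scale = diff.sum() / valid_mask.sum().clamp(min=1)
        return photo + scale
    ```

    Because the warp depends on depth and pose jointly, gradients from the photometric term flow into both predictions, and the sparse-depth term anchors their common scale, which is the mechanism the abstract claims for scale recovery.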