
    Online 3D Mapping and Localization System for Agricultural Robots

    For an intelligent agricultural robot to operate reliably on a large-scale farm, it is crucial to estimate its pose accurately. In large outdoor environments, 3D LiDAR is a preferred sensor. Urban and agricultural scenarios differ characteristically: the latter contains many poorly defined objects, such as grass and leafy trees, that generate noisy sensor signals. While state-of-the-art LiDAR-based state estimation methods, such as LiDAR odometry and mapping (LOAM), work well in urban scenarios, they fail in the agricultural domain. Hence, we propose a mapping and localization system that copes with challenging agricultural scenarios. Our system maintains a high-quality global map for subsequent reuse in relocalization or motion planning, avoiding an unnecessary repeated mapping process. Our experimental results show that we achieve comparable or better performance in state estimation, localization, and map quality when compared to LOAM.

    A Survey on Global LiDAR Localization

    Knowledge of its own pose is key for all mobile robot applications; thus, pose estimation is part of the core functionality of mobile robots. In the last two decades, LiDAR scanners have become a standard sensor for robot localization and mapping. This article surveys recent progress and advances in LiDAR-based global localization. We start with the problem formulation and explore the application scope. We then present a methodology review covering various global localization topics, such as maps, descriptor extraction, and consistency checks. The contents are organized under three themes. The first is the combination of global place retrieval and local pose estimation. The second is upgrading single-shot measurements to sequential ones for sequential global localization. The third is extending single-robot global localization to cross-robot localization in multi-robot systems. We end this survey with a discussion of open challenges and promising directions in global LiDAR localization.

    Milli-RIO: Ego-Motion Estimation with Low-Cost Millimetre-Wave Radar

    Robust indoor ego-motion estimation has attracted significant interest in recent decades due to the fast-growing demand for location-based services in indoor environments. Among various solutions, frequency-modulated continuous-wave (FMCW) radar sensors in the millimeter-wave (MMWave) spectrum are gaining prominence due to intrinsic advantages such as penetration capability and high accuracy. Single-chip low-cost MMWave radar, as an emerging technology, provides an alternative and complementary solution for robust ego-motion estimation, and is feasible on resource-constrained platforms thanks to low power consumption and easy system integration. In this paper, we introduce Milli-RIO, an MMWave radar-based solution that uses a single-chip low-cost radar and an inertial measurement unit to estimate the six-degrees-of-freedom ego-motion of a moving radar. Detailed quantitative and qualitative evaluations show that the proposed method achieves precision on the order of a few centimeters for indoor localization tasks.
    Comment: Submitted to IEEE Sensors, 9 pages
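    The abstract does not spell out the paper's estimator, but the general pattern of fusing IMU propagation with radar-derived corrections can be illustrated with a minimal sketch: a single-axis linear Kalman filter in which an IMU acceleration sample drives the prediction and a radar-derived velocity measurement supplies the correction. The function name, state layout, and noise parameters below are hypothetical, not taken from Milli-RIO.

```python
import numpy as np

def fuse_imu_radar(accels, radar_vels, dt=0.1, accel_var=0.5, radar_var=0.05):
    """Minimal 1-axis Kalman filter: IMU acceleration propagates a
    [position, velocity] state; radar-derived velocity corrects it."""
    x = np.zeros(2)                         # state: [position, velocity]
    P = np.eye(2)                           # state covariance
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
    B = np.array([0.5 * dt**2, dt])         # acceleration (control) input
    H = np.array([[0.0, 1.0]])              # radar observes velocity only
    Q = accel_var * np.outer(B, B)          # process noise from IMU accel
    R = np.array([[radar_var]])             # radar velocity noise
    for a, v in zip(accels, radar_vels):
        # predict with the IMU acceleration sample
        x = F @ x + B * a
        P = F @ P @ F.T + Q
        # correct with the radar velocity measurement
        y = np.array([v]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    return x
```

    The real system estimates full 6-DoF motion and must handle radar noise and outliers, so this is only the skeleton of the prediction/correction loop, not the published method.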

    Scalable Estimation of Precision Maps in a MapReduce Framework

    This paper presents a large-scale strip adjustment method for LiDAR mobile mapping data, yielding highly precise maps. It uses several concepts to achieve scalability. First, an efficient graph-based pre-segmentation is used, which operates directly on LiDAR scan strip data rather than on point clouds. Second, observation equations are obtained from a dense matching, which is formulated as the estimation of a latent map. As a result of this formulation, the number of observation equations is linear, not quadratic, in the number of scan strips. Third, the dynamic Bayes network that results from all observation and condition equations is partitioned into two sub-networks. Consequently, the estimation matrices for all position and orientation corrections are linear instead of quadratic in the number of unknowns and can be solved very efficiently using an alternating least-squares approach. It is shown how this approach can be mapped to a standard key/value MapReduce implementation, where each processing node operates independently on small chunks of data, leading to essentially linear scalability. Results are demonstrated for a dataset of one billion measured LiDAR points and 278,000 unknowns, leading to maps with a precision of a few millimeters.
    Comment: ACM SIGSPATIAL'16, October 31 - November 3, 2016, Burlingame, CA, USA
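    The alternating least-squares pattern the abstract refers to, i.e. splitting the unknowns into two blocks and solving each in closed form while the other is held fixed, can be shown with a generic toy example; the rank-1 matrix factorization below is a standard ALS instance and is not the paper's strip-adjustment formulation.

```python
import numpy as np

def als_rank1(M, iters=50):
    """Alternating least squares for a rank-1 factorization M ~ u v^T.
    Illustrates the general ALS pattern: hold one block of unknowns
    fixed, solve the other block in closed form, and alternate."""
    rng = np.random.default_rng(0)
    u = rng.standard_normal(M.shape[0])
    v = rng.standard_normal(M.shape[1])
    for _ in range(iters):
        # with v fixed, the least-squares optimum for u is closed-form
        u = M @ v / (v @ v)
        # with u fixed, the least-squares optimum for v is closed-form
        v = M.T @ u / (u @ u)
    return u, v
```

    In the paper's setting the two blocks are the position and orientation corrections of the two sub-networks, and each block solve is distributed across MapReduce workers operating on independent chunks of scan-strip data.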