
    Planetary Rover Localization Within Orbital Maps

    This paper introduces an advanced rover localization system suitable for autonomous planetary exploration in the absence of Global Positioning System (GPS) infrastructure. Given an existing terrain map (image and elevation) obtained from satellite imagery and the images provided by the rover stereo camera system, the proposed method determines the best rover location through visual odometry, 3D terrain matching, and horizon matching. The system is tested on data retrieved from a 3 km traverse of the Basalt Hills quarry in California, where the GPS track is used as ground truth. Experimental results show that the system presented here reduces the localization error obtained from wheel odometry by over 60%.
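    As a rough illustration of the horizon-matching component, the sketch below renders a horizon elevation profile from an orbital elevation map and scores candidate rover cells against the profile extracted from the rover's images. All function names are hypothetical and the ray-marching is deliberately simplified; the paper's actual pipeline also fuses visual odometry and 3D terrain matching.

```python
import numpy as np

def horizon_profile_from_dem(dem, cell_size, x, y, n_azimuths=360):
    """Render the horizon elevation-angle profile seen from DEM cell (x, y).

    For each azimuth, walk outward along a ray and keep the maximum
    elevation angle. (Hypothetical helper; the paper's rendering is more
    involved.)
    """
    h0 = dem[y, x]
    profile = np.full(n_azimuths, -np.pi / 2)
    for i, az in enumerate(np.linspace(0, 2 * np.pi, n_azimuths, endpoint=False)):
        dx, dy = np.cos(az), np.sin(az)
        for r in range(1, 200):  # limited ray length for the sketch
            px, py = int(x + r * dx), int(y + r * dy)
            if not (0 <= px < dem.shape[1] and 0 <= py < dem.shape[0]):
                break
            angle = np.arctan2(dem[py, px] - h0, r * cell_size)
            profile[i] = max(profile[i], angle)
    return profile

def best_location(observed_profile, dem, cell_size, candidates):
    """Pick the candidate cell whose rendered horizon best matches the
    profile extracted from the rover images (sum of squared differences)."""
    scores = [np.sum((horizon_profile_from_dem(dem, cell_size, x, y)
                      - observed_profile) ** 2) for x, y in candidates]
    return candidates[int(np.argmin(scores))]
```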

    A minimalistic approach to appearance-based visual SLAM

    This paper presents a vision-based approach to SLAM in indoor/outdoor environments with minimalistic sensing and computational requirements. The approach is based on a graph representation of robot poses, using a relaxation algorithm to obtain a globally consistent map. Each link corresponds to a relative measurement of the spatial relation between the two nodes it connects, and describes the likelihood of the relative pose as a Gaussian distribution. To estimate the covariance matrix for links obtained from an omnidirectional vision sensor, a novel method is introduced based on the relative similarity of neighbouring images. This method does not require determining distances to image features through, for example, multiple-view geometry. Combined indoor and outdoor experiments demonstrate that the approach can handle qualitatively different environments without modification of the parameters, that it can cope with violations of the “flat floor assumption” to some degree, and that it scales well with increasing size of the environment, producing topologically correct and geometrically accurate maps at low computational cost. Further experiments demonstrate that the approach is also suitable for combining multiple overlapping maps, e.g. for solving the multi-robot SLAM problem with unknown initial poses.
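    The covariance-from-similarity idea can be sketched as follows: if the omnidirectional images around a node all look alike, the relative pose of a link is weakly constrained by appearance, so its uncertainty should grow. The scaling rule below is a hypothetical illustration, not the paper's actual formula.

```python
import numpy as np

def link_covariance(similarities, base_cov, eps=1e-6):
    """Scale a base covariance for a pose-graph link by how self-similar
    the local image neighbourhood is.

    `similarities`: appearance-similarity scores in [0, 1] between the
    node image and its neighbours. When neighbouring images look nearly
    identical (low distinctiveness), the returned covariance is inflated.
    """
    distinctiveness = 1.0 - float(np.mean(similarities))
    return base_cov / max(distinctiveness, eps)
```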

    Precise pose estimation of the NASA Mars 2020 Perseverance rover through a stereo-vision-based approach

    Visual Odometry (VO) is a fundamental technique for enhancing the navigation capabilities of planetary exploration rovers. By processing the images acquired during motion, VO methods estimate the relative position and attitude between navigation steps through the detection and tracking of two-dimensional (2D) image keypoints, mitigating the trajectory errors that dead-reckoning techniques accumulate under slippage conditions. We present here an independent analysis of the high-resolution stereo images of the NASA Mars 2020 Perseverance rover to retrieve its accurate localization on sols 65, 66, 72, and 120. The stereo pairs are processed using a 3D-to-3D stereo-VO approach that is based on consolidated techniques and accounts for the main nonlinear optical effects characterizing real cameras. The algorithm is first validated on rectified stereo images acquired by the NASA Mars Exploration Rover Opportunity, and then applied to the determination of Perseverance's path. Our reconstructed path is consistent with the telemetered trajectory, which was retrieved directly onboard the rover. The estimated pose is in full agreement with the archived rover position and attitude after short navigation steps, while significant differences (~10–30 cm) between the reconstructed and telemetered trajectories are observed when Perseverance traveled more than 1 m between the acquisition of stereo pairs.
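    The core of a 3D-to-3D stereo-VO step is estimating the rigid motion between two sets of matched keypoints triangulated from consecutive stereo pairs. Below is a minimal sketch of the classic SVD-based (Kabsch/Horn) solution; the authors' implementation additionally models nonlinear optical effects and presumably rejects outliers, both omitted here.

```python
import numpy as np

def rigid_transform_3d(P, Q):
    """Least-squares rigid motion (R, t) aligning point set P to Q, both
    (N, 3) arrays of matched 3D keypoints triangulated at two navigation
    steps, so that Q ~= R @ P + t (classic SVD/Kabsch solution)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                         # reflection-safe rotation
    t = cq - R @ cp
    return R, t
```

    In practice this solver would sit inside a RANSAC loop so that mismatched keypoints do not corrupt the motion estimate.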

    Skyline matching: absolute localisation for planetary exploration rovers

    Skyline matching is a technique for absolute localisation, framed in the category of autonomous long-range exploration. Absolute localisation is crucial for planetary exploration, either to recalibrate position during long traverses or to estimate position with no a priori information. In this project, a skyline matching algorithm is proposed, implemented, and evaluated on both real acquisitions and simulated data. The method compares the skyline extracted from rover images with the skyline derived from orbital data. The results are promising, but intensive testing on more real data is needed to further characterize the algorithm.
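    A minimal sketch of the comparison step, assuming both skylines are represented as elevation-angle profiles sampled over azimuth: since the rover heading may be unknown, the rendered profile is circularly shifted and the best-matching shift is kept. The sum-of-squares scoring is an assumption for illustration; the project's actual comparison function may differ.

```python
import numpy as np

def skyline_score(observed, rendered):
    """Compare an image-extracted skyline with one rendered from orbital
    data, searching over circular shifts (unknown heading). Both inputs
    are elevation-angle profiles sampled at the same azimuth resolution.
    Returns (best_shift, best_sse)."""
    n = len(observed)
    best = (0, np.inf)
    for s in range(n):
        sse = float(np.sum((np.roll(rendered, s) - observed) ** 2))
        if sse < best[1]:
            best = (s, sse)
    return best
```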

    Evaluation of 3D CNN Semantic Mapping for Rover Navigation

    Terrain assessment is a key capability for autonomous exploration rovers: recognition of the surrounding environment is required for multiple purposes, such as optimal trajectory planning and autonomous target identification. In this work we present a technique to generate accurate three-dimensional semantic maps of the Martian environment. The algorithm takes as input stereo images acquired by a camera mounted on a rover. First, the images are labeled with DeepLabv3+, an encoder-decoder Convolutional Neural Network (CNN). Then, the labels obtained by the semantic segmentation are combined with stereo depth maps in a voxel representation. We evaluate our approach on the ESA Katwijk Beach Planetary Rover Dataset.
    Comment: To be presented at the 7th IEEE International Workshop on Metrology for AeroSpace (MetroAeroSpace).
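    A minimal sketch of the label-depth fusion step, assuming a pinhole camera with known intrinsics (fx, fy, cx, cy): each labelled pixel is back-projected using the stereo depth map, and each voxel takes the majority label of the points that fall inside it. Rover pose handling and multi-frame fusion from the paper are omitted.

```python
import numpy as np
from collections import Counter, defaultdict

def semantic_voxels(depth, labels, fx, fy, cx, cy, voxel=0.2):
    """Fuse a per-pixel semantic labelling with a stereo depth map into a
    sparse voxel map keyed by integer voxel indices (camera frame)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = np.isfinite(z) & (z > 0)
    x = (u.ravel() - cx) / fx * z              # back-projection
    y = (v.ravel() - cy) / fy * z
    votes = defaultdict(Counter)
    for xi, yi, zi, li in zip(x[valid], y[valid], z[valid],
                              labels.ravel()[valid]):
        key = (int(xi // voxel), int(yi // voxel), int(zi // voxel))
        votes[key][int(li)] += 1               # label vote per voxel
    return {k: c.most_common(1)[0][0] for k, c in votes.items()}
```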

    System Development of an Unmanned Ground Vehicle and Implementation of an Autonomous Navigation Module in a Mine Environment

    There are numerous benefits to the insights gained from the exploration and exploitation of underground mines, but also great risks and challenges, such as accidents that have claimed many lives. To avoid these accidents, large mines have traditionally been inspected by miners, which is not always economically feasible and puts the inspectors' safety at risk. Despite progress in the development of robotic systems and of autonomous navigation, localization, and mapping algorithms, these environments remain particularly demanding for such systems. A successfully deployed autonomous unmanned system will allow mine workers to determine the structural integrity of the roof and pillars through the generation of high-fidelity 3D maps. These maps will allow miners to respond rapidly to any growing hazard with proactive measures, such as sending workers to build or rebuild support structures to prevent accidents. The objective of this research is the development, implementation, and testing of a robust unmanned ground vehicle (UGV) that can operate in mine environments for extended periods of time. To achieve this, a custom skid-steer four-wheeled UGV was designed for these challenging underground environments. To navigate autonomously, the UGV employs a Light Detection and Ranging (LiDAR) sensor and a tactical-grade inertial measurement unit (IMU) for localization and mapping through a tightly-coupled LiDAR-Inertial Odometry via Smoothing and Mapping (LIO-SAM) framework. The autonomous navigation module was implemented based upon fast likelihood-based collision avoidance, with an extension to human-guided navigation, and a terrain traversability analysis framework. To operate successfully and generate high-fidelity 3D maps, the system was rigorously tested in different environments and terrains to verify its robustness. To assess its capabilities, several localization, mapping, and autonomous navigation missions were carried out in a coal mine, allowing the system to be verified and tuned until it could autonomously navigate and generate high-fidelity maps.
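    As a rough illustration of the terrain traversability analysis mentioned above (not the actual framework used on this UGV), a local elevation grid can be thresholded on slope and step height; the limits below are illustrative, not the platform's.

```python
import numpy as np

def traversable(elev, cell=0.1, max_slope_deg=20.0, max_step=0.15):
    """Mark cells of a local elevation grid (metres) the vehicle may drive
    on: reject cells whose local slope or step height exceeds the limits."""
    gy, gx = np.gradient(elev, cell)                 # metres per metre
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))
    # step height: difference to the 3x3-neighbourhood minimum
    pad = np.pad(elev, 1, mode="edge")
    local_min = np.min([pad[i:i + elev.shape[0], j:j + elev.shape[1]]
                        for i in range(3) for j in range(3)], axis=0)
    step = elev - local_min
    return (slope <= max_slope_deg) & (step <= max_step)
```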

    Robust state estimation methods for robotics applications

    State estimation is an integral component of any autonomous robotic system. Knowing the correct position, velocity, and orientation of an agent in its environment enables other tasks such as mapping, interacting with the environment, and collaborating with other agents. State estimation is achieved by fusing data from multiple sensors in a probabilistic framework, including inertial data from an Inertial Measurement Unit (IMU), images from cameras, range data from lidars, and positioning data from Global Navigation Satellite System (GNSS) receivers. The main challenge in sensor-based state estimation is the presence of noisy, erroneous, or even uninformative data. Common examples include wrong feature matches between images or point clouds, false loop closures due to perceptual aliasing (different places that look similar can confuse the robot), dynamic objects in the environment (odometry algorithms assume a static environment), and multipath errors for GNSS (satellite signals bouncing off tall structures such as buildings before reaching the receiver). This work studies existing and new ways of making standard estimation algorithms, such as the Kalman filter and factor graphs, robust to such adverse conditions without losing performance in ideal outlier-free conditions. The first part of this work demonstrates the importance of robust Kalman filters for wheel-inertial odometry on high-slip terrain. Next, inertial data is integrated into GNSS factor graphs to improve their accuracy and robustness. Lastly, a combined framework for improving the robustness of non-linear least squares while estimating the inlier noise threshold is proposed and tested with point cloud registration and lidar-inertial odometry algorithms, followed by an algorithmic analysis of optimizing generalized robust cost functions with factor graphs for the GNSS positioning problem.
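    A minimal sketch of one common robustification in this vein: a Kalman measurement update that down-weights measurements with a large normalised innovation (e.g., a multipath-corrupted GNSS fix) using a Huber-style weight. The thesis studies several variants; this one is purely illustrative.

```python
import numpy as np

def robust_kf_update(x, P, z, H, R, delta=1.345):
    """Kalman measurement update with Huber-style down-weighting of the
    innovation: measurements whose Mahalanobis distance exceeds `delta`
    get inflated noise instead of full trust; inliers are unaffected."""
    y = z - H @ x                                    # innovation
    S = H @ P @ H.T + R                              # innovation covariance
    d = float(np.sqrt(y @ np.linalg.solve(S, y)))    # Mahalanobis distance
    w = 1.0 if d <= delta else delta / d             # Huber weight
    S_w = H @ P @ H.T + R / w                        # inflate noise if outlier
    K = P @ H.T @ np.linalg.inv(S_w)
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

    When w = 1 this reduces to the standard Kalman update, so nothing is lost in outlier-free conditions.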

    A High-Precision Calibration Method for Stereo Vision System
