
    Datasets and Benchmarking of a path planning pipeline for planetary rovers

    We present datasets of 2.5D elevation maps of planetary environments that were collected on Mt. Etna during the space-analogue ARCHES mission [1]. In addition to the raw elevation maps, we provide cost maps that encode the traversability of the terrain. We demonstrate how these cost maps are used during our development of mapping and planning algorithms for ground-based robots in the context of planetary rover navigation. More specifically, we use the benchmarking pipeline to evaluate the parameters and choice of methods used for 2.5D cost map generation, which in turn affects the path planning behavior. Finally, we showcase how the provided maps can be supplied as a test environment in Bench-MR, a framework for benchmarking motion planning algorithms for wheeled robots.
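The abstract does not specify how the cost maps are derived from the elevation maps; as a minimal illustration of the general idea, the sketch below converts a 2.5D elevation grid into a traversability cost map by thresholding the local slope. The cell size, slope limit and cost scaling are assumptions for this example, not the paper's actual parameters.

```python
import numpy as np

def elevation_to_cost(elev, cell_size=0.1, max_slope_deg=25.0):
    """Convert a 2.5D elevation map (meters) to a traversability cost map.

    Cost is the local slope normalized by an assumed maximum traversable
    slope; cells steeper than the limit become infinite (untraversable).
    """
    gy, gx = np.gradient(elev, cell_size)            # elevation gradients [m/m]
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))  # slope angle per cell
    cost = slope / max_slope_deg                     # 0 = flat, 1 = at limit
    cost[slope > max_slope_deg] = np.inf             # mark lethal cells
    return cost

# Example: a tilted plane with a uniform 10-degree slope
x = np.arange(20) * 0.1
elev = np.tile(x * np.tan(np.radians(10.0)), (20, 1))
cost = elevation_to_cost(elev)                       # every cell: 10/25 = 0.4
```

Real pipelines typically combine several such layers (slope, roughness, step height) into one cost map; this sketch shows only the slope term.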

    Gaussian Process Gradient Maps for Loop-Closure Detection in Unstructured Planetary Environments

    The ability to recognize previously mapped locations is an essential feature for autonomous systems. Unstructured planetary-like environments pose a major challenge to these systems due to the similarity of the terrain. As a result, the ambiguity of the visual appearance makes state-of-the-art visual place recognition approaches less effective than in urban or man-made environments. This paper presents a method to solve the loop closure problem using only spatial information. The key idea is a novel continuous and probabilistic representation of terrain elevation maps. Given 3D point clouds of the environment, the proposed approach exploits Gaussian Process (GP) regression with linear operators to generate continuous gradient maps of the terrain elevation information. Traditional image registration techniques are then used to search for potential matches. Loop closures are verified by leveraging both the spatial characteristics of the elevation maps (SE(2) registration) and the probabilistic nature of the GP representation. A submap-based localization and mapping framework is used to demonstrate the validity of the proposed approach. The performance of this pipeline is evaluated and benchmarked using real data from a rover that is equipped with a stereo camera and navigates in challenging, unstructured planetary-like environments in Morocco and on Mt. Etna.
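The closed-form differentiation of a GP posterior that underlies such gradient maps can be sketched as follows. This toy version uses a plain squared-exponential kernel and dense linear algebra; the paper's actual pipeline, kernel choice and hyperparameters may differ.

```python
import numpy as np

def rbf(A, B, ell=1.0):
    """Squared-exponential kernel between point sets A (N,2) and B (M,2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def gp_gradient_map(X, y, Xq, ell=1.0, noise=1e-2):
    """Posterior mean of the elevation gradient at query points Xq.

    Because differentiation is a linear operator, the gradient of the GP
    posterior mean is available in closed form from noisy samples (X, y).
    """
    K = rbf(X, X, ell) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, y)                 # (K + sigma^2 I)^-1 y
    Kq = rbf(Xq, X, ell)                          # (Q, N)
    # d/dx* k(x*, x_i) = -(x* - x_i) / ell^2 * k(x*, x_i)
    diff = Xq[:, None, :] - X[None, :, :]         # (Q, N, 2)
    dK = -diff / ell**2 * Kq[:, :, None]
    return np.einsum('qnd,n->qd', dK, alpha)      # (Q, 2) gradient mean

# Tiny demo: elevation rising linearly along x with slope 0.5
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 2))
y = 0.5 * X[:, 0] + 0.01 * rng.standard_normal(200)
grad = gp_gradient_map(X, y, np.array([[0.0, 0.0]]))  # approx [0.5, 0.0]
```

The dense solve here is O(N^3); the GPGMap papers use Structured Kernel Interpolation precisely to avoid that cost on large submaps.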

    GPGM-SLAM: a Robust SLAM System for Unstructured Planetary Environments with Gaussian Process Gradient Maps

    Simultaneous Localization and Mapping (SLAM) techniques play a key role towards long-term autonomy of mobile robots due to their ability to correct localization errors and produce consistent maps of an environment over time. In contrast to urban or man-made environments, where unique objects and structures offer distinctive cues for localization, the appearance of unstructured natural environments is often ambiguous and self-similar, hindering the performance of loop closure detection. In this paper, we present an approach to improve the robustness of place recognition in the context of a submap-based stereo SLAM system based on Gaussian Process Gradient Maps (GPGMaps). GPGMaps embed a continuous representation of the gradients of the local terrain elevation by means of Gaussian Process regression and Structured Kernel Interpolation, given solely noisy elevation measurements. We leverage the image-like structure of GPGMaps to detect loop closures using traditional visual features and Bag of Words. GPGMap matching is performed as an SE(2) alignment to establish loop closure constraints within a pose graph. We evaluate the proposed pipeline on a variety of datasets recorded on Mt. Etna, Sicily, and in the Moroccan desert, respectively Moon- and Mars-like environments, and we compare the localization performance with state-of-the-art approaches for visual SLAM and visual loop closure detection.
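The SE(2) alignment step mentioned above can be illustrated with the standard closed-form least-squares solution for matched 2D keypoints (a generic Kabsch/Procrustes sketch, not the authors' implementation; the random correspondences below are synthetic):

```python
import numpy as np

def align_se2(P, Q):
    """Closed-form SE(2) alignment: find rotation R (2x2) and translation
    t (2,) minimizing sum ||R p_i + t - q_i||^2 over matched points."""
    mp, mq = P.mean(0), Q.mean(0)
    H = (P - mp).T @ (Q - mq)                  # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # enforce a proper rotation
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mq - R @ mp
    return R, t

# Check: recover a known 30-degree rotation and translation
theta = np.radians(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([1.0, -2.0])
P = np.random.default_rng(1).uniform(-1, 1, (50, 2))
Q = P @ R_true.T + t_true
R, t = align_se2(P, Q)
```

In practice the correspondences would come from visual feature matching on the gradient maps and be filtered with an outlier-rejection scheme such as RANSAC before this least-squares step.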

    Robust place recognition with Gaussian Process Gradient Maps for teams of robotic explorers in challenging lunar environments

    Teams of mobile robots will play a key role in future planetary exploration missions. In fact, plans for upcoming exploration of the Moon and other extraterrestrial bodies foresee extensive use of robots for in-situ analysis, building infrastructure and producing maps of the environment for its exploitation. To enable prolonged robotic autonomy, however, it is critical for the robotic agents to robustly localize themselves during their motion and, concurrently, to produce maps of the environment. To this end, visual SLAM (Simultaneous Localization and Mapping) techniques have been developed over the years and have found successful application in several terrestrial fields, such as autonomous driving, automated construction and agricultural robotics. To this day, autonomous navigation has been demonstrated in various robotic missions to Mars, from NASA's Mars Exploration Rover (MER) missions to NASA's Mars Science Laboratory (Curiosity) and the current Mars 2020 Perseverance, thanks to the implementation of Visual Odometry (VO), which uses cameras to robustly estimate the rover's ego-motion. While VO techniques enable the traversal of large distances from one scientific target to another, future operations, e.g., for building or maintaining infrastructure, will require robotic agents to repeatedly visit the same environment. In this case, the ability to re-localize with respect to previously visited places, and therefore to create consistent maps of the environment, is paramount to achieve localization accuracies far beyond what is achievable with global localization approaches. The planetary environment, however, poses significant challenges to this goal, due to extreme lighting conditions, severe visual aliasing and a lack of uniquely identifiable natural "features".
For this reason, we developed an approach for re-localization and place recognition that relies on Gaussian Processes to efficiently represent portions of the local terrain elevation, named "GPGMaps" (Gaussian Process Gradient Maps), and uses their gradients in conjunction with traditional visual matching techniques. In this paper, we demonstrate, analyze and report the performance of our GPGMap-based SLAM approach during the 2022 ARCHES (Autonomous Robotic Networks to Help Modern Societies) mission, which took place on the volcanic ash slopes of Mt. Etna, Sicily, a designated planetary-analogue environment. The proposed SLAM system was deployed for real-time usage on a robotic team that includes the LRU (Lightweight Rover Unit), a planetary-like rover with high autonomy, perception and locomotion capabilities, to demonstrate enabling technologies for future lunar applications.

    Testing for the MMX Rover Autonomous Navigation Experiment on Phobos

    The MMX rover will explore the surface of Phobos, Mars' larger moon. It will use its stereo cameras to perceive the environment, enabling the use of vision-based autonomous navigation algorithms. The German Aerospace Center (DLR) is currently developing the corresponding autonomous navigation experiment that will allow the rover to efficiently explore the surface of Phobos, despite limited communication with Earth and long turn-around times for operations. This paper discusses our testing strategy for the autonomous navigation solution. We present our general testing strategy for the software, considering a development approach with agile aspects. We detail how we ensure successful integration with the rover system despite having limited access to the flight hardware. We furthermore discuss which environmental conditions on Phobos pose a potential risk to the navigation algorithms and how we test for these accordingly. Our testing is mostly dataset-based, and we describe our approaches for recording navigation data that is representative both of the rover system and of the Phobos environment. Finally, we make the corresponding dataset publicly available and provide an overview of its contents.

    Mobility on the Surface of Phobos for the MMX Rover - Simulation-aided Movement planning

    The MMX rover, recently named IDEFIX, will be the first wheeled robotic system to be operated in a milli-g environment. Mobility in this environment, particularly in combination with the interrupted communication schedule and the activation of on-board autonomous functions such as attitude control, requires efficient planning. The Mobility Group within the MMX Rover Team is tasked with proposing optimal solutions to move the rover safely and efficiently to its destination so that it may achieve its scientific goals. These movements combine various commands to the locomotion system and to the navigation systems developed by both institutions. In the mission's early phase, these actions will rely heavily on manual driving commands to the locomotion system until the rover behavior and environment assumptions are confirmed. Planning safe and efficient rover movements is a multi-step process. This paper focuses on the challenges and limitations in sequencing movements for a rover on Phobos in the context of the MMX mission. The context in which this process takes place is described in terms of available data and operational constraints.

    Preliminary Results for the Multi-Robot, Multi-Partner, Multi-Mission, Planetary Exploration Analogue Campaign on Mount Etna

    This paper was initially intended to report on the outcome of the twice-postponed demonstration mission of the ARCHES project. Due to the global COVID pandemic, it was postponed from 2020, then 2021, to 2022. Nevertheless, the development of our concepts and integration has progressed rapidly, and some of the preliminary results are worthwhile to share with the community to drive the dialog on robotic planetary exploration strategies. This paper includes an overview of the planned 4-week campaign, as well as the vision and relevance of the mission towards the planned official space missions. Furthermore, the cooperative aspects of the robotic teams, the scientific motivation and the sub-task achievements are summarised.

    Finally! Insights into the ARCHES Lunar Planetary Exploration Analogue Campaign on Etna in summer 2022

    This paper summarises the first outcomes of the space demonstration mission of the ARCHES project, which was performed this year from 13 June until 10 July on Italy's Mt. Etna in Sicily. After the second COVID-related postponement of the campaign initially planned for 2020, we are now very happy to report that the whole campaign, with more than 65 participants over four weeks, has been successfully conducted. In this short overview paper, we refer to all other related publications here at IAC 2022. This paper includes an overview of the performed 4-week campaign and the achieved mission goals and first results, and also shares our findings on the organisational and planning aspects.

    Topological map for biologically inspired vision based navigation

    As a step towards achieving a mobile robot which can navigate autonomously, this work aims to develop a method to create a topological map for biologically inspired viewframe-based navigation. The developments made in computer vision and the falling cost of cameras have led to research efforts in appearance-based mapping and navigation for mobile robots. The idea to use biological navigation concepts is inspired by the observation that insects like ants and bees are able to navigate autonomously and robustly in their environments, mostly using vision as their primary sensor, in spite of having limited computational abilities and a small brain. This can come in handy for small robots, like flying robots, which are limited in computational and memory resources. Locations in the environment are represented using a viewframe that is identified by visually inspecting the environment at a particular location and consists of a set of landmark views, where each landmark view denotes a landmark observation (ID, descriptor and angle). BRISK features extracted from omnidirectional images were used to detect and identify these landmarks in the environment. A method to build a topological map using dimensionality reduction techniques was implemented and evaluated using this viewframe information. Two measures, namely an overlap similarity measure and a bearing angle dissimilarity measure, were implemented and used to represent the dissimilarity between any two viewframes. Using these measures, a dissimilarity matrix was computed and provided as input to the dimensionality reduction techniques. Also, two quality measures, namely a connectivity measure and an isometry measure, were introduced for evaluating the generated map quality. Experiments were conducted in simulation for a set of robotic trails under varying sensor and environmental parameters. Also, experiments were conducted on the Pioneer 3-DX robot in an indoor lab environment.
Results show the success of the approach in building a topological map automatically using only landmark information. The resultant map was used for homing experiments and was successful in identifying shorter, optimal paths as compared to its predecessor, the Trail-Map, which only preserves relationships between adjacent locations.
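The dissimilarity-matrix-plus-dimensionality-reduction pipeline described above can be sketched with a set-overlap dissimilarity and classical multidimensional scaling. This is a simplified stand-in for the measures and reduction techniques evaluated in the thesis, and the landmark-ID sets below are hypothetical.

```python
import numpy as np

def overlap_dissimilarity(vf_a, vf_b):
    """1 minus the Jaccard overlap of landmark IDs between two viewframes."""
    shared = len(vf_a & vf_b)
    return 1.0 - shared / max(len(vf_a | vf_b), 1)

def classical_mds(D, dim=2):
    """Embed an (N, N) dissimilarity matrix D into `dim` dimensions."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n     # centering matrix
    B = -0.5 * J @ (D ** 2) @ J             # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]         # keep the top eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Three viewframes: A and B share most landmarks, C is distinct
A = {1, 2, 3, 4}; B = {2, 3, 4, 5}; C = {10, 11, 12, 13}
frames = [A, B, C]
D = np.array([[overlap_dissimilarity(p, q) for q in frames] for p in frames])
coords = classical_mds(D)   # A and B land close together, C far away
```

The resulting low-dimensional coordinates are what makes "closeness" between non-adjacent viewframes measurable, which is the basis for the shortcut edges used in homing.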

    Experimental Evaluation and Improvement of a Viewframe-Based Navigation Method

    Insects like ants and bees navigate robustly in their environments, in spite of their small brains, using vision as their primary sensor. Inspired by this, researchers at DLR are working on a range-free navigation system using visual features. This ability is especially useful for autonomous navigation in large environments and for computationally limited small robots. Each location in the environment is represented as a viewframe. A viewframe is a set of landmark observations, where each landmark observation contains a landmark ID, a descriptor and the corresponding angle with respect to the robot's location. Binary Robust Invariant Scalable Keypoints (BRISK) features extracted from omnidirectional images were used as landmarks in this work. The environment is represented as a Trail-Map, which preserves the relationships between adjacent viewframes and is efficient at both storing the map and pruning its size when required. This work experimentally evaluates the current system and improves it. As an extension to the Trail-Map representation, topological knowledge was extracted with the help of dimensionality reduction techniques and by defining dissimilarity measures between any two viewframes. Using this topological knowledge, a pose graph is built by adding edges between viewframes based on how close they are, in addition to the adjacency connections. With the help of this map, shorter paths were identified for homing. The topological mapping pipeline was implemented on the robot and experiments were performed in both indoor and outdoor environments. The performance of different dissimilarity measures and dimensionality reduction techniques in building a topological map of viewframes was evaluated. The experiments showed that, using this pose graph representation, the robot could take shorter paths, which are subsets of the long exploration paths, by exploiting the intersections of those paths.
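The pose-graph construction and homing described above can be sketched as follows: trail-adjacency edges plus shortcut edges between viewframes that are close in the embedded topological map, then a breadth-first search for the homing path. The positions, threshold and graph layout below are illustrative assumptions, not the experiments' actual data.

```python
from collections import deque

def build_pose_graph(n, embedded_dist, threshold=1.0):
    """Adjacency edges along the traversal, plus shortcut edges between
    viewframes whose embedded (topological) distance is below a threshold."""
    graph = {i: set() for i in range(n)}
    for i in range(n - 1):                        # trail adjacency
        graph[i].add(i + 1); graph[i + 1].add(i)
    for i in range(n):
        for j in range(i + 2, n):                 # non-adjacent pairs only
            if embedded_dist(i, j) < threshold:   # close in the map: shortcut
                graph[i].add(j); graph[j].add(i)
    return graph

def shortest_path(graph, start, goal):
    """Breadth-first search over the pose graph (unweighted hop count)."""
    prev, frontier = {start: None}, deque([start])
    while frontier:
        u = frontier.popleft()
        if u == goal:
            path = []
            while u is not None:
                path.append(u); u = prev[u]
            return path[::-1]
        for v in graph[u]:
            if v not in prev:
                prev[v] = u; frontier.append(v)
    return None

# A square exploration loop: viewframes 0 and 8 were recorded at nearby
# poses, so a shortcut edge lets homing skip retracing the whole trail.
pos = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1), (0, 0.2)]
dist = lambda i, j: ((pos[i][0] - pos[j][0]) ** 2
                     + (pos[i][1] - pos[j][1]) ** 2) ** 0.5
g = build_pose_graph(len(pos), dist, threshold=0.5)
home = shortest_path(g, 8, 0)   # one hop instead of retracing 8 trail edges
```

This mirrors the idea in the abstract: the homing path is a subset of the exploration trail augmented with intersections discovered through the topological embedding.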