Adaptive and intelligent navigation of autonomous planetary rovers - A survey
The application of robotics and autonomous systems in space has increased dramatically. The ongoing Mars rover mission involving the Curiosity rover, along with the success of its predecessors, is a key milestone that showcases the existing capabilities of robotic technology. Nevertheless, these systems still rely heavily on human tele-operators to drive them. Reducing the reliance on human experts for navigational tasks on Mars remains a major challenge due to the harsh and complex nature of Martian terrain. Developing a truly autonomous rover capable of navigating effectively in such environments requires intelligent and adaptive methods suited to a system with limited resources. This paper surveys a representative selection of work applicable to autonomous planetary rover navigation, discussing ongoing challenges and promising future research directions from the perspectives of the authors.
Learning Off-Road Terrain Traversability with Self-Supervisions Only
Estimating the traversability of terrain should be reliable and accurate in diverse conditions for autonomous driving in off-road environments. However, learning-based approaches often yield unreliable results when confronted with unfamiliar contexts, and it is challenging to obtain manual annotations frequently for new circumstances. In this paper, we introduce a method for learning traversability from images that uses only self-supervision and no manual labels, enabling it to easily learn traversability in new circumstances. To this end, we first generate self-supervised traversability labels from past driving trajectories by labeling regions traversed by the vehicle as highly traversable. Using the self-supervised labels, we then train a neural network that identifies terrain that is safe to traverse from an image using a one-class classification algorithm. Additionally, we compensate for the limitations of self-supervised labels by incorporating methods of self-supervised learning of visual representations. To conduct a comprehensive evaluation, we collect data in a variety of driving environments and perceptual conditions and show that our method produces reliable estimates in various environments. In addition, the experimental results validate that our method outperforms other self-supervised traversability estimation methods and achieves performance comparable to supervised learning methods trained on manually labeled data.
Comment: Accepted to IEEE Robotics and Automation Letters. Our video can be found at https://bit.ly/3YdKan
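The label-generation step lends itself to a compact illustration. Below is a minimal sketch, not the authors' implementation: the projected trajectory footprint, the per-patch feature embeddings, and the use of a one-class SVM as the one-class classifier are all stand-in assumptions.

```python
# Sketch of self-supervised traversability labeling: pixels the vehicle
# actually drove over become positive labels, and a one-class model then
# learns "traversable" from positives alone. Features are random stand-ins.
import numpy as np
from sklearn.svm import OneClassSVM

def footprint_labels(image_shape, footprint_pixels):
    """Mark pixels covered by past driving trajectories as traversable (1)."""
    labels = np.zeros(image_shape, dtype=np.uint8)
    labels[footprint_pixels[:, 0], footprint_pixels[:, 1]] = 1
    return labels

footprint = np.array([[100, 320], [101, 321], [102, 322]])  # projected trajectory
labels = footprint_labels((480, 640), footprint)

# One-class classification: fit on features of traversed patches only;
# anything dissimilar to previously traversed terrain is flagged non-traversable.
positive_features = np.random.rand(500, 16)   # stand-in for patch embeddings
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(positive_features)
is_traversable = clf.predict(np.random.rand(10, 16)) == 1  # +1 = inlier
```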
WayFAST: Navigation with Predictive Traversability in the Field
We present a self-supervised approach for learning to predict traversable paths for wheeled mobile robots that require good traction to navigate. Our algorithm, termed WayFAST (Waypoint Free Autonomous Systems for Traversability), uses RGB and depth data, along with navigation experience, to autonomously generate traversable paths in outdoor unstructured environments. Our key insight is that traction can be estimated for rolling robots using kinodynamic models. Using traction estimates provided by an online receding-horizon estimator, we are able to train a traversability prediction neural network in a self-supervised manner, without requiring the heuristics used by previous methods. We demonstrate the effectiveness of WayFAST through extensive field testing in varying environments, ranging from sandy dry beaches to forest canopies and snow-covered grass fields. Our results clearly demonstrate that WayFAST can learn to avoid geometric obstacles as well as untraversable terrain, such as snow, which would be difficult to avoid with sensors that provide only geometric data, such as LiDAR. Furthermore, we show that our training pipeline based on online traction estimates is more data-efficient than other heuristic-based methods.
Comment: Project website with code and videos: https://mateusgasparino.com/wayfast-traversability-navigation/ Published in IEEE Robotics and Automation Letters (RA-L, 2022). Accepted for presentation at the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022).
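The traction-as-supervision idea can be sketched compactly. The snippet below is an illustrative simplification, not WayFAST's receding-horizon estimator: it compares commanded and measured velocities under a unicycle-style kinodynamic model to produce traction coefficients that could serve as self-supervised regression targets.

```python
# Compare motion predicted by a simple kinodynamic (unicycle) model with
# the odometry actually achieved; the ratio acts as a per-step traction label.
import numpy as np

def traction_labels(v_cmd, w_cmd, v_meas, w_meas, eps=1e-6):
    """Traction coefficients in [0, 1]: 1 = full traction, 0 = no grip."""
    mu_v = np.clip(np.abs(v_meas) / (np.abs(v_cmd) + eps), 0.0, 1.0)
    mu_w = np.clip(np.abs(w_meas) / (np.abs(w_cmd) + eps), 0.0, 1.0)
    return mu_v, mu_w

# Example: commanded 1.0 m/s but only achieved 0.35 m/s (e.g., on snow).
mu_v, mu_w = traction_labels(v_cmd=1.0, w_cmd=0.2, v_meas=0.35, w_meas=0.18)
print(mu_v, mu_w)  # low linear traction -> low traversability label
```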
A Near-to-Far Learning Framework for Terrain Characterization Using an Aerial/Ground-Vehicle Team
In this thesis, a novel framework for adaptive terrain characterization of untraversed far terrain in a natural outdoor setting is presented. The system learns the association between the visual appearance of different terrains and the proprioceptive characteristics of those terrains in a self-supervised framework. The proprioceptive characteristics are acquired by inertial sensors recording one-second traversals that are mapped into the frequency domain and then classified into discrete proprioceptive classes through a clustering technique. These labels are subsequently used as training inputs to the adaptive visual classifier. The visual classifier uses images captured by an aerial vehicle scouting ahead of the ground vehicle and extracts local and global descriptors from image patches. An incremental SVM is applied to the images and training sets as they are acquired sequentially. The framework proposed in this thesis has been experimentally validated in an outdoor environment. We compare the results of the adaptive approach with an offline a priori classification approach and observe an average 12% increase in accuracy in outdoor settings. The adaptive classifier gradually learns the association between proprioceptive characteristics and visual features of new terrain interactions and modifies its decision boundaries accordingly.
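To make the proprioceptive side concrete, here is a hedged sketch of the pipeline just described; the 100 Hz sampling rate, the choice of k-means as the clustering technique, and the synthetic windows are all assumptions, not details from the thesis.

```python
# Map one-second inertial traversals into the frequency domain, then
# cluster the spectra into discrete terrain classes that later serve as
# training labels for the visual classifier.
import numpy as np
from sklearn.cluster import KMeans

def spectral_features(imu_window):
    """Normalized magnitude spectrum of a one-second acceleration window."""
    spectrum = np.abs(np.fft.rfft(imu_window - imu_window.mean()))
    return spectrum / (spectrum.sum() + 1e-9)

# 200 one-second traversals sampled at 100 Hz (synthetic stand-in data).
windows = np.random.randn(200, 100)
features = np.stack([spectral_features(w) for w in windows])

# Discrete proprioceptive classes: these label the image patches seen by
# the aerial vehicle at the corresponding locations.
terrain_labels = KMeans(n_clusters=4, n_init=10).fit_predict(features)
```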
Unsupervised learning for long-term autonomy
This thesis investigates methods to enable a robot to build and maintain an environment model in an automatic manner. Such capabilities are especially important in long-term autonomy, where robots operate for extended periods of time without human intervention. In such scenarios we can no longer assume that the environment and the models will remain static. Rather, changes are expected, and the robot needs to adapt to new, unseen circumstances automatically. The approach described in this thesis is based on clustering the robot's sensing information. This provides a compact representation of the data which can be updated as more information becomes available. The work builds on affinity propagation (Frey and Dueck, 2007), a clustering method which obtains high-quality clusters while requiring only similarities between pairs of points and, importantly, selects the number of clusters automatically. This is essential for real autonomy, as we typically do not know a priori how many clusters best represent the data. The contributions of this thesis are threefold. First, a self-supervised method capable of learning a visual appearance model in long-term autonomy settings is presented. Second, affinity propagation is extended to handle multiple sensor modalities, as often occur in robotics, in a principled way. Third, a method for joint clustering and outlier selection is proposed which selects a user-defined number of outliers while clustering the data. This is solved using an extension of affinity propagation as well as a Lagrangian duality approach which provides guarantees on the optimality of the solution.
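Since affinity propagation is the backbone here, a small sketch illustrates the two properties the thesis relies on: it consumes only pairwise similarities, and it picks the cluster count itself. The scikit-learn implementation below stands in for the thesis's own extensions.

```python
# Affinity propagation from pairwise similarities only; the number of
# clusters is not specified in advance but emerges from message passing.
import numpy as np
from sklearn.cluster import AffinityPropagation

X = np.random.rand(100, 8)  # stand-in for sensor descriptors

# Similarity = negative squared Euclidean distance (the standard choice).
S = -((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)

ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(S)
print("clusters found automatically:", len(ap.cluster_centers_indices_))
```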
Robot Mapping and Navigation in Real-World Environments
Robots can perform various tasks, such as mapping hazardous sites, taking part in search-and-rescue scenarios, or delivering goods and people. Robots operating in the real world face many challenges on the way to completing their mission. Essential capabilities required for the operation of such robots are mapping, localization, and navigation. Solving all of these tasks robustly presents a substantial difficulty, as these components are usually interconnected: a robot that starts without any knowledge about the environment must simultaneously build a map, localize itself in it, analyze the surroundings, and plan a path to efficiently explore an unknown environment. In addition to the interconnections between these tasks, they depend heavily on the sensors used by the robot and on the type of environment in which the robot operates. For example, an RGB camera can be used in an outdoor scene to compute visual odometry or to detect dynamic objects, but becomes less useful in an environment that does not have enough light for cameras to operate. The software that controls the behavior of the robot must seamlessly process all the data coming from different sensors. This often leads to systems that are tailored to a particular robot and a particular set of sensors. In this thesis, we challenge this concept by developing and implementing methods for a typical robot navigation pipeline that work seamlessly with different types of sensors in both indoor and outdoor environments. With the emergence of new range-sensing RGBD and LiDAR sensors, there is an opportunity to build a single system that operates robustly in indoor and outdoor environments equally well and thus extends the application areas of mobile robots. The techniques presented in this thesis are designed to work with both RGBD and LiDAR sensors, without adaptation to individual sensor models, by using a range image representation, and aim to provide methods for navigation and scene interpretation in both static and dynamic environments.
For a static world, we present a number of approaches that address the core components of a typical robot navigation pipeline. At the core of building a consistent map of the environment with a mobile robot lies point cloud matching. To this end, we present a method for photometric point cloud matching that treats RGBD and LiDAR sensors in a uniform fashion and is able to accurately register point clouds at the frame rate of the sensor. This method serves as a building block for the rest of the mapping pipeline. In addition to the matching algorithm, we present a method for traversability analysis of the currently observed terrain in order to guide an autonomous robot to the safe parts of the surrounding environment. A source of danger when navigating difficult-to-access sites is that the robot may fail to build a correct map of the environment. This dramatically impacts the ability of an autonomous robot to navigate towards its goal in a robust way, so it is important for the robot to be able to detect these situations and to find its way home without relying on any kind of map. To address this challenge, we present a method for analyzing the quality of the map that the robot has built to date, and for safely returning the robot to the starting point in case the map is found to be in an inconsistent state.
Scenes in dynamic environments are vastly different from those experienced in static ones. In a dynamic setting, objects can be moving, making static traversability estimates insufficient. With the approaches developed in this thesis, we aim at identifying distinct objects and tracking them to aid navigation and scene understanding. We target these challenges by providing a method for clustering a scene taken with a LiDAR scanner, along with a similarity measure between clustered objects that can aid tracking performance. All methods presented in this thesis are capable of supporting real-time robot operation, rely on RGBD or LiDAR sensors, and have been tested on real robots in real-world environments and on real-world datasets. All approaches have been published in peer-reviewed conference papers and journal articles. In addition, most of the presented contributions have been released publicly as open-source software.
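The range image representation that underpins this sensor-agnostic design can be sketched in a few lines. The spherical projection below is a common formulation, and the field-of-view and resolution values are assumptions, not values from the thesis.

```python
# Project a 3D point cloud into a 2D grid indexed by azimuth and
# elevation, storing range per cell; the same grid works for RGBD depth
# and spinning LiDAR alike.
import numpy as np

def to_range_image(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """points: (N, 3) array of x, y, z in the sensor frame."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                                       # azimuth
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1, 1))   # elevation
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    u = ((1 - (pitch - fov_down_r) / (fov_up_r - fov_down_r)) * h).astype(int)
    v = ((0.5 * (1 + yaw / np.pi)) * w).astype(int)
    img = np.full((h, w), -1.0)                                  # -1 = no return
    valid = (u >= 0) & (u < h) & (v >= 0) & (v < w)
    img[u[valid], v[valid]] = r[valid]
    return img

range_img = to_range_image(np.random.randn(10000, 3) * 10)
```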
Visual Prediction of Rover Slip: Learning Algorithms and Field Experiments
Perception of the surrounding environment is an essential tool for intelligent navigation in any autonomous vehicle. In the context of Mars exploration, there is a strong motivation to enhance the perception of the rovers beyond geometry-based obstacle avoidance, so as to be able to predict potential interactions with the terrain. In this thesis we propose to remotely predict the amount of slip, which reflects the mobility of the vehicle on future terrain. The method is based on learning from experience and uses visual information from stereo imagery as input. We test the algorithm on several robot platforms and in different terrains. We also demonstrate its usefulness in an integrated system, onboard a Mars prototype rover in the JPL Mars Yard.
Another desirable capability for an autonomous robot is to be able to learn about its interactions with the environment in a fully automatic fashion. We propose an algorithm which uses the robot's sensors as supervision for vision-based learning of different terrain types. This algorithm can work with noisy and ambiguous signals provided from onboard sensors. To be able to cope with rich, high-dimensional visual representations we propose a novel, nonlinear dimensionality reduction technique which exploits automatic supervision. The method is the first to consider supervised nonlinear dimensionality reduction in a probabilistic framework using supervision which can be noisy or ambiguous.
Finally, we consider the problem of learning to recognize different terrains, which addresses the time constraints of an onboard autonomous system. We propose a method which automatically learns a variable-length feature representation depending on the complexity of the classification task. The proposed approach achieves a good trade-off between reduced computational time and recognition performance.
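The slip-prediction framing above reduces naturally to a regression problem. The toy sketch below is illustrative only: stereo-derived geometry is collapsed to a slope angle, appearance to a discrete terrain class, a random forest stands in for the learned predictor, and the data is synthetic.

```python
# Regress the slip measured on past traversals from visual terrain
# features, so slip on yet-untraversed terrain can be predicted remotely.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
slope = rng.uniform(0, 20, 300)                 # degrees, from stereo geometry
terrain = rng.integers(0, 3, 300)               # appearance-based terrain class
slip = np.clip(0.02 * slope + 0.1 * terrain + rng.normal(0, 0.05, 300), 0, 1)

X = np.column_stack([slope, terrain])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, slip)

# Predict slip for terrain seen only through the cameras.
print(model.predict([[15.0, 2]]))  # expected slip fraction on steep, loose ground
```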