
    Learning Ground Traversability from Simulations

    Mobile ground robots operating on unstructured terrain must predict which areas of the environment they are able to traverse in order to plan feasible paths. We address traversability estimation as a heightmap classification problem: we build a convolutional neural network that, given an image representing the heightmap of a terrain patch, predicts whether the robot will be able to traverse that patch from left to right. The classifier is trained for a specific robot model (wheeled, tracked, legged, snake-like) using simulation data on procedurally generated training terrains; the trained classifier can be applied to unseen large heightmaps to yield oriented traversability maps, which can then be used to plan traversable paths. We extensively evaluate the approach in simulation on six real-world elevation datasets, and run a real-robot validation in one indoor and one outdoor environment. (Webpage: http://romarcg.xyz/traversability_estimation)
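    To make the patch-classification setup concrete, the following is a minimal sketch in PyTorch of a heightmap-patch traversability classifier. The 64x64 patch size, layer widths, and the name TraversabilityCNN are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal sketch of a heightmap-patch traversability classifier (assumed PyTorch).
# Patch size and layer sizes are illustrative, not the paper's architecture.
import torch
import torch.nn as nn

class TraversabilityCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32x32 -> 16x16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, 1),  # logit: traversable from left to right, or not
        )

    def forward(self, heightmap_patch):
        return self.classifier(self.features(heightmap_patch))

# Usage: one single-channel (elevation) 64x64 patch.
model = TraversabilityCNN()
patch = torch.randn(1, 1, 64, 64)
prob_traversable = torch.sigmoid(model(patch))
```

    A plausible way to obtain the oriented traversability maps mentioned above is to evaluate such a classifier on rotated crops of a larger heightmap, one orientation at a time.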

    A Near-to-Far Learning Framework for Terrain Characterization Using an Aerial/Ground-Vehicle Team

    In this thesis, a novel framework for adaptive terrain characterization of untraversed far terrain in a natural outdoor setting is presented. The system learns the association between the visual appearance of different terrain and the proprioceptive characteristics of that terrain in a self-supervised framework. The proprioceptive characteristics of the terrain are acquired by inertial sensors recording measurements over one-second traversals; these measurements are mapped into the frequency domain and then grouped into discrete proprioceptive classes by a clustering technique. These class labels are used as training inputs to the adaptive visual classifier. The visual classifier uses images captured by an aerial vehicle scouting ahead of the ground vehicle and extracts local and global descriptors from image patches. An incremental SVM is applied to the images and training labels as they are acquired sequentially. The framework proposed in this thesis has been experimentally validated in an outdoor environment. Compared with an offline a priori classification approach, the adaptive approach yields an average 12% increase in accuracy in outdoor settings. The adaptive classifier gradually learns the association between proprioceptive characteristics and visual features of new terrain interactions and modifies the decision boundaries accordingly.
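    The near-to-far pipeline sketched below clusters frequency-domain proprioceptive signatures into discrete classes and then uses those labels to incrementally train a visual classifier. The FFT feature, the k-means clustering, and the hinge-loss SGD classifier standing in for an incremental SVM are illustrative assumptions, not the thesis's exact implementation.

```python
# Hedged sketch of near-to-far self-supervision: proprioceptive clustering
# provides labels for an incrementally trained visual classifier.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import SGDClassifier

def proprio_features(imu_window, n_bins=16):
    """Map a one-second window of vertical acceleration into the frequency domain."""
    spectrum = np.abs(np.fft.rfft(imu_window))
    return spectrum[:n_bins]

# 1) Self-supervised labels: cluster the proprioceptive signatures into classes.
imu_windows = np.random.randn(200, 100)           # 200 one-second traversals (placeholder data)
proprio = np.array([proprio_features(w) for w in imu_windows])
labels = KMeans(n_clusters=3, n_init=10).fit_predict(proprio)

# 2) Incremental visual classifier: hinge-loss SGD approximates a linear SVM and
#    supports partial_fit as aerial image descriptors arrive sequentially.
visual_features = np.random.randn(200, 64)        # patch descriptors (placeholder data)
clf = SGDClassifier(loss="hinge")
for i in range(0, 200, 20):                       # feed the data in sequential batches
    clf.partial_fit(visual_features[i:i+20], labels[i:i+20], classes=np.arange(3))
```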

    Adaptive and intelligent navigation of autonomous planetary rovers - A survey

    The application of robotics and autonomous systems in space has increased dramatically. The ongoing Mars rover mission involving the Curiosity rover, along with the success of its predecessors, is a key milestone that showcases the existing capabilities of robotic technology. Nevertheless, there has still been a heavy reliance on human tele-operators to drive these systems. Reducing the reliance on human experts for navigational tasks on Mars remains a major challenge due to the harsh and complex nature of the Martian terrain. The development of a truly autonomous rover system capable of navigating effectively in such environments requires intelligent and adaptive methods suited to a system with limited resources. This paper surveys a representative selection of work applicable to autonomous planetary rover navigation, discussing some ongoing challenges and promising future research directions from the perspectives of the authors.

    Traversability analysis in unstructured forested terrains for off-road autonomy using LIDAR data

    Scene perception and traversability analysis are real challenges for autonomous driving systems. In the context of off-road autonomy, there are additional challenges due to the unstructured environments and the existence of various vegetation types. It is necessary for Autonomous Ground Vehicles (AGVs) to be able to identify obstacles and load-bearing surfaces in the terrain to ensure safe navigation (McDaniel et al. 2012). The presence of vegetation in off-road autonomy applications presents unique challenges for scene understanding: 1) understory vegetation makes it difficult to detect obstacles or to identify load-bearing surfaces; and 2) trees are usually regarded as obstacles even though only the trunks of the trees pose a collision risk in navigation. The overarching goal of this dissertation was to study traversability analysis in unstructured forested terrains for off-road autonomy using LIDAR data. More specifically, to address the aforementioned challenges, this dissertation studied the impact of understory vegetation density on the solid-obstacle detection performance of off-road autonomous systems. By leveraging a physics-based autonomous driving simulator, a classification-based machine learning framework was proposed for obstacle detection based on point cloud data captured by LIDAR. Features were extracted using a cumulative approach, meaning that the information associated with each feature was updated at every timeframe as new LIDAR data arrived. It was concluded that an increase in the density of understory vegetation adversely affected the classification performance in correctly detecting solid obstacles. Additionally, a regression-based framework was proposed for estimating understory vegetation density for safe path planning, in which the traversability risk level was treated as a function of the estimated density: the denser the predicted vegetation in an area, the higher the risk of collision if the AGV traversed through that area. Finally, for the trees in the terrain, the dissertation investigated statistical features that can be used in machine learning algorithms to differentiate trees from solid obstacles in the context of forested off-road scenes. Using the proposed features, the classification algorithm was able to generate high-precision results for differentiating trees from solid obstacles. Such differentiation can result in more optimized path planning in off-road applications.
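    The cumulative feature idea can be sketched as follows: per-cell statistics are updated every time a new LIDAR frame arrives, and the accumulated feature vector is then passed to a classifier. The grid-cell statistics, the random-forest classifier, and the placeholder data are assumptions for illustration, not the dissertation's exact feature set.

```python
# Sketch of cumulative per-cell features from streaming LIDAR frames,
# followed by obstacle classification (assumed feature set and classifier).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class CellAccumulator:
    """Running height statistics for one grid cell, updated at each timeframe."""
    def __init__(self):
        self.n, self.sum_z, self.sum_z2, self.max_z = 0, 0.0, 0.0, -np.inf

    def update(self, z_values):
        self.n += len(z_values)
        self.sum_z += z_values.sum()
        self.sum_z2 += (z_values ** 2).sum()
        self.max_z = max(self.max_z, z_values.max())

    def features(self):
        mean = self.sum_z / self.n
        var = self.sum_z2 / self.n - mean ** 2
        return np.array([self.n, mean, var, self.max_z])

# Accumulate two frames of points falling into the same cell, then classify.
cell = CellAccumulator()
for frame_z in [np.random.rand(50), np.random.rand(80)]:    # placeholder height values
    cell.update(frame_z)

clf = RandomForestClassifier().fit(np.random.rand(100, 4),  # placeholder training data
                                   np.random.randint(2, size=100))
is_obstacle = clf.predict(cell.features().reshape(1, -1))
```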

    Contrastive Label Disambiguation for Self-Supervised Terrain Traversability Learning in Off-Road Environments

    Discriminating the traversability of terrains is a crucial task for autonomous driving in off-road environments. However, it is challenging due to the diverse, ambiguous, and platform-specific nature of off-road traversability. In this paper, we propose a novel self-supervised terrain traversability learning framework, utilizing a contrastive label disambiguation mechanism. First, weakly labeled training samples with pseudo labels are automatically generated by projecting actual driving experiences onto the terrain models constructed in real time. Subsequently, a prototype-based contrastive representation learning method is designed to learn distinguishable embeddings, facilitating the self-supervised updating of those pseudo labels. Through the iterative interaction between representation learning and pseudo-label updating, the ambiguities in the pseudo labels are gradually eliminated, enabling the learning of platform-specific and task-specific traversability without any human-provided annotations. Experimental results on the RELLIS-3D dataset and our Gobi Desert driving dataset demonstrate the effectiveness of the proposed method. (9 pages, 11 figures)
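    The disambiguation mechanism can be illustrated roughly as follows: embeddings are compared against per-class prototypes, the soft pseudo labels are nudged toward the nearest prototype, and the prototypes themselves are updated with momentum. The cosine-similarity assignment and the mixing and momentum coefficients are assumptions for illustration, not the paper's exact losses.

```python
# Rough sketch of prototype-based pseudo-label updating (assumed details).
import torch
import torch.nn.functional as F

def update_pseudo_labels(embeddings, pseudo_labels, prototypes, momentum=0.99, mix=0.9):
    """embeddings: (N, D); pseudo_labels: (N, C) soft labels; prototypes: (C, D)."""
    emb = F.normalize(embeddings, dim=1)
    protos = F.normalize(prototypes, dim=1)
    sim = emb @ protos.t()                                   # cosine similarity to each prototype
    assign = F.one_hot(sim.argmax(dim=1), pseudo_labels.size(1)).float()
    # Move the (possibly ambiguous) pseudo labels toward the prototype assignment.
    new_labels = mix * pseudo_labels + (1 - mix) * assign
    # Momentum update of the prototypes from the current batch.
    for c in range(prototypes.size(0)):
        mask = assign[:, c].bool()
        if mask.any():
            prototypes[c] = momentum * prototypes[c] + (1 - momentum) * emb[mask].mean(dim=0)
    return new_labels, prototypes

emb = torch.randn(32, 128)                # embeddings of terrain patches (placeholder)
labels = torch.full((32, 2), 0.5)         # ambiguous two-class pseudo labels
protos = torch.randn(2, 128)
labels, protos = update_pseudo_labels(emb, labels, protos)
```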

    Terrain Aware Traverse Planning for Mars Rovers

    NASA is proposing a Mars Sample Return mission, to be completed within one Martian year, that will require enhanced autonomy to perform its duties faster, safer, and more efficiently. With its main purpose being to retrieve samples possibly tens of kilometers away, the rover will need to drive beyond line-of-sight to reach its target more quickly than any rover before. This research proposes a new methodology to support a sample return mission and is divided into three components: map preparation (a map of traversability, i.e., the ability of a terrain to sustain the traversal of a vehicle), path planning (pre-planning and replanning), and terrain analysis. The first component aims at creating better knowledge of terrain traversability to support planning, by predicting rover slip and drive speed along the traverse using orbital data. By overlapping slope, rock abundance, and terrain type at the same location, the expected drive velocity is obtained. By combining slope and thermal data, additional information about the experienced slip is derived, indicating whether it will be low (less than 30%) or medium to high (more than 30%). The second component involves planning the traverse for one Martian day (or sol) at a time, based on the map of expected drive speed. This research proposes to plan, offline, several paths traversable in one sol. Once online, the rover chooses the fastest option (the path cost being calculated as the distance divided by the expected velocity). During its drive, the rover monitors the terrain by analyzing its experienced wheel slip and actual speed. This information is then propagated along the different pre-planned paths over a given distance (e.g., 25 m), and the map of traversability is locally updated given this new knowledge. When an update occurs, the rover calculates the new time of arrival along the various paths and replans its route if necessary. When tested in a simulation study on maps of the Columbia Hills, Mars, the rover successfully updates the map given new information drawn from a modified map used as ground truth for simulation purposes and replans its traverse when needed. The third component describes a method to assess the soil in situ in case dangerous terrain is detected during the map update, or if the monitoring is not enough to confirm the traversability predicted by the map. The rover would deploy a shear vane instrument to compute intrinsic terrain parameters, information that is then propagated ahead of the rover to update the map and replan if necessary. Experiments in a laboratory setting as well as in the field showed promising results, with the mounted shear vane giving values close to the expected terrain parameters of the tested soils.
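    The path-selection rule described in the second component can be sketched in a few lines: each pre-planned path is scored by its expected traverse time (distance divided by expected speed per cell), the rover picks the fastest, and it re-evaluates whenever the speed map is locally updated from measured slip. The grid, speeds, and candidate paths below are placeholders, not values from the study.

```python
# Sketch of traverse-time scoring and replanning on an expected-speed map
# (illustrative numbers and map layout).
import numpy as np

def traverse_time(path, speed_map, cell_size=1.0):
    """path: list of (row, col) cells; speed_map: expected drive speed (m/s) per cell."""
    return sum(cell_size / speed_map[r, c] for r, c in path)

speed_map = np.full((10, 10), 0.04)              # nominal expected speed (placeholder)
paths = {
    "A": [(0, c) for c in range(10)],            # pre-planned candidate paths for this sol
    "B": [(r, r) for r in range(10)],
}
best = min(paths, key=lambda k: traverse_time(paths[k], speed_map))

# Local map update: measured wheel slip suggests slower driving along part of path A.
speed_map[0, 5:] = 0.01
times = {k: traverse_time(p, speed_map) for k, p in paths.items()}
replan_needed = min(times, key=times.get) != best
```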

    Visual Prediction of Rover Slip: Learning Algorithms and Field Experiments

    Perception of the surrounding environment is an essential tool for intelligent navigation in any autonomous vehicle. In the context of Mars exploration, there is a strong motivation to enhance the perception of the rovers beyond geometry-based obstacle avoidance, so as to be able to predict potential interactions with the terrain. In this thesis we propose to remotely predict the amount of slip, which reflects the mobility of the vehicle on future terrain. The method is based on learning from experience and uses visual information from stereo imagery as input. We test the algorithm on several robot platforms and in different terrains. We also demonstrate its usefulness in an integrated system, onboard a Mars prototype rover in the JPL Mars Yard. Another desirable capability for an autonomous robot is to be able to learn about its interactions with the environment in a fully automatic fashion. We propose an algorithm which uses the robot's sensors as supervision for vision-based learning of different terrain types. This algorithm can work with noisy and ambiguous signals provided by onboard sensors. To be able to cope with rich, high-dimensional visual representations we propose a novel, nonlinear dimensionality reduction technique which exploits automatic supervision. The method is the first to consider supervised nonlinear dimensionality reduction in a probabilistic framework using supervision which can be noisy or ambiguous. Finally, we consider the problem of learning to recognize different terrains, which addresses the time constraints of an onboard autonomous system. We propose a method which automatically learns a variable-length feature representation depending on the complexity of the classification task. The proposed approach achieves a good trade-off between decrease in computational time and recognition performance.
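    As a rough illustration of learning slip from experience, the sketch below fits a regressor from terrain features (appearance statistics plus slope) to the slip measured while driving, and then queries it for terrain seen only remotely. The feature set, the gradient-boosting regressor, and the synthetic data are placeholders and do not reproduce the thesis's learning algorithms.

```python
# Hedged sketch of slip prediction from visual/geometric terrain features.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Training data gathered from experience: features of traversed patches
# (e.g. texture statistics plus slope) and the slip measured while driving.
X = np.random.rand(300, 6)                                   # placeholder features
slip = np.clip(0.8 * X[:, -1] + 0.1 * np.random.randn(300), 0.0, 1.0)

model = GradientBoostingRegressor().fit(X, slip)

# Predict slip for a patch of terrain ahead, observed only through stereo imagery.
ahead_patch = np.random.rand(1, 6)
predicted_slip = model.predict(ahead_patch)[0]
```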

    Study of Mobile Robot Operations Related to Lunar Exploration

    Mobile robots extend the reach of exploration in environments unsuitable, or unreachable, by humans. Far-reaching environments, such as the lunar south pole, exhibit lighting conditions that are challenging for the optical imagery required for mobile robot navigation. Terrain conditions also impact the operation of mobile robots; distinguishing terrain types prior to physical contact can improve hazard avoidance. This thesis presents the conclusions of a trade-off analysis that uses the results from two studies related to operating mobile robots at the lunar south pole. The lunar south pole presents engineering design challenges for both tele-operation and lidar-based autonomous navigation in the context of a near-term, low-cost, short-duration lunar prospecting mission. The conclusion is that direct-drive tele-operation may result in improved science data return. The first study demonstrates that lidar reflectance intensity and near-infrared spectroscopy can improve terrain classification over optical imagery alone. Two classification techniques, Naive Bayes and multi-class SVM, were compared for classification errors. Eight terrain types, including aggregate, loose sand, and compacted sand, are classified using wavelet-transformed optical images and statistical values of lidar reflectance intensity. The addition of lidar reflectance intensity was shown to reduce classification errors for both classifiers. Four types of aggregate material are classified using statistical values of spectral reflectance. The addition of spectral reflectance was shown to reduce classification errors for both classifiers. The second study examines human performance in tele-operating a mobile robot over a time delay and in lighting conditions analogous to the lunar south pole. Round-trip time delay between operator and mobile robot leads to an increase in the time needed to turn the mobile robot around obstacles or corners, as operators tend to adopt a 'wait and see' approach. A study of completion time for a cornering task through varying corridor widths shows that time-delayed performance fits a previously established cornering law, and that varying lighting conditions did not adversely affect human performance. The results of the cornering law are interpreted to quantify the additional time required to negotiate a corner under differing conditions, and this increase in time can be used predictively when operating a mobile robot through a driving circuit.
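    The feature fusion in the first study can be sketched as follows: wavelet statistics from optical image patches are concatenated with statistics of lidar reflectance intensity, and the two classifiers are compared on the fused features. The Haar wavelet, the particular statistics, and the synthetic data are assumptions for illustration, not the thesis's exact feature design.

```python
# Sketch of fusing wavelet-transformed image features with lidar intensity
# statistics, compared across Naive Bayes and a multi-class SVM (assumed details).
import numpy as np
import pywt
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

def image_wavelet_features(patch):
    """Mean absolute detail coefficients of a single-level 2-D wavelet transform."""
    _, (cH, cV, cD) = pywt.dwt2(patch, "haar")
    return np.array([np.mean(np.abs(band)) for band in (cH, cV, cD)])

def lidar_intensity_features(intensities):
    return np.array([intensities.mean(), intensities.std(), intensities.max()])

# Placeholder dataset: eight terrain classes with fused optical + lidar features.
X = np.vstack([
    np.hstack([image_wavelet_features(np.random.rand(32, 32)),
               lidar_intensity_features(np.random.rand(200))])
    for _ in range(160)
])
y = np.repeat(np.arange(8), 20)

# Compare the two classifiers on the fused features (training error shown only
# as a placeholder; the thesis reports proper classification-error comparisons).
errors = {type(clf).__name__: (clf.fit(X, y).predict(X) != y).mean()
          for clf in (GaussianNB(), SVC(kernel="rbf"))}
```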

    Adaptive Multi-sensor Perception for Driving Automation in Outdoor Contexts

    In this research, adaptive perception for driving automation is discussed so as to enable a vehicle to automatically detect driveable areas and obstacles in the scene. It is especially designed for outdoor contexts where conventional perception systems that rely on a priori knowledge of the terrain's geometric properties, appearance properties, or both, are prone to fail due to the variability in terrain properties and environmental conditions. In contrast, the proposed framework uses a self-learning approach to build a model of the ground class that is continuously adjusted online to reflect the latest ground appearance. The system also features high flexibility, as it can work using a single sensor modality or a multi-sensor combination. In the context of this research, different embodiments have been demonstrated using range data coming from either a radar or a stereo camera, and adopting self-supervised strategies in which monocular vision is automatically trained by radar or stereo vision. A comprehensive set of experimental results, obtained with different ground vehicles operating in the field, is presented to validate and assess the performance of the system.
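    A minimal sketch of the self-learning ground model is given below, assuming a running Gaussian model of ground appearance: range data (stereo or radar) labels a region in front of the vehicle as ground, a color model of that region is refreshed every frame, and new pixels are classified by their Mahalanobis distance to the model. The Gaussian model, forgetting factor, and threshold are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of an online, self-supervised ground-appearance model (assumed details).
import numpy as np

class GroundModel:
    """Running Gaussian model of ground color, refreshed from range-labeled pixels."""
    def __init__(self, alpha=0.1):
        self.alpha, self.mean, self.cov = alpha, None, None

    def update(self, ground_pixels):
        # ground_pixels: (N, 3) RGB values of cells labeled as ground by radar/stereo.
        mean, cov = ground_pixels.mean(axis=0), np.cov(ground_pixels.T) + 1e-6 * np.eye(3)
        if self.mean is None:
            self.mean, self.cov = mean, cov
        else:                                    # exponential forgetting tracks appearance drift
            self.mean = (1 - self.alpha) * self.mean + self.alpha * mean
            self.cov = (1 - self.alpha) * self.cov + self.alpha * cov

    def is_ground(self, pixels, thresh=3.0):
        d = pixels - self.mean
        m2 = np.einsum("ij,jk,ik->i", d, np.linalg.inv(self.cov), d)
        return m2 < thresh ** 2                  # squared Mahalanobis distance test

model = GroundModel()
model.update(np.random.rand(500, 3))             # pixels under the range-labeled ground patch
ground_mask = model.is_ground(np.random.rand(1000, 3))   # classify the rest of the image
```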