
    Analysing the impact of learning inputs - Application to terrain traversability estimation

    Data-driven approaches such as Gaussian Process (GP) regression have been used extensively in recent robotics literature to achieve estimation by learning from experience. To ensure satisfactory performance, in most cases, multiple learning inputs are required. Intuitively, adding new inputs can often contribute to better estimation accuracy; however, it may come at the cost of a new sensor, a larger training dataset and/or more complex learning, sometimes for limited benefit. Therefore, it is crucial to have a systematic procedure to determine the actual impact each input has on the estimation performance. To address this issue, in this paper we propose to analyse the impact of each input on the estimate using a variance-based sensitivity analysis method. We propose an approach built on Analysis of Variance (ANOVA) decomposition, which can characterise how the prediction changes as one or more of the inputs change, and also quantify the prediction uncertainty attributed to each of the inputs in the framework of dependent inputs. We apply the proposed approach to a terrain-traversability estimation method we proposed in prior work, which is based on multi-task GP regression, and we validate this implementation experimentally using a rover on a Mars-analogue terrain.
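    A minimal sketch of the variance-based sensitivity idea described above: a Saltelli-style pick-freeze estimate of first-order Sobol indices for a generic GP predictor. The toy data, the scikit-learn GP, and the assumption of independent uniform inputs (which the paper's dependent-input ANOVA framework relaxes) are all illustrative choices, not the authors' implementation.

```python
# Sketch: first-order Sobol indices S_i for a GP predictor. Assumes
# independent, uniformly distributed inputs (the paper handles dependence).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy data: three candidate learning inputs; the third barely matters.
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.01 * X[:, 2]
gp = GaussianProcessRegressor(RBF(length_scale=np.ones(3))).fit(X, y)

def first_order_sobol(predict, dim, n=4096):
    """Saltelli pick-freeze estimate of S_i = Var(E[f|x_i]) / Var(f)."""
    A = rng.uniform(-1, 1, size=(n, dim))
    B = rng.uniform(-1, 1, size=(n, dim))
    fA, fB = predict(A), predict(B)
    total_var = np.var(np.concatenate([fA, fB]))
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # freeze x_i from B, keep the rest from A
        S[i] = np.mean(fB * (predict(ABi) - fA)) / total_var
    return S

print(first_order_sobol(gp.predict, dim=3))  # input 3 should score near zero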

    Bayesian Optimisation for Safe Navigation under Localisation Uncertainty

    In outdoor environments, mobile robots are required to navigate through terrain with varying characteristics, some of which might significantly affect the integrity of the platform. Ideally, the robot should be able to identify areas that are safe for navigation based on its own percepts about the environment while avoiding damage to itself. Bayesian optimisation (BO) has been successfully applied to the task of learning a model of terrain traversability while guiding the robot through more traversable areas. An issue, however, is that localisation uncertainty can end up guiding the robot to unsafe areas and distorting the model being learnt. In this paper, we address this problem and present a novel method that allows BO to consider localisation uncertainty by applying a Gaussian process model for uncertain inputs as a prior. We evaluate the proposed method in simulation and in experiments with a real robot navigating over rough terrain, and compare it against standard BO methods. (To appear in the proceedings of the 18th International Symposium on Robotics Research, ISRR 2017.)
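    A rough sketch of one way localisation uncertainty can enter the acquisition step of BO: the UCB score of each candidate goal is averaged over sampled pose errors. This Monte Carlo averaging is a stand-in for the paper's uncertain-input GP prior; the Matérn surrogate, `sigma_loc`, and the random observations are all assumptions.

```python
# Sketch: BO acquisition averaging UCB over sampled localisation errors.
# A Monte Carlo stand-in for an uncertain-input GP; not the paper's exact prior.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
gp = GaussianProcessRegressor(Matern(nu=2.5), normalize_y=True)

def noisy_input_ucb(gp, X_cand, sigma_loc=0.15, kappa=2.0, n_mc=64):
    """Score candidates by mean UCB under Gaussian pose perturbations."""
    scores = np.zeros(len(X_cand))
    for _ in range(n_mc):
        X_pert = X_cand + rng.normal(0.0, sigma_loc, X_cand.shape)
        mu, sd = gp.predict(X_pert, return_std=True)
        scores += mu + kappa * sd
    return scores / n_mc

# One BO step: fit on visited (estimated) poses, pick the best candidate goal.
X_visited = rng.uniform(0, 10, size=(20, 2))    # estimated rover poses
y_traversability = rng.uniform(0, 1, size=20)   # observed traversability
gp.fit(X_visited, y_traversability)
X_cand = rng.uniform(0, 10, size=(500, 2))
next_goal = X_cand[np.argmax(noisy_input_ucb(gp, X_cand))]
print(next_goal)
```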

    Learning to visually predict terrain properties for planetary rovers

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2009. Includes bibliographical references (p. 174-180). By Christopher A. Brooks.
    For future planetary exploration missions, improvements in autonomous rover mobility have the potential to increase scientific data return by providing safe access to geologically interesting sites that lie in rugged terrain, far from landing areas. This thesis presents an algorithmic framework designed to improve rover-based terrain sensing, a critical component of any autonomous mobility system operating in rough terrain. Specifically, this thesis addresses the problem of predicting the mechanical properties of distant terrain. A self-supervised learning framework is proposed that enables a robotic system to learn predictions of mechanical properties of distant terrain, based on measurements of mechanical properties of similar terrain that has been previously traversed. The proposed framework relies on three distinct algorithms. A mechanical terrain characterization algorithm is proposed that computes upper and lower bounds on the net traction force available at a patch of terrain, via a constrained optimization framework; both model-based and sensor-based constraints are employed. A terrain classification method is proposed that exploits features from proprioceptive sensor data, and employs either a supervised support vector machine (SVM) or an unsupervised k-means classifier to assign class labels to terrain patches that the rover has traversed. A second terrain classification method is proposed that exploits features from exteroceptive sensor data (e.g. color and texture), and is automatically trained in a self-supervised manner, based on the outputs of the proprioceptive terrain classifier. The algorithm includes a method for distinguishing novel terrain from previously observed terrain. The outputs of these three algorithms are merged to yield a map of the surrounding terrain that is annotated with the expected achievable net traction force. Such a map would be useful for path planning purposes. The algorithms proposed in this thesis have been experimentally validated in an outdoor, Mars-analog environment. The proprioceptive terrain classifier demonstrated 92% accuracy in labeling three distinct terrain classes. The exteroceptive terrain classifier that relies on self-supervised training was shown to be approximately as accurate as a similar, human-supervised classifier, with both achieving 94% correct classification rates on identical data sets. The algorithm for detection of novel terrain demonstrated 89% accuracy in detecting novel terrain in this same environment. In laboratory tests, the mechanical terrain characterization algorithm predicted the lower bound of the net available traction force with an average margin of 21% of the wheel load.
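    The self-supervision loop described above (proprioceptive labels training an exteroceptive classifier) can be sketched roughly as follows; the feature dimensions, the random stand-in data, and the use of scikit-learn are illustrative assumptions, not the thesis's implementation.

```python
# Sketch: proprioceptive classes supervise a visual classifier, so distant
# terrain can be labeled from vision alone. Random features are placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
proprio = rng.normal(size=(300, 8))   # e.g. vibration/torque statistics
visual = rng.normal(size=(300, 16))   # e.g. color/texture of the same patches

# Unsupervised labels from what the rover felt while driving...
labels = KMeans(n_clusters=3, n_init=10).fit_predict(proprio)
# ...used as supervision for what the rover sees.
vision_clf = SVC(kernel="rbf").fit(visual, labels)

distant_patches = rng.normal(size=(10, 16))
print(vision_clf.predict(distant_patches))  # classes for unvisited terrain
```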

    Visual Prediction of Rover Slip: Learning Algorithms and Field Experiments

    Perception of the surrounding environment is an essential tool for intelligent navigation in any autonomous vehicle. In the context of Mars exploration, there is a strong motivation to enhance the perception of the rovers beyond geometry-based obstacle avoidance, so as to be able to predict potential interactions with the terrain. In this thesis we propose to remotely predict the amount of slip, which reflects the mobility of the vehicle on future terrain. The method is based on learning from experience and uses visual information from stereo imagery as input. We test the algorithm on several robot platforms and in different terrains. We also demonstrate its usefulness in an integrated system, onboard a Mars prototype rover in the JPL Mars Yard. Another desirable capability for an autonomous robot is to be able to learn about its interactions with the environment in a fully automatic fashion. We propose an algorithm which uses the robot's sensors as supervision for vision-based learning of different terrain types. This algorithm can work with noisy and ambiguous signals provided from onboard sensors. To be able to cope with rich, high-dimensional visual representations we propose a novel, nonlinear dimensionality reduction technique which exploits automatic supervision. The method is the first to consider supervised nonlinear dimensionality reduction in a probabilistic framework using supervision which can be noisy or ambiguous. Finally, we consider the problem of learning to recognize different terrains, which addresses the time constraints of an onboard autonomous system. We propose a method which automatically learns a variable-length feature representation depending on the complexity of the classification task. The proposed approach achieves a good trade-off between decrease in computational time and recognition performance.
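    A loose sketch of the slip-from-appearance idea: learn a nonlinear map from terrain type and slope (both obtainable remotely from stereo imagery) to slip, then query it for distant patches. The random-forest regressor and the synthetic features below stand in for the thesis's per-terrain nonlinear regression models.

```python
# Sketch: predict slip from (terrain class, pitch, roll) observed remotely.
# RandomForestRegressor is a stand-in for the thesis's nonlinear regressors.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
terrain = rng.integers(0, 3, n)      # class from a visual terrain classifier
pitch = rng.uniform(-20, 20, n)      # slope angles from stereo geometry
roll = rng.uniform(-15, 15, n)
# Toy ground truth: sandy terrain (class 2) slips sharply with pitch.
slip = np.clip(0.02 * np.abs(pitch) * (1 + (terrain == 2))
               + 0.01 * np.abs(roll), 0, 1)

X = np.column_stack([terrain, pitch, roll])
model = RandomForestRegressor(n_estimators=100).fit(X, slip)
print(model.predict([[2, 15.0, 0.0]]))  # expected slip on a distant sandy slope
```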

    An Intelligent Architecture for Legged Robot Terrain Classification Using Proprioceptive and Exteroceptive Data

    In this thesis, we introduce a novel architecture called Intelligent Architecture for Legged Robot Terrain Classification Using Proprioceptive and Exteroceptive Data (iARTEC). The proposed architecture integrates different terrain characterization and classification methods with other robotic system components. Within iARTEC, we consider the problem of having a legged robot autonomously learn to identify different terrains. Robust terrain identification can be used to enhance the capabilities of legged robot systems, both in terms of locomotion and navigation. For example, a robot that has learned to differentiate sand from gravel can autonomously modify (or even select a different) path in favor of traversing better terrain. The same knowledge of the terrain type can also be used to guide a robot to avoid specific terrains. To tackle this problem, we developed four approaches for terrain characterization, classification, path planning, and control for a mobile legged robot. We developed a particle-system-inspired approach to estimate the robot foot-ground contact interaction forces. The approach is derived from the well-known Bekker theory, estimating the contact forces based on its point-contact-model concepts, and realistically models real-time 3-dimensional contact behaviors between rigid-body objects and the soil. For a real-time capable implementation, the approach is reformulated to use a lookup table generated from simple contact experiments of the robot foot with the terrain. We also introduce a short-range terrain classifier using the robot's embodied data. The classifier is based on a supervised machine learning approach to optimize the classifier parameters and train it using proprioceptive sensor measurements. The learning framework preprocesses sensor data through channel reduction and filtering, such that the classifier is trained on feature vectors that are closely associated with the terrain class. For long-range terrain type prediction using the robot's exteroceptive data, we present an online visual terrain classification system. It uses only a monocular camera with a feature-based terrain classification algorithm that is robust to changes in illumination and viewpoint. For this algorithm, we extract local features of terrains using Speeded-Up Robust Features (SURF), encode the features using the Bag of Words (BoW) technique, and then classify the words using Support Vector Machines (SVMs). In addition, we describe a terrain-dependent navigation and path planning approach that is based on the E* planner and employs a proposed metric that specifies the navigation costs associated with terrain types. The generated path naturally avoids obstacles and favors terrains with lower values of the metric. At the low level, a proportional input-scaling controller is designed and implemented to autonomously steer the robot to follow the desired path in a stable manner. iARTEC's performance was tested and validated experimentally using several different sensing modalities (proprioceptive and exteroceptive) on the six-legged robotic platform CREX. The results show that the proposed architecture, integrating the aforementioned approaches with the robotic system, allowed the robot to learn both robot-terrain interaction and remote terrain perception models, as well as the relations linking those models. This learning mechanism is performed according to the robot's own embodied data.
    Based on the knowledge available, the approach makes use of the detected remote terrain classes to predict the most probable navigation behavior. With the assigned metric, the performance of the robot on a given terrain is predicted. This allows the navigation of the robot to be influenced by the learned models. Finally, we believe that iARTEC and the methods proposed in this thesis could also be implemented on other robot types (such as wheeled robots), although we did not test this option in our work.
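    The long-range visual pipeline (local features → Bag of Words → SVM) might look roughly like the sketch below. Note that SURF requires an opencv-contrib build with non-free modules, so ORB is substituted as a free stand-in; the vocabulary size and the image/label inputs are placeholders.

```python
# Sketch: feature -> Bag-of-Words -> SVM terrain classifier. ORB substitutes
# for SURF (which needs opencv-contrib non-free); K and data are placeholders.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

detector = cv2.ORB_create()  # stand-in for cv2.xfeatures2d.SURF_create()
K = 50                       # visual vocabulary size (assumed)

def descriptors(gray_img):
    _, des = detector.detectAndCompute(gray_img, None)
    return (des.astype(np.float32) if des is not None
            else np.empty((0, 32), np.float32))

def bow_histogram(des, vocab):
    words = vocab.predict(des) if len(des) else np.array([], int)
    hist = np.bincount(words, minlength=K).astype(float)
    return hist / max(hist.sum(), 1.0)

# train_imgs: grayscale terrain patches; train_labels: terrain class per patch
def train(train_imgs, train_labels):
    all_des = np.vstack([descriptors(im) for im in train_imgs])
    vocab = KMeans(n_clusters=K, n_init=4).fit(all_des)   # learn visual words
    X = np.array([bow_histogram(descriptors(im), vocab) for im in train_imgs])
    return vocab, SVC(kernel="rbf").fit(X, train_labels)
```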

    An embarrassingly simple approach for visual navigation of forest environments

    Navigation in forest environments is a challenging and open problem in the area of field robotics. Rovers in forest environments are required to infer the traversability of a priori unknown terrains, comprising a number of different types of compliant and rigid obstacles, under varying lighting and weather conditions. The challenges are further compounded for inexpensive small-sized (portable) rovers. While such rovers may be useful for collaboratively monitoring large tracts of forests as a swarm, with low environmental impact, their small size affords them only a low viewpoint of their proximal terrain. Moreover, their limited view may frequently be partially occluded by compliant obstacles in close proximity, such as shrubs and tall grass. Perhaps consequently, most studies on off-road navigation typically use large-sized rovers equipped with expensive exteroceptive navigation sensors. We design a low-cost navigation system tailored for small-sized forest rovers. For navigation, a lightweight convolutional neural network is used to predict depth images from RGB input images from a low-viewpoint monocular camera. Subsequently, a simple coarse-grained navigation algorithm aggregates the predicted depth information to steer our mobile platform towards open traversable areas in the forest while avoiding obstacles. In this study, the steering commands output by our navigation algorithm direct an operator pushing the mobile platform. Our navigation algorithm has been extensively tested in high-fidelity forest simulations and in field trials. Using no more than a 16 × 16 pixel depth prediction image from a 32 × 32 pixel RGB image, our algorithm running on a Raspberry Pi was able to successfully navigate a total of over 750 m of real-world forest terrain comprising shrubs, dense bushes, tall grass, fallen branches, fallen tree trunks, small ditches and mounds, and standing trees, under five different weather conditions and four different times of day. Furthermore, our algorithm exhibits robustness to changes in the mobile platform's camera pitch angle, motion blur, low lighting at dusk, and high-contrast lighting conditions.
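    The coarse-grained aggregation step could plausibly reduce to a rule like the one below: split the 16 × 16 predicted depth map into vertical thirds and steer toward the most open one. The threshold and the three-way discretisation are assumptions for illustration, not the paper's exact algorithm.

```python
# Sketch: steer toward the most open third of a 16x16 predicted depth map.
# The 0.8 "open enough" threshold is an assumed, tunable parameter.
import numpy as np

def steer(depth_16x16, open_thresh=0.8):
    left, center, right = np.array_split(depth_16x16, 3, axis=1)
    if center.mean() > open_thresh:   # path ahead looks clear: keep going
        return "forward"
    return "left" if left.mean() > right.mean() else "right"

print(steer(np.random.default_rng(0).uniform(0, 1, (16, 16))))
```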

    Computing fast search heuristics for physics-based mobile robot motion planning

    Mobile robots are increasingly being employed to assist responders in search and rescue missions. Robots have to navigate in dangerous areas such as collapsed buildings and hazardous sites, which can be inaccessible to humans. Tele-operating the robots can be stressful for the human operators, who are also overloaded with mission tasks and coordination overhead, so it is important to provide the robot with some degree of autonomy, to lighten the task for the human operator and also to ensure robot safety. Moving robots around requires reasoning, including interpretation of the environment, spatial reasoning, planning of actions (motion), and execution. This is particularly challenging when the environment is unstructured and the terrain is harsh, i.e. not flat and cluttered with obstacles. Approaches reducing the problem to a 2D path planning problem fall short, and many of those that reason about the problem in 3D do not do so in a complete and exhaustive manner. The approach proposed in this thesis is to use rigid body simulation to obtain a more truthful model of reality, i.e. of the interaction between the robot and the environment. Such a simulation obeys the laws of physics and takes into account the geometry of the environment, the geometry of the robot, and any dynamic constraints that may be in place. Physics-based motion planning by itself is highly intractable, due to the computational load required to perform state propagation combined with the exponential blowup of planning; additionally, technical limitations prevent the use of techniques such as state sampling or state steering, which are known to be effective in simpler domains. The proposed solution to this problem is to compute heuristics that can bias the search towards the goal, so as to converge quickly towards the solution. With such a model, the search space is a rich space that contains only states which are physically reachable by the robot, and it also provides enough information about the safety of the robot itself. The overall result is that, by using this framework, the robot engineer has a simpler job of encoding the domain knowledge, which now consists only of providing the robot's geometric model plus any constraints.
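    One way to picture the heuristic-guided, simulation-in-the-loop search is the greedy best-first skeleton below. Here `simulate`, `heuristic`, and `discretize` are caller-supplied stubs standing in for the rigid-body simulator and the thesis's computed heuristics; the skeleton only illustrates how the heuristic biases expansion toward the goal.

```python
# Sketch: best-first search whose successors come from a physics simulator.
# simulate/heuristic/discretize are stubs for the simulator and heuristics.
import heapq
import itertools

def plan(start, goal_test, actions, simulate, heuristic, discretize,
         max_expansions=10_000):
    tie = itertools.count()              # avoids comparing states in the heap
    frontier = [(heuristic(start), next(tie), start, [])]
    visited = set()
    for _ in range(max_expansions):
        if not frontier:
            break
        _, _, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path
        key = discretize(state)          # coarse key for duplicate detection
        if key in visited:
            continue
        visited.add(key)
        for action in actions:
            nxt = simulate(state, action)    # physics-accurate propagation
            if nxt is not None:              # None marks unsafe outcomes
                heapq.heappush(frontier, (heuristic(nxt), next(tie),
                                          nxt, path + [action]))
    return None                              # no plan within the budget
```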

    Safe Robot Planning and Control Using Uncertainty-Aware Deep Learning

    In order for robots to autonomously operate in novel environments over extended periods of time, they must learn and adapt to changes in the dynamics of their motion and the environment. Neural networks have been shown to be a versatile and powerful tool for learning dynamics and semantic information. However, there is reluctance to deploy these methods in safety-critical or high-risk applications, since neural networks tend to be black-box function approximators. Therefore, there is a need to investigate how these machine learning methods can be safely leveraged for learning-based control, planning, and traversability. The aim of this thesis is to explore methods for both establishing safety guarantees and accurately quantifying risks when using deep neural networks for robot planning, especially in high-risk environments. First, we consider uncertainty-aware Bayesian Neural Networks for adaptive control, and introduce a method for guaranteeing safety under certain assumptions. Second, we investigate deep quantile regression learning methods for learning time- and state-varying uncertainties, which we use to perform trajectory optimization with Model Predictive Control. Third, we introduce a complete framework for risk-aware traversability and planning, which we use to enable safe exploration of extreme environments. Fourth, we again leverage deep quantile regression and establish a method for accurately learning the distribution of traversability risks in these environments, which can be used to create safety constraints for planning and control.
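    The deep quantile regression ingredient mentioned above reduces to training with the pinball (quantile) loss; a minimal PyTorch version is sketched below. The 0.9 quantile and the toy tensors are arbitrary illustrative choices, not values from the thesis.

```python
# Sketch: pinball (quantile) loss for learning an upper quantile of risk.
# tau = 0.9 is illustrative; under-prediction is penalised more heavily.
import torch

def pinball_loss(pred, target, tau=0.9):
    err = target - pred
    return torch.mean(torch.maximum(tau * err, (tau - 1.0) * err))

# A network trained with this loss outputs the tau-quantile of the target,
# e.g. a conservative traversability-risk bound usable as a planning constraint.
pred = torch.tensor([0.2, 0.5, 0.7])
target = torch.tensor([0.3, 0.4, 0.9])
print(pinball_loss(pred, target))
```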

    Vision-based terrain classification and classifier fusion for planetary exploration rovers

    Thesis (S.M.), Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2006. Includes bibliographical references (leaves 63-66). By Ibrahim Halatci.
    Autonomous rover operation plays a key role in planetary exploration missions. Rover systems require more and more autonomous capabilities to improve efficiency and robustness. Rover mobility is one of the critical components that can directly affect mission success. Knowledge of the physical properties of the terrain surrounding a planetary exploration rover can be used to allow a rover system to fully exploit its mobility capabilities. Here a study of multi-sensor terrain classification for planetary rovers in Mars and Mars-like environments is presented. Supervised classification algorithms for color, texture, and range features are presented based on mixture-of-Gaussians modeling. Two techniques for merging the results of these "low level" classifiers are presented that rely on Bayesian fusion and meta-classifier fusion. The performance of these algorithms is studied using images from NASA's Mars Exploration Rover mission and through experiments on a four-wheeled test-bed rover operating in Mars-analog terrain. It is shown that accurate terrain classification can be achieved via classifier fusion of visual features.
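    The Bayesian fusion of the "low level" classifiers can be sketched as a product of per-sensor likelihoods under a conditional-independence assumption. The naive-independence form and the toy numbers below are illustrative, not the thesis's exact derivation.

```python
# Sketch: fuse per-sensor class posteriors P(c | sensor_k), assuming the
# sensors are conditionally independent given the terrain class c.
import numpy as np

def bayes_fuse(posteriors, prior):
    log_p = np.log(prior)
    for p in posteriors:
        log_p += np.log(p) - np.log(prior)  # recover per-sensor likelihood ratio
    p = np.exp(log_p - log_p.max())         # renormalise stably
    return p / p.sum()

prior = np.array([1 / 3, 1 / 3, 1 / 3])     # sand, gravel, rock (toy classes)
color = np.array([0.6, 0.3, 0.1])           # P(class | color features)
texture = np.array([0.5, 0.4, 0.1])         # P(class | texture features)
geometry = np.array([0.3, 0.5, 0.2])        # P(class | range features)
print(bayes_fuse([color, texture, geometry], prior))
```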