40 research outputs found

    Terrain-Dependent Slip Risk Prediction for Planetary Exploration Rovers

    Get PDF
    Wheel slip prediction on rough terrain is crucial for the safe, long-term operation of planetary exploration rovers. Although rough, unstructured terrain hampers mobility, predicting slip by modeling wheel–terrain interactions remains difficult owing to uncertain terrain conditions and the complexity of terramechanics models. This study proposes a vision-based machine learning approach that predicts wheel slip risk by estimating slope from 3D information and classifying terrain type from image information. The method accounts for slope estimation accuracy in risk prediction, since wheel slip increases sharply on inclined ground. Experimental results obtained with a rover testbed on several terrain types validate the method.
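
    A minimal sketch of the risk computation in Python may help fix ideas. The per-terrain exponential slip curve, its coefficients, the uncertainty inflation, and the risk threshold below are all illustrative assumptions, not the paper's model; only the overall structure (terrain class plus slope estimate, with slope estimation accuracy taken into account) follows the described approach.

    import numpy as np

    # Hypothetical slip-curve coefficients (a, b) per terrain class, standing in
    # for experimentally fitted curves: slip = a * exp(b * slope_deg).
    SLIP_MODELS = {"sand": (0.05, 0.18), "gravel": (0.03, 0.12), "bedrock": (0.01, 0.08)}

    def predict_slip(terrain: str, slope_deg: float) -> float:
        """Predict a slip ratio in [0, 1] from terrain class and slope estimate."""
        a, b = SLIP_MODELS[terrain]
        return min(1.0, a * np.exp(b * slope_deg))

    def slip_risk(terrain: str, slope_deg: float, slope_sigma_deg: float,
                  threshold: float = 0.6) -> str:
        """Classify risk, inflating the slope by its estimation uncertainty so a
        slope error cannot hide the sharp slip increase on inclined ground."""
        worst_case = predict_slip(terrain, slope_deg + 2.0 * slope_sigma_deg)
        return "high" if worst_case >= threshold else "low"

    print(predict_slip("sand", 15.0))    # nominal slip estimate
    print(slip_risk("sand", 15.0, 2.0))  # risk once slope uncertainty is included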

    Learning to visually predict terrain properties for planetary rovers

    Get PDF
    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2009. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 174-180).

    For future planetary exploration missions, improvements in autonomous rover mobility have the potential to increase scientific data return by providing safe access to geologically interesting sites that lie in rugged terrain, far from landing areas. This thesis presents an algorithmic framework designed to improve rover-based terrain sensing, a critical component of any autonomous mobility system operating in rough terrain. Specifically, it addresses the problem of predicting the mechanical properties of distant terrain. A self-supervised learning framework is proposed that enables a robotic system to learn to predict the mechanical properties of distant terrain from measurements of the mechanical properties of similar terrain it has previously traversed.

    The framework relies on three distinct algorithms. A mechanical terrain characterization algorithm computes upper and lower bounds on the net traction force available at a patch of terrain via a constrained optimization framework, employing both model-based and sensor-based constraints. A terrain classification method exploits features from proprioceptive sensor data and employs either a supervised support vector machine (SVM) or an unsupervised k-means classifier to assign class labels to terrain patches the rover has traversed. A second terrain classification method exploits features from exteroceptive sensor data (e.g., color and texture) and is trained automatically in a self-supervised manner from the outputs of the proprioceptive terrain classifier; it includes a method for distinguishing novel terrain from previously observed terrain. The outputs of these three algorithms are merged to yield a map of the surrounding terrain annotated with the expected achievable net traction force. Such a map would be useful for path planning purposes.

    The algorithms proposed in this thesis have been experimentally validated in an outdoor, Mars-analog environment. The proprioceptive terrain classifier demonstrated 92% accuracy in labeling three distinct terrain classes. The self-supervised exteroceptive terrain classifier was shown to be approximately as accurate as a similar, human-supervised classifier, with both achieving 94% correct classification rates on identical data sets. The novel-terrain detection algorithm demonstrated 89% accuracy in the same environment. In laboratory tests, the mechanical terrain characterization algorithm predicted the lower bound of the net available traction force with an average margin of 21% of the wheel load. By Christopher A. Brooks, Ph.D.
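
    Since the thesis centers on a self-supervised training loop (a proprioceptive classifier labels traversed terrain, and those labels train a visual classifier for distant terrain), a compact sketch using scikit-learn stand-ins is given below. The feature dimensions, random data, and the specific pairing of k-means with an SVM are assumptions for illustration, not the thesis's implementation.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_patches = 200
    proprio = rng.normal(size=(n_patches, 6))   # e.g. vibration/torque statistics
    visual = rng.normal(size=(n_patches, 12))   # e.g. color and texture descriptors

    # 1) Unsupervised proprioceptive classifier assigns terrain-class labels to
    #    the patches the rover has already traversed.
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(proprio)

    # 2) Those labels supervise an exteroceptive (visual) classifier, which can
    #    then predict terrain class for distant, not-yet-traversed patches.
    visual_clf = SVC(kernel="rbf").fit(visual, labels)
    distant_patch = rng.normal(size=(1, 12))
    print("predicted terrain class:", visual_clf.predict(distant_patch)[0])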

    Adapting Monte Carlo Localization to Utilize Floor and Wall Texture Data

    Get PDF
    Monte Carlo Localization (MCL) is an algorithm that allows a robot to determine its location when provided a map of its surroundings. Particles, consisting of a location and an orientation, represent possible positions where the robot could be on the map. The probability of the robot being at each particle is calculated based on sensor input. Traditionally, MCL only utilizes the position of objects for localization. This thesis explores using wall and floor surface textures to help the algorithm determine locations more accurately. Wall textures are captured by using a laser range finder to detect patterns in the surface. Floor textures are determined by using an inertial measurement unit (IMU) to capture acceleration vectors which represent the roughness of the floor. Captured texture data is classified by an artificial neural network and used in probability calculations. The best variations of Texture MCL improved accuracy by 19.1% and 25.1% when all particles and the top fifty particles respectively were used to calculate the robot's estimated position. All implementations achieved comparable performance speeds when run in real-time on-board a robot.
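
    As a concrete illustration, a texture-aware particle weight update might look like the sketch below, assuming the map stores an expected texture class per location and the neural network outputs a probability distribution over texture classes; the Gaussian range model and the 0.7/0.3 mixture weights are illustrative, not the thesis's tuned values.

    import numpy as np

    def weight_particle(expected_range, measured_range, range_sigma,
                        map_texture_class, texture_probs):
        # Classical MCL term: Gaussian likelihood of the range measurement
        # given the range the map predicts for this particle's pose.
        p_range = np.exp(-0.5 * ((measured_range - expected_range) / range_sigma) ** 2)
        # Texture term: probability the classifier assigns to the texture class
        # the map predicts at this particle's pose.
        p_texture = texture_probs[map_texture_class]
        return 0.7 * p_range + 0.3 * p_texture

    # One particle: the map expects texture class 1 (say, carpet) here, and the
    # classifier output over {tile, carpet, concrete} came from IMU/laser features.
    print(weight_particle(2.0, 2.1, 0.1, 1, np.array([0.1, 0.8, 0.1])))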

    Enhancing Rover Teleoperation on the Moon With Proprioceptive Sensors and Machine Learning Techniques

    Get PDF
    Geological formations, environmental conditions, and soil mechanics frequently generate undesired effects on rovers' mobility, such as slippage or sinkage. Underestimating these effects may compromise a rover's operation and lead to a premature end of the mission, so minimizing mobility risks becomes a priority for colonising the Moon and Mars. However, this challenge cannot be addressed in the same way for every celestial body, since the control strategies may differ; e.g., the low-latency Earth–Moon communication allows constant monitoring and control, something not feasible on Mars. This letter proposes a Hazard Information System (HIS) that estimates the rover's mobility risks (e.g., slippage) using proprioceptive sensors and machine learning (supervised and unsupervised). A graphical user interface was created to assist human teleoperation by presenting mobility risk indicators. The system was developed and evaluated in the lunar analogue facility (LunaLab) at the University of Luxembourg, with experiments involving a real rover and eight participants. Results demonstrate the benefits of the HIS to operators' decision-making when responding to hazardous situations.
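
    As an illustration of the kind of proprioceptive indicator the HIS could present, the sketch below computes a wheel slip ratio from commanded wheel speed versus measured body speed and maps it to a traffic-light indicator; the sensor inputs and thresholds are assumptions, not the LunaLab system's actual values.

    def slip_ratio(wheel_angular_vel: float, wheel_radius: float,
                   body_speed: float) -> float:
        """Slip ratio in [0, 1]: 0 means no slip, 1 means spinning in place."""
        commanded = wheel_angular_vel * wheel_radius  # expected ground speed
        if commanded <= 0.0:
            return 0.0
        return max(0.0, 1.0 - body_speed / commanded)

    def risk_indicator(slip: float) -> str:
        """Map slip to a traffic-light style indicator for the operator GUI."""
        if slip < 0.3:
            return "green"
        if slip < 0.6:
            return "yellow"
        return "red"

    # 0.3 m/s commanded but only 0.12 m/s achieved -> slip 0.6 -> "red".
    print(risk_indicator(slip_ratio(2.0, 0.15, 0.12)))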

    Road terrain type classification based on laser measurement system data

    Full text link
    For road vehicles, knowledge of terrain type is useful for improving passenger safety and comfort. Conventional methods are susceptible to vehicle speed variations; in this paper we present a method that uses Laser Measurement System (LMS) data for speed-independent road type classification. Experiments were carried out with an instrumented road vehicle (CRUISE), manually driven over a variety of road terrain types, namely asphalt, concrete, grass, and gravel, at different speeds. A downward-looking LMS is used to capture the terrain data. The range data captures structural differences, while the remission values reveal anomalies in surface reflectance properties. Both measurements are combined in a support vector machine (SVM) classifier to achieve an average accuracy of 95% across the different road types.
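
    A sketch of this pipeline under stated assumptions follows: per-scan roughness from detrended range data plus remission statistics, fed to an SVM. The toy feature values and training rows are invented for illustration; the actual feature set and training data are the paper's own.

    import numpy as np
    from sklearn.svm import SVC

    def scan_features(ranges: np.ndarray, remission: np.ndarray) -> np.ndarray:
        # Structural cue: residual roughness after removing the linear trend of
        # the road profile from the range readings.
        x = np.arange(len(ranges))
        trend = np.polyval(np.polyfit(x, ranges, 1), x)
        roughness = np.std(ranges - trend)
        # Reflectance cue: remission statistics capture surface-material anomalies.
        return np.array([roughness, remission.mean(), remission.std()])

    # Toy training set: rows of [roughness, mean remission, std remission].
    X = np.array([[0.002, 0.55, 0.02],   # asphalt-like
                  [0.015, 0.70, 0.10],   # gravel-like
                  [0.030, 0.40, 0.15]])  # grass-like
    y = ["asphalt", "gravel", "grass"]
    clf = SVC(kernel="rbf").fit(X, y)

    # Classify a synthetic scan whose statistics resemble the gravel-like row.
    rng = np.random.default_rng(2)
    ranges = np.linspace(2.0, 2.5, 180) + 0.012 * rng.normal(size=180)
    remission = 0.68 + 0.1 * rng.normal(size=180)
    print(clf.predict([scan_features(ranges, remission)]))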

    TrackletMapper: Ground Surface Segmentation and Mapping from Traffic Participant Trajectories

    Full text link
    Robustly classifying ground infrastructure such as roads and street crossings is an essential task for mobile robots operating alongside pedestrians. While many semantic segmentation datasets are available for autonomous vehicles, models trained on such datasets exhibit a large domain gap when deployed on robots operating in pedestrian spaces. Manually annotating images recorded from pedestrian viewpoints is both expensive and time-consuming. To overcome this challenge, we propose TrackletMapper, a framework for annotating ground surface types such as sidewalks, roads, and street crossings from object tracklets without requiring human-annotated data. To this end, we project the robot ego-trajectory and the paths of other traffic participants into the ego-view camera images, creating sparse semantic annotations for multiple types of ground surfaces from which a ground segmentation model can be trained. We further show that the model can be self-distilled for additional performance benefits by aggregating a ground surface map and projecting it into the camera images, creating a denser set of training annotations than the sparse tracklet annotations. We qualitatively and quantitatively validate our findings on a novel large-scale dataset for mobile robots operating in pedestrian areas. Code and dataset will be made available at http://trackletmapper.cs.uni-freiburg.de. Comment: 19 pages, 14 figures, CoRL 2022.
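
    The core annotation step (projecting traffic-participant paths into the ego camera to stamp sparse ground-surface labels) can be sketched as below, assuming points already lie in the camera frame and a simple pinhole model; the intrinsics and label ids are illustrative, not the paper's.

    import numpy as np

    K = np.array([[500.0, 0.0, 320.0],   # assumed pinhole intrinsics
                  [0.0, 500.0, 240.0],
                  [0.0, 0.0, 1.0]])

    def stamp_labels(points_cam: np.ndarray, label: int, mask: np.ndarray) -> None:
        """Project Nx3 camera-frame points and write `label` into the mask."""
        pts = points_cam[points_cam[:, 2] > 0]       # keep points in front of camera
        uvw = (K @ pts.T).T
        uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)  # perspective divide
        h, w = mask.shape
        ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        mask[uv[ok, 1], uv[ok, 0]] = label

    mask = np.zeros((480, 640), dtype=np.uint8)      # 0 = unlabeled
    car_tracklet = np.array([[0.5, 1.5, 4.0], [0.6, 1.5, 6.0], [0.7, 1.5, 8.0]])
    stamp_labels(car_tracklet, 1, mask)              # 1 = road, where a car drove
    print(int((mask == 1).sum()), "sparse road labels stamped")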

    Lidar-based Obstacle Detection and Recognition for Autonomous Agricultural Vehicles

    Get PDF
    Today, agricultural vehicles are available that can drive autonomously and follow exact route plans more precisely than human operators. Combined with advancements in precision agriculture, autonomous agricultural robots can reduce manual labor, improve workflow, and optimize yield. However, as of today, human operators are still required for monitoring the environment and acting upon potential obstacles in front of the vehicle. To eliminate this need, safety must be ensured by accurate and reliable obstacle detection and avoidance systems.

    In this thesis, lidar-based obstacle detection and recognition in agricultural environments has been investigated. A rotating multi-beam lidar generating 3D point clouds was used for point-wise classification of agricultural scenes, while multi-modal fusion with cameras and radar was used to increase performance and robustness. Two research perception platforms were presented and used for data acquisition. The proposed methods were all evaluated on recorded datasets that represented a wide range of realistic agricultural environments and included both static and dynamic obstacles.

    For 3D point cloud classification, two methods were proposed for handling density variations during feature extraction. One method outperformed a frequently used generic 3D feature descriptor, whereas the other method showed promising preliminary results using deep learning on 2D range images. For multi-modal fusion, four methods were proposed for combining lidar with color camera, thermal camera, and radar. Gradual improvements in classification accuracy were seen, as spatial, temporal, and multi-modal relationships were introduced in the models. Finally, occupancy grid mapping was used to fuse and map detections globally, and runtime obstacle detection was applied on mapped detections along the vehicle path, thus simulating an actual traversal.

    The proposed methods serve as a first step towards full autonomy for agricultural vehicles. The study has thus shown that recent advancements in autonomous driving can be transferred to the agricultural domain, when accurate distinctions are made between obstacles and processable vegetation. Future research in the domain has further been facilitated with the release of the multi-modal obstacle dataset, FieldSAFE.
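
    One step described above, turning the rotating multi-beam lidar's 3D point cloud into a 2D range image so that 2D deep learning can be applied, can be sketched as follows; the beam count, field of view, and resolution are assumptions, not the thesis's sensor parameters.

    import numpy as np

    def to_range_image(points: np.ndarray, n_beams: int = 16, n_cols: int = 360,
                       fov_up_deg: float = 15.0, fov_down_deg: float = -15.0) -> np.ndarray:
        """Map Nx3 lidar points (x, y, z) to an n_beams x n_cols range image."""
        r = np.linalg.norm(points, axis=1)
        yaw = np.arctan2(points[:, 1], points[:, 0])   # azimuth -> column index
        pitch = np.arcsin(np.clip(points[:, 2] / np.maximum(r, 1e-9), -1.0, 1.0))
        col = ((yaw + np.pi) / (2 * np.pi) * n_cols).astype(int) % n_cols
        fov = np.radians(fov_up_deg - fov_down_deg)
        row = ((np.radians(fov_up_deg) - pitch) / fov * n_beams).astype(int)
        img = np.zeros((n_beams, n_cols))
        ok = (row >= 0) & (row < n_beams)              # drop points outside the FOV
        img[row[ok], col[ok]] = r[ok]                  # range becomes the pixel value
        return img

    cloud = np.random.default_rng(1).normal(size=(1000, 3)) * [10.0, 10.0, 1.0]
    print(to_range_image(cloud).shape)  # (16, 360), ready for a 2D classifier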

    Active Exploration Using Gaussian Random Fields and Gaussian Process Implicit Surfaces

    Full text link
    In this work we study the problem of exploring surfaces and building compact 3D representations of the environment surrounding a robot through active perception. We propose an online probabilistic framework that merges visual and tactile measurements using Gaussian Random Fields and Gaussian Process Implicit Surfaces. The system investigates incomplete point clouds in order to find a small set of regions of interest, which are then physically explored with a robotic arm equipped with tactile sensors. We show experimental results obtained using a PrimeSense camera, a Kinova Jaco2 robotic arm, and Optoforce sensors in different scenarios. We then demonstrate how to use the online framework for object detection and terrain classification. Comment: 8 pages, 6 figures, external contents (https://youtu.be/0-UlFRQT0JI).
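
    To make the Gaussian Process Implicit Surface idea concrete, here is a toy 2D sketch using scikit-learn in place of the paper's implementation: the surface is the zero level set of a GP fitted to on-surface points (value 0) plus interior (-1) and exterior (+1) anchors, and the next tactile target is the most uncertain near-surface query point, a stand-in for the paper's region-of-interest selection.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # Sparse "visual" samples of a unit circle: only half the surface observed,
    # plus interior/exterior anchor points fixing the sign convention.
    theta = np.linspace(0.0, np.pi, 8)
    X = np.vstack([np.c_[np.cos(theta), np.sin(theta)], [[0.0, 0.0]], [[2.0, 2.0]]])
    y = np.r_[np.zeros(8), -1.0, 1.0]
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6).fit(X, y)

    # Query a grid: the implicit surface is the zero level set of the GP mean,
    # and the most uncertain near-zero point is the next place to touch.
    g = np.linspace(-1.5, 1.5, 30)
    grid = np.array([[a, b] for a in g for b in g])
    mean, std = gp.predict(grid, return_std=True)
    near_surface = np.abs(mean) < 0.05
    target = grid[near_surface][np.argmax(std[near_surface])]
    print("next tactile exploration target:", target)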