System of Terrain Analysis, Energy Estimation and Path Planning for Planetary Exploration by Robot Teams
NASA's long-term plans involve a return to manned Moon missions and, eventually, sending humans to Mars. The focus of this project is the use of autonomous mobile robotics to enhance these endeavors. This research details the creation of a system for terrain classification, energy-of-traversal estimation, and low-cost path planning for teams of inexpensive and potentially expendable robots.
The first stage of this project was the creation of a model that estimates the energy required to traverse varying terrain types with a six-wheel rocker-bogie rover. The wheel-soil interaction model uses Shibly's modified Bekker equations and incorporates a new simplified rocker-bogie model for estimating wheel loads. In all but a single trial, the model correctly predicted the relative energy requirements for each soil type.
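The flavor of such an energy model can be illustrated with the classical Bekker pressure-sinkage relations (a sketch only, not Shibly's modified formulation): a rigid wheel of load W sinks to a static depth z0, and the work done compacting soil to that depth yields a compaction resistance, i.e. an energy cost per meter traveled. The soil parameter values below are representative figures, not the thesis's measured ones.

```python
import math

def compaction_energy_per_meter(W, b, D, k_c, k_phi, n):
    """Classical Bekker estimate of the energy per meter lost to soil
    compaction under one rigid wheel.
    W: wheel load [N], b: wheel width [m], D: wheel diameter [m],
    k_c, k_phi, n: Bekker pressure-sinkage soil parameters."""
    k = k_c / b + k_phi                      # combined sinkage modulus
    # Static sinkage of a rigid wheel (Bekker)
    z0 = (3.0 * W / ((3.0 - n) * k * b * math.sqrt(D))) ** (2.0 / (2.0 * n + 1.0))
    # Compaction resistance: work per unit distance compacting soil to depth z0
    R_c = b * k * z0 ** (n + 1.0) / (n + 1.0)
    return R_c                               # [J/m], numerically equal to [N]

# Representative (illustrative) soil parameter sets
sand = dict(k_c=0.99e3, k_phi=1528.43e3, n=1.1)
clay = dict(k_c=13.19e3, k_phi=692.15e3, n=0.5)
e_sand = compaction_energy_per_meter(W=50.0, b=0.1, D=0.2, **sand)
e_clay = compaction_energy_per_meter(W=50.0, b=0.1, D=0.2, **clay)
```

With these parameters the softer sand costs more energy per meter than the firmer clay, which is the kind of relative ranking the model is validated on.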
A complete-coverage path planner intended to minimize energy consumption was designed and tested. It accepts as input terrain maps detailing the energy required to move to each adjacent location. Exploration is performed via a cost function that determines the robot's next move. This system was successfully tested for multiple robots by means of a shared exploration map. At peak efficiency, the energy consumed by our path planner was only 56% of that used by the best-case back-and-forth coverage pattern.
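A minimal sketch of such a cost-function-driven coverage planner, under the assumption that the cost function is simply "cheapest unvisited neighbour" (the thesis's actual cost function is richer): the robot greedily moves to the lowest-energy unvisited neighbour, and when boxed in, walks via breadth-first search to the nearest unvisited cell. Sharing the `visited` set among robots gives the multi-robot extension described above.

```python
from collections import deque

def coverage_path(costs, start):
    """Greedy complete-coverage sketch. `costs[r][c]` is the energy to
    enter grid cell (r, c); returns the visit order and total energy."""
    rows, cols = len(costs), len(costs[0])
    def nbrs(p):
        r, c = p
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= r + dr < rows and 0 <= c + dc < cols:
                yield (r + dr, c + dc)
    visited, path, energy = {start}, [start], 0.0
    while len(visited) < rows * cols:
        cur = path[-1]
        cand = [n for n in nbrs(cur) if n not in visited]
        if cand:
            nxt = min(cand, key=lambda p: costs[p[0]][p[1]])
            visited.add(nxt); path.append(nxt)
            energy += costs[nxt[0]][nxt[1]]
        else:  # boxed in: BFS to the nearest unvisited cell
            prev, q = {cur: None}, deque([cur])
            while q:
                p = q.popleft()
                if p not in visited:
                    leg = []
                    while p != cur:
                        leg.append(p); p = prev[p]
                    for step in reversed(leg):
                        energy += costs[step[0]][step[1]]
                        visited.add(step); path.append(step)
                    break
                for n in nbrs(p):
                    if n not in prev:
                        prev[n] = p; q.append(n)
    return path, energy
```

On a 2x2 map `[[1.0, 2.0], [3.0, 1.0]]` starting at (0, 0), the greedy rule visits all four cells for a total energy of 6.0.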
After performing a sensitivity analysis of Shibly's equations to determine which soil parameters most affected energy consumption, a neural network terrain classifier was designed and tested. The terrain classifier labels all traversable terrain as one of three soil types and then assigns an assumed set of soil parameters. The classifier performed well overall but had some difficulty distinguishing large rocks from sand.
This work presents a system that successfully classifies terrain imagery into one of three soil types, assesses the energy requirements of terrain traversal for these soil types, and plans efficient complete-coverage paths for the imaged area. While further efforts can be made in all areas, the work achieves its stated goals.
Building Fuzzy Elevation Maps from a Ground-based 3D Laser Scan for Outdoor Mobile Robots
Mandow, A.; Cantador, T.J.; Reina, A.J.; Martínez, J.L.; Morales, J.; García-Cerezo, A. "Building Fuzzy Elevation Maps from a Ground-based 3D Laser Scan for Outdoor Mobile Robots," Robot2015: Second Iberian Robotics Conference, Advances in Robotics (2016), Advances in Intelligent Systems and Computing, vol. 418. This is a self-archiving copy of the author's accepted manuscript. The final publication is available at Springer via http://link.springer.com/book/10.1007/978-3-319-27149-1.
The paper addresses terrain modeling for mobile robots with fuzzy elevation maps by improving computational
speed and performance over previous work on fuzzy terrain identification from a three-dimensional (3D) scan. To this end,
spherical sub-sampling of the raw scan is proposed to select training data that does not filter out salient obstacles. In addition,
rule structure is systematically defined by considering triangular sets with an unevenly distributed standard fuzzy partition
and zero order Sugeno-type consequents. This structure, which favors a faster training time and reduces the number of rule
parameters, also serves to compute a fuzzy reliability mask for the continuous fuzzy surface. The paper offers a case study
using a Hokuyo-based 3D rangefinder to model terrain with and without outstanding obstacles. Performance regarding error
and model size is compared favorably with respect to a solution that uses quadric-based surface simplification (QSlim).
This work was partially supported by the Spanish CICYT project DPI 2011-22443, the Andalusian project PE-2010 TEP-6101, and Universidad de Málaga-Andalucía Tech.
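The rule structure above can be sketched in one dimension (the paper's surfaces are two-dimensional): triangular sets at possibly uneven centres form a standard fuzzy partition, so adjacent memberships sum to one, and zero-order Sugeno consequents are just constants, making the output a firing-strength-weighted average with no normalisation needed. The reliability measure here (maximum firing strength) is a simplified stand-in for the paper's fuzzy reliability mask.

```python
def triangular_partition(x, centers):
    """Membership degrees of x in a standard fuzzy partition built from
    triangular sets centred at `centers` (sorted, possibly unevenly spaced).
    Adjacent memberships sum to 1, as a standard partition requires."""
    mu = [0.0] * len(centers)
    if x <= centers[0]:
        mu[0] = 1.0
    elif x >= centers[-1]:
        mu[-1] = 1.0
    else:
        for i in range(len(centers) - 1):
            a, b = centers[i], centers[i + 1]
            if a <= x <= b:
                t = (x - a) / (b - a)
                mu[i], mu[i + 1] = 1.0 - t, t
                break
    return mu

def sugeno_zero_order(x, centers, singletons):
    """Zero-order Sugeno output: constant consequents weighted by firing
    strengths, which already sum to 1 on a standard partition."""
    mu = triangular_partition(x, centers)
    z = sum(m * s for m, s in zip(mu, singletons))
    reliability = max(mu)   # crude per-point reliability proxy
    return z, reliability
```

For centres [0, 1, 3] with consequents [0, 2, 6], the model interpolates piecewise-linearly between the singleton values, which is exactly why this structure trains quickly: only the consequents need fitting.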
Terrain Segmentation and Roughness Estimation using RGB Data: Path Planning Application on the CENTAURO Robot
Robots operating in real-world environments require a high-level perceptual understanding of the chief physical properties of the terrain they are traversing. In unknown environments, roughness is one such important terrain property that could play a key role in devising robot control/planning strategies. In this paper, we present a fast method for predicting pixel-wise terrain labels (stone, sand, road/sidewalk, wood, grass, metal) and estimating roughness, using a single RGB-based deep neural network. Real-world RGB images are used to experimentally validate the presented approach. Furthermore, we demonstrate an application of our proposed method on the centaur-like wheeled-legged robot CENTAURO, by integrating it with a navigation planner that is capable of re-configuring the leg joints to modify the robot footprint polygon for stability purposes or for safe traversal among obstacles.
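How such per-pixel predictions feed a planner can be sketched as follows. The network itself is out of scope here; this toy assumes a per-class roughness lookup (the scores below are invented for illustration, whereas the paper's network regresses roughness directly) and converts a label map into traversal costs, marking overly rough pixels impassable.

```python
# Hypothetical per-class roughness scores in [0, 1] (illustrative only)
ROUGHNESS = {"road": 0.05, "metal": 0.1, "wood": 0.2,
             "grass": 0.35, "sand": 0.55, "stone": 0.8}

def roughness_cost_map(label_map, obstacle_threshold=0.7):
    """Turn a pixel-wise terrain label map into traversal costs: cost
    grows with roughness, and pixels rougher than the threshold become
    None (impassable) so a planner can route around them."""
    cost = []
    for row in label_map:
        cost.append([None if ROUGHNESS[l] > obstacle_threshold
                     else 1.0 + 10.0 * ROUGHNESS[l] for l in row])
    return cost
```

A legged platform like CENTAURO could additionally map intermediate roughness bands to footprint reconfigurations rather than outright avoidance.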
Field Testing of a Stochastic Planner for ASV Navigation Using Satellite Images
We introduce a multi-sensor navigation system for autonomous surface vessels
(ASV) intended for water-quality monitoring in freshwater lakes. Our mission
planner uses satellite imagery as a prior map, formulating offline a
mission-level policy for global navigation of the ASV and enabling autonomous
online execution via local perception and local planning modules. A significant
challenge is posed by the inconsistencies in traversability estimation between
satellite images and real lakes, due to environmental effects such as wind,
aquatic vegetation, shallow waters, and fluctuating water levels. Hence, we
specifically modelled these traversability uncertainties as stochastic edges in
a graph and optimized for a mission-level policy that minimizes the expected
total travel distance. To execute the policy, we propose a modern local planner
architecture that processes sensor inputs and plans paths to execute the
high-level policy under uncertain traversability conditions. Our system was
tested on three km-scale missions on a Northern Ontario lake, demonstrating
that our GPS-, vision-, and sonar-enabled ASV system can effectively execute
the mission-level policy and disambiguate the traversability of stochastic
edges. Finally, we provide insights gained from practical field experience and
offer several future directions to enhance the overall reliability of ASV
navigation systems.
Comment: 33 pages, 20 figures. Project website: https://pcctp.github.io. arXiv admin note: text overlap with arXiv:2209.1186
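The stochastic-edge idea can be reduced to a two-option toy decision (the paper optimises a full mission-level policy over a graph; all distances and the probability below are invented): an edge whose traversability is uncertain is worth attempting only if the expected distance of trying it, including the backtrack-and-detour case, beats committing to the detour outright.

```python
def expected_cost_attempt(d_to_edge, d_through, d_back_detour, p_open):
    """Expected distance if the ASV commits to checking a stochastic edge:
    sail to the edge, pass with probability p_open, otherwise backtrack
    and take the detour (d_back_detour covers both)."""
    return d_to_edge + p_open * d_through + (1 - p_open) * d_back_detour

def best_policy(d_to_edge, d_through, d_back_detour, d_detour, p_open):
    """Pick 'attempt' or 'detour' by minimum expected total distance,
    a toy version of minimising expected travel distance over the graph."""
    attempt = expected_cost_attempt(d_to_edge, d_through, d_back_detour, p_open)
    return ("attempt", attempt) if attempt < d_detour else ("detour", d_detour)
```

With a 2 km approach, 1 km passage, 7 km backtrack-plus-detour, a 6 km direct detour, and an 80% chance the passage is open, attempting the edge is cheaper in expectation (4.2 km vs 6 km), which is exactly the disambiguation behaviour the field tests exercised.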
Sampling-Based Exploration Strategies for Mobile Robot Autonomy
A novel sampling-based exploration strategy is introduced for Unmanned Ground Vehicles (UGVs) to efficiently map large, GPS-denied underground environments. It performs at a level similar to state-of-the-art approaches while, unlike them, not being designed for a specific robot or sensor configuration. The introduced exploration strategy, called Random-Sampling-Based Next-Best-View Exploration (RNE), uses a Rapidly-exploring Random Graph (RRG) to find possible viewpoints in an area around the robot. These are evaluated with a computation-efficient Sparse Ray Polling (SRP) in a voxel grid to find the next-best view for the exploration. Each node in the exploration graph built with the RRG is evaluated for the UGV's ability to traverse it, derived from an occupancy grid map. The occupancy grid map is also used to create a topology-based graph in which nodes are placed centrally to reduce the risk of collisions and increase the amount of observable space. Nodes that fall outside the local exploration area are stored in a global graph and are connected with a Traveling Salesman Problem solver so they can be explored later.
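The sparse-ray-polling score can be sketched in a 2-D occupancy grid (the actual system works in 3-D voxel grids): from each candidate viewpoint, cast a small number of evenly spaced rays and count the distinct unknown cells seen before the first occupied cell; the best-scoring sampled node becomes the next-best view. Cell encodings and ray counts below are assumptions for illustration.

```python
import math

def view_gain(grid, pos, n_rays=16, max_range=10):
    """Sparse-ray-polling sketch: score a candidate viewpoint by the
    number of distinct unknown cells its rays reach.
    Grid encoding (assumed): 0 = free, 1 = occupied, -1 = unknown."""
    rows, cols = len(grid), len(grid[0])
    seen = set()
    for k in range(n_rays):
        a = 2 * math.pi * k / n_rays
        for step in range(1, max_range + 1):
            r = int(round(pos[0] + step * math.sin(a)))
            c = int(round(pos[1] + step * math.cos(a)))
            if not (0 <= r < rows and 0 <= c < cols) or grid[r][c] == 1:
                break          # ray blocked by an obstacle or the map edge
            if grid[r][c] == -1:
                seen.add((r, c))
    return len(seen)
```

A fully explored (all-free) grid scores zero, so the exploration naturally terminates when no sampled viewpoint offers any gain.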
Behavior-Based Robot Navigation on Challenging Terrain: A Fuzzy Logic Approach
©2002 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works. DOI: 10.1109/TRA.2002.1019461
This paper presents a new strategy for behavior-based navigation of field mobile robots on challenging terrain, using a fuzzy-logic approach and a novel measure of terrain traversability. A key feature of the proposed approach is real-time assessment of terrain characteristics and incorporation of this information into the robot navigation strategy. Three terrain characteristics that strongly affect traversability, namely roughness, slope, and discontinuity, are extracted from video images obtained by on-board cameras. This traversability data is used to infer, in real time, the terrain Fuzzy Rule-Based Traversability Index, which succinctly quantifies the ease of traversal of the regional terrain by the mobile robot. A new traverse-terrain behavior is introduced that uses the regional traversability index to guide the robot to the safest and most traversable terrain region. The regional traverse-terrain behavior is complemented by two other behaviors, local avoid-obstacle and global seek-goal. The recommendations of these three behaviors are integrated through adjustable weighting factors to generate the final motion command for the robot. The weighting factors are adjusted automatically, based on the situational context of the robot. The terrain assessment and robot navigation algorithms are implemented on a Pioneer commercial robot, and field-test studies are conducted. These studies demonstrate that the robot possesses intelligent decision-making capabilities that are brought to bear in negotiating hazardous terrain conditions during robot motion.
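The behavior-integration step above can be sketched as a weighted blend of steering recommendations (the behavior names follow the abstract; the command representation and weight values are assumptions for illustration). In the paper the weights are adapted from the situational context, e.g. raising the avoid-obstacle weight as an obstacle nears.

```python
def fuse_behaviors(recommendations, weights):
    """Blend behaviour steering commands (here, headings in degrees)
    with adjustable weighting factors; the result is the final motion
    command sent to the robot."""
    total = sum(weights.values())
    return sum(weights[b] * cmd for b, cmd in recommendations.items()) / total

# Hypothetical recommendations from the three behaviours
recs = {"seek_goal": 30.0, "avoid_obstacle": -45.0, "traverse_terrain": 10.0}
# Context with a distant obstacle: seek-goal dominates
cmd = fuse_behaviors(recs, {"seek_goal": 1.0,
                            "avoid_obstacle": 0.2,
                            "traverse_terrain": 0.5})
```

Because the fusion is a convex combination, the final command always lies within the span of the individual recommendations, which keeps the blended behavior predictable.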
Learning for Humanoid Multi-Contact Navigation Planning
Humanoids' abilities to navigate uneven terrain make them well-suited for disaster response efforts, but humanoid motion planning in unstructured environments remains a challenging problem. In this dissertation we focus on planning contact sequences for a humanoid robot navigating in large unstructured environments using multi-contact motion, including both foot and palm contacts. In particular, we address the two following questions: (1) How do we efficiently generate a feasible contact sequence? and (2) How do we efficiently generate contact sequences which lead to dynamically-robust motions?
For the first question, we propose a library-based method that retrieves motion plans from a library constructed offline and adapts them with local trajectory optimization to generate the full motion plan from start to goal. This approach outperforms a conventional graph-search contact planner when it is difficult to decide which contact is preferable using a simplified robot model and local environment information. We also propose a learning approach to estimate the difficulty of traversing a given region based on environment features. By integrating the two approaches, we propose a planning framework that uses a graph-search planner to find contact sequences around easy regions; when it is necessary to go through a difficult region, the framework switches to the library-based method around that region to find a feasible contact sequence faster.
For the second question, we consider dynamic motions in contact planning. Most humanoid motion generators do not optimize the dynamic robustness of a contact sequence. By querying a learned model, trained against a centroidal dynamics optimizer, to predict the dynamic feasibility and robustness of each contact transition, the proposed planner efficiently finds contact sequences that lead to dynamically robust motions. We also propose a learning-based footstep planner that takes external disturbances into account. The planner considers not only the poses of the planned contact sequence but also alternative contacts near it that can be used to recover from external disturbances. Neural networks are trained to efficiently predict multi-contact zero-step and one-step capturability, which allows the planner to generate contact sequences robust to external disturbances efficiently.
PhD, Robotics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/162908/1/linyuchi_1.pd
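The "query a learned model inside the search" idea can be sketched as a Dijkstra over a contact-transition graph in which each edge cost is inflated by a robustness predictor (stubbed as a plain function here; the graph, weights, and predictor are all illustrative stand-ins for the dissertation's learned components).

```python
import heapq

def plan_contacts(transitions, start, goal, predict_robustness, w=5.0):
    """Dijkstra over a contact-transition graph. Each edge's base cost is
    inflated by w * (1 - robustness), where robustness in (0, 1] comes
    from a (stubbed) learned predictor, steering the search toward
    dynamically robust contact sequences.
    transitions: {contact: [(next_contact, base_cost), ...]}"""
    pq, best = [(0.0, start, [start])], {start: 0.0}
    while pq:
        cost, node, seq = heapq.heappop(pq)
        if node == goal:
            return seq, cost
        for nxt, base in transitions.get(node, []):
            r = predict_robustness(node, nxt)
            new = cost + base + w * (1.0 - r)   # penalise fragile transitions
            if new < best.get(nxt, float("inf")):
                best[nxt] = new
                heapq.heappush(pq, (new, nxt, seq + [nxt]))
    return None, float("inf")
```

With equal base costs, the planner takes the route whose transitions the predictor rates as more robust, even though a robustness-blind search would consider both routes equivalent.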
Haptic robot-environment interaction for self-supervised learning in ground mobility
Dissertation submitted for the degree of Master of Science in Electrical and Computer Engineering.
This dissertation presents a system for haptic interaction and self-supervised learning mechanisms to ascertain navigation affordances from depth cues. A simple pan-tilt telescopic arm and a structured-light sensor, both fitted to the robot's body frame, provide the required haptic and depth sensory feedback. The system aims to incrementally develop the ability to assess the cost of navigating in natural environments. For this purpose, the robot learns a mapping between the appearance of objects, given sensory data provided by the sensor, and their bendability, as perceived by the pan-tilt telescopic arm. The object descriptor, which represents the object in memory and is used for comparisons with other objects, is rich enough for robust comparison yet simple enough to allow fast computation.
The output of the memory-learning mechanism, allied with the haptic interaction-point evaluation, prioritizes interaction points to increase confidence in the interaction and correctly identify obstacles, reducing the risk of the robot getting stuck or damaged. If the system concludes that the object is traversable, the environment-change detection system allows the robot to overcome it. A set of field trials shows the robot's ability to progressively learn which elements of the environment are traversable.