
    Computing fast search heuristics for physics-based mobile robot motion planning

    Mobile robots are increasingly employed to assist responders in search and rescue missions. Robots must navigate dangerous areas such as collapsed buildings and hazardous sites, which can be inaccessible to humans. Tele-operating the robots can be stressful for the human operators, who are also overloaded with mission tasks and coordination overhead, so it is important to provide the robot with some degree of autonomy, both to lighten the operator's workload and to ensure the robot's safety. Moving robots around requires reasoning, including interpretation of the environment, spatial reasoning, planning of actions (motion), and execution. This is particularly challenging when the environment is unstructured and the terrain is harsh, i.e. not flat and cluttered with obstacles. Approaches that reduce the problem to 2D path planning fall short, and many of those that reason about the problem in 3D do not do so in a complete and exhaustive manner.

    The approach proposed in this thesis is to use rigid-body simulation to obtain a more faithful model of reality, i.e. of the interaction between the robot and the environment. Such a simulation obeys the laws of physics and takes into account the geometry of the environment, the geometry of the robot, and any dynamic constraints that may be in place. Physics-based motion planning by itself is highly intractable, due to the computational load of state propagation combined with the exponential blowup of planning; in addition, technical limitations rule out techniques such as state sampling or state steering, which are known to be effective in simpler domains. The proposed solution is to compute heuristics that bias the search towards the goal, so that it converges quickly to a solution.

    With such a model, the search space is rich: it contains only states that are physically reachable by the robot, and it carries enough information to assess the robot's safety. The overall result is that, with this framework, the robot engineer has a simpler job of encoding the domain knowledge, which now consists only of providing the robot's geometric model plus any constraints.
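The planning scheme the abstract describes — a forward search that relies only on black-box state propagation (no sampling or steering), with a heuristic biasing expansion towards the goal — can be sketched as follows. This is a minimal illustration, not the thesis's planner: the grid propagator, the four discrete controls, and the Euclidean heuristic are stand-ins for a rigid-body simulator and the precomputed heuristics the thesis proposes.

```python
import heapq
import math

# Hypothetical control set; a real planner would propagate motor commands
# through a physics simulator rather than take unit grid steps.
CONTROLS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def propagate(state, control):
    """Black-box state propagation: the only way to generate successors."""
    return (state[0] + control[0], state[1] + control[1])

def heuristic(state, goal):
    """Stand-in goal-distance estimate (Euclidean distance)."""
    return math.hypot(state[0] - goal[0], state[1] - goal[1])

def best_first_search(start, goal):
    """Greedy best-first search: the frontier is ordered purely by the
    heuristic, so expansion is biased towards the goal."""
    frontier = [(heuristic(start, goal), start)]
    came_from = {start: None}
    while frontier:
        _, state = heapq.heappop(frontier)
        if state == goal:
            # Reconstruct the path by walking parent pointers back to start.
            path = []
            while state is not None:
                path.append(state)
                state = came_from[state]
            return path[::-1]
        for control in CONTROLS:
            nxt = propagate(state, control)
            if nxt not in came_from:
                came_from[nxt] = state
                heapq.heappush(frontier, (heuristic(nxt, goal), nxt))
    return None

path = best_first_search((0, 0), (3, 2))
```

Because the frontier is ordered only by the heuristic, the search expands few states off the direct route to the goal; with an informative heuristic this is what lets a physics-based planner converge despite the cost of each propagation call.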

    Learning to visually predict terrain properties for planetary rovers

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2009. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 174-180).

    For future planetary exploration missions, improvements in autonomous rover mobility have the potential to increase scientific data return by providing safe access to geologically interesting sites that lie in rugged terrain, far from landing areas. This thesis presents an algorithmic framework designed to improve rover-based terrain sensing, a critical component of any autonomous mobility system operating in rough terrain. Specifically, this thesis addresses the problem of predicting the mechanical properties of distant terrain. A self-supervised learning framework is proposed that enables a robotic system to learn predictions of mechanical properties of distant terrain, based on measurements of mechanical properties of similar terrain that has been previously traversed.

    The proposed framework relies on three distinct algorithms. A mechanical terrain characterization algorithm is proposed that computes upper and lower bounds on the net traction force available at a patch of terrain, via a constrained optimization framework. Both model-based and sensor-based constraints are employed. A terrain classification method is proposed that exploits features from proprioceptive sensor data, and employs either a supervised support vector machine (SVM) or unsupervised k-means classifier to assign class labels to terrain patches that the rover has traversed. A second terrain classification method is proposed that exploits features from exteroceptive sensor data (e.g. color and texture), and is automatically trained in a self-supervised manner, based on the outputs of the proprioceptive terrain classifier. The algorithm includes a method for distinguishing novel terrain from previously observed terrain. The outputs of these three algorithms are merged to yield a map of the surrounding terrain that is annotated with the expected achievable net traction force. Such a map would be useful for path planning purposes.

    The algorithms proposed in this thesis have been experimentally validated in an outdoor, Mars-analog environment. The proprioceptive terrain classifier demonstrated 92% accuracy in labeling three distinct terrain classes. The exteroceptive terrain classifier that relies on self-supervised training was shown to be approximately as accurate as a similar, human-supervised classifier, with both achieving 94% correct classification rates on identical data sets. The algorithm for detection of novel terrain demonstrated 89% accuracy in detecting novel terrain in this same environment. In laboratory tests, the mechanical terrain characterization algorithm predicted the lower bound of the net available traction force with an average margin of 21% of the wheel load.

    by Christopher A. Brooks. Ph.D.
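The self-supervised training loop described above — visual features of traversed patches are labeled by the proprioceptive classifier, the exteroceptive classifier is trained on those labels, and patches far from all known classes are flagged as novel — can be illustrated with a minimal sketch. Everything here is a simplification for illustration: a nearest-centroid classifier stands in for the thesis's SVM/k-means machinery, the two-dimensional "visual" features and the `novelty_threshold` value are toy assumptions, and the "sand"/"rock" labels are hypothetical.

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train_exteroceptive(visual_features, proprio_labels):
    """Self-supervised training: visual features of traversed patches,
    labeled by the proprioceptive classifier, are summarized per class."""
    classes = {}
    for feat, label in zip(visual_features, proprio_labels):
        classes.setdefault(label, []).append(feat)
    return {label: centroid(feats) for label, feats in classes.items()}

def classify(centroids, feat, novelty_threshold=1.0):
    """Nearest-centroid prediction; patches far from every known class
    are flagged as novel terrain."""
    label, d = min(((l, dist(c, feat)) for l, c in centroids.items()),
                   key=lambda pair: pair[1])
    return "novel" if d > novelty_threshold else label

# Toy visual features (e.g. mean color channels) for traversed patches,
# with class labels produced by the proprioceptive classifier.
feats = [(0.9, 0.8), (0.8, 0.7), (0.2, 0.1), (0.3, 0.2)]
labels = ["sand", "sand", "rock", "rock"]
model = train_exteroceptive(feats, labels)

prediction = classify(model, (0.85, 0.75))  # near the "sand" centroid
novel = classify(model, (5.0, 5.0))         # far from every known class
```

The key property shown is that no human labels the visual data: the proprioceptive sensors supervise the visual classifier, which is what lets the rover extend mechanical-property predictions to distant terrain it can only see.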