
    3-D Vision Techniques for Autonomous Vehicles

    A mobile robot needs an internal representation of its environment in order to accomplish its mission. Building such a representation involves transforming raw sensor data into a meaningful geometric representation. In this paper, we introduce techniques for building terrain representations from range data for an outdoor mobile robot. We introduce three levels of representation that correspond to levels of planning: obstacle maps, terrain patches, and high-resolution elevation maps. Since terrain representations from individual locations are not sufficient for many navigation tasks, we also introduce techniques for combining multiple maps. Maps may be combined either by using features or by using the raw elevation data. Finally, we introduce algorithms for combining 3-D descriptions with descriptions from other sensors, such as color cameras. We examine the need for this type of sensor fusion when semantic information must be extracted from an observed scene, and provide an example application of outdoor scene analysis. Many of the techniques presented in this paper have been tested in the field on three mobile robot systems developed at CMU.
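The obstacle-map and elevation-map levels described in the abstract can be illustrated with a minimal sketch: bin range-sensor returns into a grid keyed by (x, y), keep the highest return per cell, and threshold elevation to get a coarse obstacle map. The function names, grid parameters, and the max-per-cell rule below are illustrative assumptions, not the paper's actual algorithms.

```python
import numpy as np

def elevation_map(points, cell_size=0.25, grid_shape=(40, 40)):
    """Bin 3-D range points (x, y, z) into a max-elevation grid.

    Cells never hit by a point stay NaN, marking unknown terrain.
    """
    rows, cols = grid_shape
    grid = np.full(grid_shape, np.nan)
    ix = (points[:, 0] / cell_size).astype(int)
    iy = (points[:, 1] / cell_size).astype(int)
    inside = (ix >= 0) & (ix < rows) & (iy >= 0) & (iy < cols)
    for i, j, z in zip(ix[inside], iy[inside], points[inside, 2]):
        if np.isnan(grid[i, j]) or z > grid[i, j]:
            grid[i, j] = z  # keep the highest return per cell
    return grid

def obstacle_map(grid, threshold=0.5):
    """Coarse obstacle map: cells whose elevation exceeds a clearance
    threshold; unknown (NaN) cells are treated as free here."""
    return np.nan_to_num(grid, nan=-np.inf) > threshold
```

A real system would also track unknown cells explicitly rather than treating them as free, but the two-level structure (dense elevation grid, binary obstacle map derived from it) matches the hierarchy the abstract describes.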

    First Results in Terrain Mapping for a Roving Planetary Explorer

    To perform planetary exploration without human supervision, a complete autonomous rover must be able to model its environment while exploring its surroundings. We present a new algorithm to construct a geometric terrain representation from a single range image. The form of the representation is an elevation map that includes uncertainty, unknown areas, and local features. By virtue of working in spherical-polar space, the algorithm is independent of the desired map resolution and the orientation of the sensor, unlike other algorithms that work in Cartesian space. We also describe new methods to evaluate regions of the constructed elevation maps to support legged locomotion over rough terrain.
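As a rough illustration of the spherical-polar idea, a single range-image sample (range, azimuth, elevation angle) can be projected into a map cell of any chosen resolution, together with an elevation estimate and a crude uncertainty. The function name, the noise model, and the cell indexing below are assumptions for illustration only, not the paper's algorithm.

```python
import math

def sample_to_map_cell(r, az, el, cell_size, sigma_r=0.02):
    """Map one spherical range sample (metres, radians) to a grid cell,
    an elevation estimate, and a crude 1-sigma uncertainty.

    Because the sample stays in sensor (spherical-polar) coordinates
    until this projection, cell_size can be chosen per map,
    independently of the sensor's angular resolution or orientation.
    """
    x = r * math.cos(el) * math.cos(az)
    y = r * math.cos(el) * math.sin(az)
    z = r * math.sin(el)
    i, j = int(x // cell_size), int(y // cell_size)
    # project the range noise onto the vertical axis (crude model)
    sigma_z = sigma_r * abs(math.sin(el)) + 1e-3
    return (i, j), z, sigma_z
```

Cells that never receive a sample remain unknown, which corresponds to the "unknown areas" the representation is said to include.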

    Terrain Mapping for a Roving Planetary Explorer

    The main task of perception for autonomous vehicles is to build a representation of the observed environment in order to carry out a mission. In particular, terrain modeling, that is, modeling the geometry of the environment observed by the vehicle's sensors, is crucial for autonomous underwater exploration. The purpose of this work is to analyze the components of the terrain modeling task, to investigate the algorithms and representations for this task, and to evaluate them in the context of real applications. Terrain representation is an issue of interest in many areas of mobile robotics, such as land vehicles and planetary explorers. This paper surveys some of the ideas developed in those areas and their relevance to the underwater navigation problem. Terrain modeling is divided into three parts: structuring sensor data, extracting features, and merging and updating terrain models.
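The third part, merging and updating terrain models, is often formulated cell by cell as an inverse-variance (1-D Kalman-style) update. The sketch below assumes each map cell stores an elevation estimate and a variance; this is one common formulation, not necessarily the specific method evaluated in this work.

```python
def fuse_cells(z1, var1, z2, var2):
    """Fuse two elevation estimates of the same map cell by
    inverse-variance weighting: the more certain measurement
    gets proportionally more weight, and the fused variance
    is smaller than either input variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    z = (w1 * z1 + w2 * z2) / (w1 + w2)
    return z, 1.0 / (w1 + w2)  # fused estimate, reduced variance
```

Applied over every overlapping cell of two elevation maps, this yields a merged model whose uncertainty shrinks where the maps agree and whose estimate leans toward the better-observed map elsewhere.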