
    Technologies Enabling Exploration of Skylights, Lava Tubes and Caves

    Robotic exploration of skylights and caves can seek out life, investigate geology and origins, and open the subsurface of other worlds to humankind. However, exploration of these features is a daunting venture. Planetary voids present perilous terrain that requires innovative technologies for access, exploration, and modeling. This research developed technologies for venturing underground and conceived mission architectures for robotic expeditions that explore skylights, lava tubes, and caves. The investigation identified effective mobile robot architectures for exploring sub-planetary features. Results provide insight into mission architectures, skylight reconnaissance and modeling, robot configuration and operations, and subsurface sensing and modeling, developed as key enablers for robotic missions to planetary caves. These results are compiled into "Spelunker", a prototype mission concept to explore a lunar skylight and cave. The Spelunker mission specifies safe landing on the rim of a skylight, tethered descent of a power and communications hub, and autonomous cave exploration by hybrid driving/hopping robots. A technology roadmap identifies the maturation path for the technologies enabling this and similar missions.

    Position Estimation by Registration to Planetary Terrain

    LIDAR-only and camera-only approaches to global localization in planetary environments have relied heavily on the availability of elevation data. The low-resolution nature of available DEMs limits the accuracy of these methods. The availability of new high-resolution planetary imagery motivates the rover localization method presented here. The method correlates terrain appearance with orthographic imagery. A rover generates a colorized 3D model of the local terrain using a panorama of camera and LIDAR data. This model is orthographically projected onto the ground plane to create a template image. The template is then correlated with available satellite imagery to determine rover location. No prior elevation data is necessary. Experiments in simulation demonstrate 2 m accuracy. The method is robust to 30° differences in lighting angle between satellite and rover imagery.
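    The core step of this method, correlating a rover-derived orthographic template against satellite orthoimagery, can be illustrated with a short sketch. The Python code below is an assumption-laden illustration rather than the authors' implementation: the orthoproject helper, the grid resolution, the template size, and the choice of normalized cross-correlation (cv2.TM_CCOEFF_NORMED) are all hypothetical.

```python
# Hypothetical sketch of appearance-based localization by template
# correlation. Assumes a colorized point cloud from the rover's
# camera/LIDAR panorama and a georeferenced satellite orthoimage.

import cv2
import numpy as np

def orthoproject(points_xyz, intensities, resolution_m=0.25, size_px=256):
    """Project a colorized 3D point cloud onto the ground plane (x-y),
    averaging point intensities per cell to form a top-down template."""
    template = np.zeros((size_px, size_px), dtype=np.float32)
    counts = np.zeros_like(template)
    center = size_px // 2  # place the rover at the image center
    cols = (points_xyz[:, 0] / resolution_m + center).astype(int)
    rows = (points_xyz[:, 1] / resolution_m + center).astype(int)
    valid = (rows >= 0) & (rows < size_px) & (cols >= 0) & (cols < size_px)
    np.add.at(template, (rows[valid], cols[valid]), intensities[valid])
    np.add.at(counts, (rows[valid], cols[valid]), 1.0)
    return template / np.maximum(counts, 1.0)

def localize(template, satellite_image):
    """Slide the rover template over the satellite orthoimage and return
    the pixel location of the best match (estimated rover position)."""
    # Normalized cross-correlation tolerates overall brightness differences
    # between rover-derived appearance and satellite imagery.
    response = cv2.matchTemplate(
        satellite_image.astype(np.float32),
        template.astype(np.float32),
        cv2.TM_CCOEFF_NORMED,
    )
    _, _, _, max_loc = cv2.minMaxLoc(response)
    x, y = max_loc  # top-left corner of the best-matching window
    # Offset by half the template size to recover the template center,
    # i.e. the rover's estimated position in satellite pixel coordinates.
    return x + template.shape[1] // 2, y + template.shape[0] // 2
```

    Mapping the returned pixel coordinates back to planetary coordinates would rely on the satellite image's georeferencing metadata, which is outside the scope of this sketch.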
