    An Overview of AUV Algorithms Research and Testbed at the University of Michigan

    This paper provides a general overview of the autonomous underwater vehicle (AUV) research projects being pursued within the Perceptual Robotics Laboratory (PeRL) at the University of Michigan. Founded in 2007, PeRL's research thrust is centered around improving AUV autonomy via algorithmic advancements in sensor-driven perceptual feedback for environmentally-based real-time mapping, navigation, and control. In this paper we discuss our three major research areas: (1) real-time visual simultaneous localization and mapping (SLAM); (2) cooperative multi-vehicle navigation; and (3) perception-driven control. Pursuant to these research objectives, PeRL has acquired and significantly modified two commercial off-the-shelf (COTS) Ocean-Server Technology, Inc. Iver2 AUV platforms to serve as a real-world engineering testbed for algorithm development and validation. Details of the design modifications, and the related research enabled by this integration effort, are discussed herein.

    Structured Light-Based Hazard Detection For Planetary Surface Navigation

    This paper describes a structured light-based sensor for hazard avoidance in planetary environments. The system presented here can also be used in terrestrial applications constrained by limited onboard power and computation and by low-illumination conditions. The sensor is based on a calibrated camera and laser dot projector system. The onboard hazard avoidance system determines the position of the projected dots in the image and, through a triangulation process, detects potential hazards. The paper presents the design parameters for this sensor and describes the image-based solution for hazard avoidance. The system presented here was tested extensively in day and night conditions in Lunar analogue environments. The current system achieves a detection rate of over 97% with 1.7% false alarms across 2,000 images.
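    As a rough illustration of the triangulation step described above (not the authors' implementation), the sketch below back-projects a detected dot through a calibrated camera, intersects its ray with the known projector ray, and flags the resulting 3-D point as a hazard when it deviates too far from a fitted ground plane. The function names, the ground-plane representation, and the 0.15 m step threshold are assumptions for illustration only.

```python
# Minimal sketch of dot triangulation for a calibrated camera + laser dot
# projector rig (illustrative only; names and thresholds are assumptions).
import numpy as np

def pixel_to_ray(u, v, K):
    """Back-project a pixel to a unit ray in the camera frame."""
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return d / np.linalg.norm(d)

def triangulate(cam_ray, proj_origin, proj_ray):
    """Closest point between the camera ray (through the camera origin)
    and a projector ray, expressed in the camera frame."""
    # Solve for s, t minimizing |s*cam_ray - (proj_origin + t*proj_ray)|.
    A = np.stack([cam_ray, -proj_ray], axis=1)            # 3x2 system
    s, t = np.linalg.lstsq(A, proj_origin, rcond=None)[0]
    p_cam = s * cam_ray
    p_proj = proj_origin + t * proj_ray
    return 0.5 * (p_cam + p_proj)                          # midpoint of closest approach

def is_hazard(point, plane_normal, plane_offset, max_step=0.15):
    """Flag a dot whose height above the fitted ground plane exceeds a
    step-size threshold (0.15 m is an arbitrary example value)."""
    height = plane_normal @ point + plane_offset
    return abs(height) > max_step
```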

    Planetary Rover Simulation for Lunar Exploration Missions

    When planning planetary rover missions, it is useful to develop intuition and skills driving in, quite literally, alien environments before incurring the cost of reaching said locales. Simulators make it possible to operate in environments that have the physical characteristics of target locations without the expense and overhead of extensive physical tests. To that end, NASA Ames and Open Robotics collaborated on a Lunar rover driving simulator based on the open source Gazebo simulation platform and leveraging ROS (Robot Operating System) components. The simulator was integrated with research and mission software for rover driving, system monitoring, and science instrument simulation to constitute an end-to-end Lunar mission simulation capability. Although we expect our simulator to be applicable to arbitrary Lunar regions, we designed it around a reference mission of prospecting in polar regions. The harsh lighting and low illumination angles at the Lunar poles combine with the unique reflectance properties of Lunar regolith to present a challenging visual environment for both human and computer perception. Our simulator placed an emphasis on high-fidelity visual simulation in order to produce synthetic imagery suitable for evaluating human rover drivers on navigation tasks, as well as providing test data for computer vision software development.

    In this paper, we describe the software used to construct the simulated Lunar environment and the components of the driving simulation. Our synthetic terrain generation software artificially increases the resolution of Lunar digital elevation maps by fractal synthesis and inserts craters and rocks based on Lunar size-frequency distribution models. We describe the necessary enhancements to import large scale, high resolution terrains into Gazebo, as well as our approach to modeling the visual environment of the Lunar surface. An overview of the mission software system is provided, along with how ROS was used to emulate flight software components that had not been developed yet. Finally, we discuss the effect of using the high-fidelity synthetic Lunar images for visual odometry. We also characterize the wheel slip model, and find some inconsistencies in the produced wheel slip behaviour.
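    The terrain-generation ideas mentioned above lend themselves to a short sketch. The Python fragment below shows, in isolation and under stated assumptions, one midpoint-displacement-style refinement pass over a coarse DEM and crater diameters drawn from a power-law cumulative size-frequency distribution N(>D) ∝ D^(-b) via inverse-transform sampling. The roughness value, power-law slope, and diameter bounds are illustrative placeholders, not the values used by the simulator.

```python
# Hedged sketch of fractal DEM refinement and size-frequency crater sampling.
# Parameter values are illustrative assumptions only.
import numpy as np

def fractal_upsample(dem, roughness=0.6, seed=0):
    """Double the DEM resolution once: nearest-neighbour upsample, lightly
    smooth, then add high-frequency noise (midpoint-displacement flavour)."""
    rng = np.random.default_rng(seed)
    fine = np.kron(dem, np.ones((2, 2)))             # 2x nearest-neighbour upsample
    fine = 0.25 * (fine                              # crude smoothing pass
                   + np.roll(fine, 1, axis=0)
                   + np.roll(fine, 1, axis=1)
                   + np.roll(np.roll(fine, 1, axis=0), 1, axis=1))
    amplitude = roughness * 0.5 * np.std(dem)        # noise shrinks with detail level
    return fine + rng.normal(0.0, amplitude, fine.shape)

def sample_crater_diameters(n, d_min=2.0, d_max=50.0, slope=2.0, seed=0):
    """Draw n crater diameters (m) from N(>D) ~ D**-slope, truncated to
    [d_min, d_max], by inverse-transform sampling."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    a, b = d_min**-slope, d_max**-slope
    return (a - u * (a - b)) ** (-1.0 / slope)
```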

    A Summary of Neural Radiance Fields for Shadow Removal and Relighting of Satellite Imagery

    Multi-view stereo photogrammetric techniques are conventionally utilized to generate Global Digital Elevation Models (GDEM) of planetary and lunar surfaces. However, these methods, relying on conventional feature detectors, are often subject to inaccuracies caused by changes in lighting conditions, including diffuse reflection and harsh shading. This has limited the ability of these methods to accurately reconstruct shadowed regions in orbital imagery, such as highly shaded urban areas and the permanently shadowed regions (PSRs) located on the lunar surface, which are critical targets for NASA’s Artemis program. Neural Radiance Fields (NeRFs) offer a novel solution to these limitations by breaking away from traditional photogrammetric assumptions of rigid, opaque surfaces. NeRFs are capable of reconstructing 3D objects with variably transmissive properties and reflective surfaces. In this summary analysis, we articulate the robustness of NeRFs in generating high-fidelity 3D models of terrain from highly shaded orbital imagery acquired from satellites in low Earth orbit (LEO) and emphasize their applicability to a lunar environment. We showcase emerging NeRF-derived methods that overcome the limitations of traditional photogrammetric methods and provide a promising solution for reconstructing complex scenes in challenging lighting conditions.
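    For readers unfamiliar with NeRFs, the volume-rendering step at their core can be summarized in a few lines. The sketch below assumes a hypothetical radiance_field(points, direction) callable that returns per-sample density and color; training, positional encoding, and the satellite-specific shadow and relighting extensions discussed above are outside its scope.

```python
# Minimal sketch of NeRF volume rendering along one camera ray.
# radiance_field is an assumed callable: (points, direction) -> (sigma, color).
import numpy as np

def render_ray(radiance_field, origin, direction, t_near, t_far, n_samples=64):
    """Alpha-composite colors along the ray:
    C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    with transmittance T_i = prod_{j<i} (1 - alpha_j)."""
    t = np.linspace(t_near, t_far, n_samples)
    delta = np.full(n_samples, (t_far - t_near) / n_samples)  # uniform sample spacing
    points = origin + t[:, None] * direction                  # sample positions on the ray
    sigma, color = radiance_field(points, direction)          # (n,), (n, 3)
    alpha = 1.0 - np.exp(-sigma * delta)                      # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    return (weights[:, None] * color).sum(axis=0)             # rendered RGB for the ray
```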

    A Stereo Vision Framework for 3-D Underwater Mosaicking


    The Mars 2020 Perseverance Rover Mast Camera Zoom (Mastcam-Z) Multispectral, Stereoscopic Imaging Investigation

    Mastcam-Z is a multispectral, stereoscopic imaging investigation on the Mars 2020 mission’s Perseverance rover. Mastcam-Z consists of a pair of focusable, 4:1 zoomable cameras that provide broadband red/green/blue and narrowband 400-1000 nm color imaging with fields of view from 25.6° × 19.2° (26 mm focal length at 283 μrad/pixel) to 6.2° × 4.6° (110 mm focal length at 67.4 μrad/pixel). The cameras can resolve (≥ 5 pixels) ∼0.7 mm features at 2 m and ∼3.3 cm features at 100 m distance. Mastcam-Z shares significant heritage with the Mastcam instruments on the Mars Science Laboratory Curiosity rover. Each Mastcam-Z camera consists of zoom, focus, and filter wheel mechanisms and a 1648 × 1214 pixel charge-coupled device detector and electronics. The two Mastcam-Z cameras are mounted with a 24.4 cm stereo baseline and 2.3° total toe-in on a camera plate ∼2 m above the surface on the rover’s Remote Sensing Mast, which provides azimuth and elevation actuation. A separate digital electronics assembly inside the rover provides power, data processing and storage, and the interface to the rover computer. Primary and secondary Mastcam-Z calibration targets mounted on the rover top deck enable tactical reflectance calibration. Mastcam-Z multispectral, stereo, and panoramic images will be used to provide detailed morphology, topography, and geologic context along the rover’s traverse; constrain mineralogic, photometric, and physical properties of surface materials; monitor and characterize atmospheric and astronomical phenomena; and document the rover’s sample extraction and caching locations. Mastcam-Z images will also provide key engineering information to support sample selection and other rover driving and tool/instrument operations decisions.
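    The quoted resolving power follows directly from the narrow-angle pixel scale and the stated 5-pixel criterion; the short check below (an illustrative calculation, not mission software) reproduces the ∼0.7 mm and ∼3.3 cm figures.

```python
# Quick arithmetic check of the resolving-power figures quoted above,
# using the 110 mm (narrow-angle) pixel scale and the 5-pixel criterion.
ifov = 67.4e-6   # rad/pixel at 110 mm focal length
pixels = 5       # minimum pixels across a resolvable feature

for range_m in (2.0, 100.0):
    feature = pixels * ifov * range_m          # smallest resolvable feature, metres
    print(f"range {range_m:>5.0f} m -> ~{feature * 1000:.1f} mm")
# range     2 m -> ~0.7 mm
# range   100 m -> ~33.7 mm  (i.e. ~3.3 cm), matching the abstract
```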