
    Map building fusing acoustic and visual information using autonomous underwater vehicles

    Author Posting. © The Author(s), 2012. This is the author's version of the work. It is posted here by permission of John Wiley & Sons for personal use, not for redistribution. The definitive version was published in Journal of Field Robotics 30 (2013): 763–783, doi:10.1002/rob.21473.

    We present a system for automatically building 3-D maps of underwater terrain by fusing visual data from a single camera with range data from multibeam sonar. The six-degree-of-freedom location of the camera relative to the navigation frame is derived as part of the mapping process, as are the attitude offsets of the multibeam head and the on-board velocity sensor. The system uses pose graph optimization and the square root information smoothing and mapping framework to simultaneously solve for the robot's trajectory, the map, and the camera location in the robot's frame. Matched visual features are treated within the pose graph as images of 3-D landmarks, while multibeam bathymetry submap matches are used to impose relative pose constraints linking robot poses from distinct tracklines of the dive trajectory. The navigation and mapping system presented works under a variety of deployment scenarios, on robots with diverse sensor suites. Results of using the system to map the structure and appearance of a section of coral reef are presented using data acquired by the Seabed autonomous underwater vehicle.

    The work described herein was funded by the National Science Foundation CenSSIS ERC under grant number EEC-9986821, and by the National Oceanic and Atmospheric Administration under grant number NA090AR4320129.
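    To make the pose-graph formulation concrete, here is a minimal sketch using the GTSAM library, which implements the square root information smoothing and mapping (SAM) machinery the paper builds on. It is a toy example, not the authors' pipeline: the poses, noise values, and the loop-closure measurement standing in for a bathymetry submap match are hypothetical, and 2-D poses are used for brevity where the real system estimates 6-DOF poses.

```python
# Minimal pose-graph sketch (hypothetical values), assuming the GTSAM
# Python bindings, which implement square root SAM.
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()

# Noise models: standard deviations on [x, y, theta].
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.10]))
match_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.3, 0.3, 0.10]))

# Anchor the first pose of the trajectory.
graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0.0, 0.0, 0.0), prior_noise))

# Dead-reckoned odometry between consecutive poses (e.g., from a velocity sensor).
graph.add(gtsam.BetweenFactorPose2(0, 1, gtsam.Pose2(2.0, 0.0, 0.0), odom_noise))
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(2.0, 0.0, 0.0), odom_noise))

# A relative-pose constraint between non-consecutive poses, standing in for
# a multibeam submap match linking distinct tracklines.
graph.add(gtsam.BetweenFactorPose2(0, 2, gtsam.Pose2(4.1, 0.2, 0.0), match_noise))

# Deliberately perturbed initial guesses for the optimizer.
initial = gtsam.Values()
initial.insert(0, gtsam.Pose2(0.0, 0.0, 0.0))
initial.insert(1, gtsam.Pose2(2.3, 0.1, 0.0))
initial.insert(2, gtsam.Pose2(4.6, 0.2, 0.0))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose2(2))  # smoothed estimate of the final pose
```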

    Autonomous underwater vehicle navigation and mapping in dynamic, unstructured environments

    Thesis (Ph.D.)--Joint Program in Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science; and the Woods Hole Oceanographic Institution), 2012. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (p. 91-98).

    This thesis presents a system for automatically building 3-D optical and bathymetric maps of underwater terrain using autonomous robots. The maps improve the state of the art in resolution by an order of magnitude, while fusing bathymetric information from acoustic ranging sensors with visual texture captured by cameras. As part of the mapping process, several internal relationships between sensors are automatically calibrated, including the roll and pitch offsets of the velocity sensor, the attitude offset of the multibeam acoustic ranging sensor, and the full six-degree-of-freedom offset of the camera. The system uses pose graph optimization to simultaneously solve for the robot's trajectory, the map, and the camera location in the robot's frame, and handles the case where the terrain being mapped is drifting and rotating by estimating the orientation of the terrain at each time step in the robot's trajectory. Relative pose constraints are introduced into the pose graph based on multibeam submap matching using depth image correlation, while landmark-based constraints are used in the graph where visual features are available. The two types of constraints work in concert in a single optimization, fusing information from both types of mapping sensors and yielding a texture-mapped 3-D mesh for visualization. The optimization framework also allows for the straightforward introduction of constraints provided by the particular suite of sensors available, so that the navigation and mapping system works under a variety of deployment scenarios, including the potential incorporation of external localization systems such as long-baseline acoustic networks. Results of using the system to map the draft of rotating Antarctic ice floes are presented, as are results fusing optical and range data of a coral reef.

    by Clayton Gregory Kunz. Ph.D.
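    As a rough illustration of the depth-image correlation idea used for submap matching, the sketch below recovers the horizontal offset between two overlapping bathymetry grids by FFT-based cross-correlation. It is a translation-only toy on synthetic data (all names and values hypothetical); the thesis matches full submaps and feeds the result into the pose graph as a relative-pose constraint.

```python
# Toy depth-image correlation for submap alignment (translation only),
# a hypothetical stand-in for the thesis's multibeam submap matching.
import numpy as np

def correlate_depth_maps(a: np.ndarray, b: np.ndarray) -> tuple[int, int]:
    """Estimate the circular shift (dr, dc) such that b ~ np.roll(a, (dr, dc))."""
    a0 = a - a.mean()
    b0 = b - b.mean()
    # Cross-correlation theorem: the correlation surface peaks at the shift
    # where depth image b best matches depth image a.
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a0)) * np.fft.fft2(b0)).real
    dr, dc = np.unravel_index(np.argmax(corr), corr.shape)
    # Map circular shifts into the range [-n/2, n/2).
    if dr > a.shape[0] // 2:
        dr -= a.shape[0]
    if dc > a.shape[1] // 2:
        dc -= a.shape[1]
    return int(dr), int(dc)

# Synthetic check: b is a shifted copy of a smooth-ish seafloor patch.
rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64)).cumsum(axis=0).cumsum(axis=1)
b = np.roll(a, shift=(5, -3), axis=(0, 1))
print(correlate_depth_maps(a, b))  # expect (5, -3)
```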

    The Human-Robot Interaction Operating System

    In order for humans and robots to work effectively together, they need to be able to converse about abilities, goals, and achievements. Thus, we are developing an interaction infrastructure called the "Human-Robot Interaction Operating System" (HRI/OS). The HRI/OS provides a structured software framework for building human-robot teams, supports a variety of user interfaces, enables humans and robots to engage in task-oriented dialogue, and facilitates integration of robots through an extensible API.

    Photo-realistic Terrain Modeling and Visualization for Mars Exploration Rover Science Operations

    Modern NASA planetary exploration missions employ complex systems of hardware and software, managed by large teams of engineers and scientists, in order to study remote environments. The most complex and successful of these recent projects is the Mars Exploration Rover (MER) mission. The Computational Sciences Division at NASA Ames Research Center delivered a 3D visualization program, Viz, to the MER mission that provides an immersive, interactive environment for science analysis of the remote planetary surface. In addition, Ames provided the Athena Science Team with high-quality terrain reconstructions generated with the Ames Stereo-pipeline. The on-site support team for these software systems responded to unanticipated opportunities to generate 3D terrain models during the primary MER mission. This paper describes Viz, the Stereo-pipeline, and the experiences of the on-site team supporting the scientists at JPL during the primary MER mission.

    2D/3D Visual Tracker for Rover Mast

    A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames, a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points whose 3D motions are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field of view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of the kinematics of the mast. If the motion between consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window, and the best correlation results indicate the appropriate match. The program could serve as a core for building application programs for systems that require coordination of vision and robotic motion.
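    The pan/tilt pointing step reduces to simple trigonometry once the target's 3D position is expressed in the mast frame. The sketch below assumes a hypothetical frame convention (x forward, y left, z up, origin at the mast's pan/tilt pivot) and omits the mast kinematic model and rover-pose compensation that the actual program applies.

```python
# Hypothetical pan/tilt pointing sketch: frame is x forward, y left, z up,
# with the origin at the mast's pan/tilt pivot. The real system also folds
# in the mast kinematics and the estimated change in rover pose.
import math

def point_mast_at(target_xyz: tuple[float, float, float]) -> tuple[float, float]:
    """Return (pan, tilt) in radians that center target_xyz in the camera view."""
    x, y, z = target_xyz
    pan = math.atan2(y, x)                   # rotation about z toward the target
    tilt = math.atan2(z, math.hypot(x, y))   # elevation toward the target
    return pan, tilt

# Target 4 m ahead, 1 m to the left, 0.5 m below the pivot.
pan, tilt = point_mast_at((4.0, 1.0, -0.5))
print(math.degrees(pan), math.degrees(tilt))
```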

    Could thermal fluctuations seed cosmic structure?

    We examine the possibility that thermal, rather than quantum, fluctuations are responsible for seeding the structure of our universe. We find that while the thermalization condition leads to nearly Gaussian statistics, a Harrison-Zeldovich spectrum for the primordial fluctuations can only be achieved in very special circumstances. These depend on whether the universe gets hotter or colder in time while the modes are leaving the horizon. In the latter case we find a no-go theorem, which can only be avoided if the fundamental degrees of freedom are not particle-like, such as in string gases near the Hagedorn phase transition. The former case is less forbidding, and we suggest two potentially successful "warming universe" scenarios. One makes use of the Phoenix universe, the other of "phantom" matter.

    Comment: minor corrections made, references added, matches the version accepted to PR
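    For reference, using standard cosmology notation rather than anything specific to this paper, a Harrison-Zeldovich spectrum is the scale-invariant case of the primordial power spectrum:

```latex
% Primordial curvature power spectrum with spectral index n_s and pivot
% scale k_*; the Harrison-Zeldovich (scale-invariant) case is n_s = 1.
\mathcal{P}_\zeta(k) = A_s \left(\frac{k}{k_\ast}\right)^{n_s - 1},
\qquad n_s = 1 \quad \text{(Harrison-Zeldovich)}
```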

    A Preliminary Study of Peer-to-Peer Human-Robot Interaction

    The Peer-to-Peer Human-Robot Interaction (P2P-HRI) project is developing techniques to improve task coordination and collaboration between human and robot partners. Our work is motivated by the need to develop effective human-robot teams for space mission operations. A central element of our approach is creating dialogue and interaction tools that enable humans and robots to flexibly support one another. In order to understand how this approach can influence task performance, we recently conducted a series of tests simulating a lunar construction task with a human-robot team. In this paper, we describe the tests performed, discuss our initial results, and analyze the effect of intervention on task performance.

    Human-Robot Site Survey and Sampling for Space Exploration

    NASA is planning to send humans and robots back to the Moon before 2020. In order for extended missions to be productive, high-quality maps of lunar terrain and resources are required. Although orbital images can provide much information, many features (local topography, resources, etc.) will have to be characterized directly on the surface. To address this need, we are developing a system to perform site survey and sampling. The system includes multiple robots and humans operating in a variety of team configurations, coordinated via peer-to-peer human-robot interaction. In this paper, we present our system design and describe planned field tests.

    Stellar Nucleosynthesis in the Hyades Open Cluster

    We report a comprehensive light-element (Li, C, N, O, Na, Mg, and Al) abundance analysis of three solar-type main sequence (MS) dwarfs and three red giant branch (RGB) clump stars in the Hyades open cluster using high-resolution and high signal-to-noise spectroscopy. For each group (MS or RGB), the CNO abundances are found to be in excellent star-to-star agreement. Our results confirm that the giants have undergone the first dredge-up and that material processed by the CN cycle has been mixed to the surface layers. The observed abundances are compared to predictions of a standard stellar model based on the Clemson-American University of Beirut (CAUB) stellar evolution code. The model reproduces the observed evolution of the N and O abundances, as well as the previously derived 12C/13C ratio, but it fails, by a factor of 1.5, to predict the observed level of 12C depletion. Li abundances are derived to determine whether non-canonical extra mixing has occurred in the Hyades giants. The Li abundance of the giant gamma Tau is in good accord with the predicted level of surface Li dilution, but a ~0.35 dex spread in the giant Li abundances is found and cannot be explained by the stellar model. Possible sources of the spread are discussed; however, it is apparent that the differential mechanism responsible for the Li dispersion must be unrelated to the uniformly low 12C abundances of the giants. Na, Mg, and Al abundances are derived as an additional test of our stellar model. All three elements are found to be overabundant by 0.2-0.5 dex in the giants relative to the dwarfs. Such large enhancements of these elements are not predicted by the stellar model, and non-LTE effects significantly larger (and, in some cases, of opposite sign) than those implied by extant literature calculations are the most likely cause.

    Comment: 40 pages, 6 figures, 6 tables; accepted by Ap
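    For readers outside stellar spectroscopy, the abundance notation used in the abstract above follows the standard definitions (not specific to this paper):

```latex
% Absolute logarithmic abundance of element X on the scale A(H) = 12, and
% the bracket notation for an abundance relative to the Sun. A "dex" is
% one unit in these base-10 logarithmic quantities, i.e., a factor of 10.
A(\mathrm{X}) = \log_{10}\!\left(\frac{N_\mathrm{X}}{N_\mathrm{H}}\right) + 12,
\qquad
[\mathrm{X}/\mathrm{H}] = \log_{10}\!\left(\frac{N_\mathrm{X}}{N_\mathrm{H}}\right)_{\!\star}
- \log_{10}\!\left(\frac{N_\mathrm{X}}{N_\mathrm{H}}\right)_{\!\odot}
```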

    Deep Sea Underwater Robotic Exploration in the Ice-Covered Arctic Ocean with AUVs

    The Arctic seafloor remains one of the last unexplored areas on Earth. Exploration of this unique environment using standard remotely operated oceanographic tools has been obstructed by the dense Arctic ice cover. In the summer of 2007, the Arctic Gakkel Vents Expedition (AGAVE) was conducted with the express intention of understanding aspects of the marine biology, chemistry, and geology associated with hydrothermal venting on the section of the mid-ocean ridge known as the Gakkel Ridge. Unlike previous research expeditions to the Arctic, the focus was on high-resolution imaging and sampling of the deep seafloor. To accomplish our goals we designed two new Autonomous Underwater Vehicles (AUVs), named Jaguar and Puma, which performed a total of nine dives at depths of up to 4062 m. These AUVs were used in combination with a towed vehicle and a conventional CTD (conductivity, temperature, and depth) program to characterize the seafloor. This paper describes the design decisions and operational changes required to ensure useful service and to facilitate deployment, operation, and recovery in the unique Arctic environment.

    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/86060/1/ckunz-17.pd