33,376 research outputs found

    Deep-sea image processing

    High-resolution seafloor mapping often requires optical methods of sensing to confirm interpretations made from sonar data. Optical digital imagery of seafloor sites can now provide very high resolution and also provides additional cues, such as color information for sediments, biota and diverse rock types. During cruise AT11-7 of the Woods Hole Oceanographic Institution (WHOI) vessel R/V Atlantis (February 2004, East Pacific Rise), visual imagery was acquired from three sources: (1) a digital still down-looking camera mounted on the submersible Alvin, (2) observer-operated 1- and 3-chip video cameras with tilt and pan capabilities mounted on the front of Alvin, and (3) a digital still camera on the WHOI TowCam (Fornari, 2003). Imagery from the first source collected on a previous cruise (AT7-13) to the Galapagos Rift at 86°W was successfully processed and mosaicked post-cruise, resulting in a single image covering an area of about 2000 sq. m at a resolution of 3 mm per pixel (Rzhanov et al., 2003). This paper addresses the issues of optimal acquisition of visual imagery in deep-sea conditions and the requirements for on-board processing. Shipboard processing of digital imagery allows for reviewing collected imagery immediately after the dive, evaluating its importance, optimizing acquisition parameters, and augmenting acquisition of data over specific sites on subsequent dives. Images from the DeepSea Power and Light (DSPL) digital camera offer the best resolution (3.3 megapixels) and are taken at an interval of 10 seconds (determined by the strobe's recharge rate). This makes the images suitable for mosaicking only when Alvin moves slowly (≪1/4 kt), which is not always possible for time-critical missions. The video cameras provided a source of imagery more suitable for mosaicking, despite their inferior resolution. We discuss the required pre-processing and image-enhancement techniques and their influence on the interpretation of mosaic content. An algorithm for determining camera tilt parameters from the acquired imagery is proposed and its robustness conditions are discussed.
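
    As a rough illustration of the kind of pairwise mosaicking pipeline described above (a hedged sketch, not the authors' implementation), the following Python fragment registers consecutive frames by feature matching and chains the resulting homographies into a common mosaic frame. OpenCV is an assumed dependency, and all function names are illustrative.

        import cv2
        import numpy as np

        def pairwise_homography(img_a, img_b):
            """Estimate the homography mapping img_b into img_a's frame."""
            detector = cv2.SIFT_create()
            kp_a, des_a = detector.detectAndCompute(img_a, None)
            kp_b, des_b = detector.detectAndCompute(img_b, None)
            matcher = cv2.BFMatcher(cv2.NORM_L2)
            matches = matcher.knnMatch(des_b, des_a, k=2)
            # Lowe's ratio test keeps only distinctive matches.
            good = [m for m, n in matches if m.distance < 0.7 * n.distance]
            src = np.float32([kp_b[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
            dst = np.float32([kp_a[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
            return H

        def chain_homographies(frames):
            """Accumulate frame-to-first-frame transforms for mosaicking."""
            transforms = [np.eye(3)]
            for prev, cur in zip(frames, frames[1:]):
                H = pairwise_homography(prev, cur)  # maps cur -> prev
                transforms.append(transforms[-1] @ H)
            return transforms

    A camera-tilt estimate of the kind the paper proposes could in principle be recovered by decomposing such homographies, but that step is omitted here.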

    A new 3-D modelling method to extract subtransect dimensions from underwater videos

    Underwater video transects have become a common tool for quantitative analysis of the seafloor. However, a major difficulty remains in the accurate determination of the area surveyed, as underwater navigation can be unreliable and image scaling does not always compensate for distortions due to perspective and topography. Depending on the camera set-up and available instruments, different methods of surface measurement are applied, which makes it difficult to compare data obtained by different vehicles. 3-D modelling of the seafloor based on 2-D video data and a reference scale can be used to compute subtransect dimensions. Focussing on the length of the subtransect, the data obtained from 3-D models created with the software PhotoModeler Scanner are compared with those determined from underwater acoustic positioning (ultra-short baseline, USBL) and bottom tracking (Doppler velocity log, DVL). 3-D model building and scaling were successfully conducted on all three tested set-ups, and the distortion of the reference scales due to substrate roughness was identified as the main source of imprecision. Acoustic positioning was generally inaccurate and bottom tracking unreliable on rough terrain. Subtransect lengths assessed with PhotoModeler were on average 20% longer than those derived from acoustic positioning, owing to the higher spatial resolution and the inclusion of slope. On a high-relief wall, bottom tracking and 3-D modelling yielded similar results. At present, 3-D modelling is the most powerful, albeit the most time-consuming, method for accurate determination of video subtransect dimensions.
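
    To illustrate why a 3-D reconstruction yields longer subtransect lengths than planar positioning, here is a minimal Python sketch (not tied to PhotoModeler's API; the point coordinates are hypothetical) comparing the along-surface length of a reconstructed track with its horizontal projection.

        import numpy as np

        def path_length(points):
            """Sum of Euclidean segment lengths along an ordered point track."""
            diffs = np.diff(points, axis=0)
            return np.sum(np.linalg.norm(diffs, axis=1))

        # Hypothetical camera track over sloped terrain (x, y, z in metres).
        track_3d = np.array([[0.0, 0.0, 0.0],
                             [1.0, 0.0, 0.4],
                             [2.0, 0.1, 0.9],
                             [3.0, 0.1, 1.1]])

        len_3d = path_length(track_3d)          # includes slope
        len_2d = path_length(track_3d[:, :2])   # horizontal projection only
        print(f"3-D length: {len_3d:.2f} m, planar length: {len_2d:.2f} m")

    Because the planar projection ignores relief, the 2-D figure is always the shorter one, and the gap widens with slope, which is the effect credited above for the longer PhotoModeler-derived lengths.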

    Operator vision aids for space teleoperation assembly and servicing

    This paper investigates concepts for visual operator aids required for effective telerobotic control. Operator visual aids, as defined here, denote any operational enhancement that improves man-machine control through the visual system. These concepts were derived as part of a study of vision issues for space teleoperation. Extensive literature on teleoperation, robotics, and human factors was surveyed to specify appropriate requirements. This paper presents these visual aids in three general categories: camera/lighting functions, display enhancements, and operator cues. In the area of camera/lighting functions, concepts are discussed for: (1) automatic end effector or task tracking; (2) novel camera designs; (3) computer-generated virtual camera views; (4) computer-assisted camera/lighting placement; and (5) voice control. In the area of display enhancements, concepts are presented for: (1) zone displays, such as imminent collision or indexing limits; (2) predictive displays for temporal and spatial location; (3) stimulus-response reconciliation displays; (4) graphical display of depth cues such as 2-D symbolic depth, virtual views, and perspective depth; and (5) view enhancements through image processing and symbolic representations. Finally, operator visual cues (e.g., targets) that help identify size, distance, shape, orientation and location are discussed.
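
    Among the display enhancements listed, the predictive display is the most algorithmic; the Python fragment below sketches the basic idea under a constant-velocity assumption. The names, the state model and the delay value are illustrative assumptions, not taken from the paper.

        import numpy as np

        def predict_pose(position, velocity, delay_s):
            """Constant-velocity extrapolation of a delayed end-effector pose.

            position, velocity: 3-vectors from the last telemetry frame;
            delay_s: estimated round-trip communication delay in seconds.
            """
            return position + velocity * delay_s

        # Hypothetical telemetry: last reported state, 0.8 s round-trip delay.
        last_pos = np.array([0.52, -0.10, 0.34])   # metres
        last_vel = np.array([0.02, 0.00, -0.01])   # metres/second
        overlay_pos = predict_pose(last_pos, last_vel, 0.8)
        # A display aid would draw overlay_pos as a "ghost" cursor on top of
        # the delayed camera image, so the operator sees the predicted state.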

    Semi-analytical guidance algorithm for autonomous close approach to non-cooperative low-gravity targets

    An adaptive guidance algorithm for close approach to and precision landing on uncooperative low-gravity objects (e.g. asteroids) is proposed. The trajectory, updated by solving a minimum-fuel optimal control problem, is expressed in a polynomial form of minimum order that satisfies a set of boundary constraints derived from the initial and final states and attitude requirements. Optimal guidance computation, achieved with a simple two-stage compass search, reduces to the determination of three parameters (time-of-flight, initial thrust magnitude and initial thrust angle), subject to additional constraints imposed by the actual spacecraft architecture. A near-Earth asteroid (NEA) landing mission case is analyzed.
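
    The compass search named above is a standard derivative-free pattern search; the Python sketch below shows the generic (single-stage) method over the three guidance parameters. The cost function and starting point are placeholders, not the paper's actual fuel model.

        import numpy as np

        def compass_search(cost, x0, step=1.0, shrink=0.5, tol=1e-4, max_iter=1000):
            """Derivative-free compass (pattern) search.

            Polls +/- steps along each coordinate of x; shrinks the step
            when no polled point improves the cost.
            """
            x, fx = np.asarray(x0, dtype=float), cost(x0)
            for _ in range(max_iter):
                improved = False
                for i in range(len(x)):
                    for sign in (+1.0, -1.0):
                        trial = x.copy()
                        trial[i] += sign * step
                        f_trial = cost(trial)
                        if f_trial < fx:
                            x, fx, improved = trial, f_trial, True
                if not improved:
                    step *= shrink
                    if step < tol:
                        break
            return x, fx

        # Placeholder cost over (time-of-flight, thrust magnitude, thrust angle);
        # a real implementation would evaluate propellant use for the polynomial
        # trajectory implied by these three parameters.
        def fuel_cost(p):
            tof, thrust, angle = p
            return (tof - 60.0)**2 + (thrust - 2.0)**2 + (angle - 0.1)**2

        best, cost_val = compass_search(fuel_cost, x0=[80.0, 1.0, 0.0])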

    Microarcsecond astrometry with Gaia: the solar system, the Galaxy and beyond

    Gaia is an all-sky, high-precision astrometric and photometric satellite of the European Space Agency (ESA) due for launch in 2010-2011. Its primary mission is to study the composition, formation and evolution of our Galaxy. Gaia will measure parallaxes and proper motions of every object in the sky brighter than V=20, amounting to a billion stars, galaxies, quasars and solar system objects. It will achieve an astrometric accuracy of 10 μas at V=15, corresponding to a distance accuracy of 1% at 1 kpc. With Gaia, tens of millions of stars will have their distances measured to a few percent or better. This is an improvement over Hipparcos by several orders of magnitude in the number of objects, accuracy and limiting magnitude. Gaia will also measure radial velocities for sources brighter than V~17. To characterize the objects, each object is observed in 15 medium and broad photometric bands with an onboard CCD camera. With these capabilities, Gaia will make significant advances in a wide range of astrophysical topics. These include a detailed kinematical map of stellar populations, stellar structure and evolution, the discovery and characterization of thousands of exoplanetary systems, and General Relativity on large scales. I give an overview of the mission, its operating principles and its expected scientific contributions. For the latter I provide a quick look at five areas of increasing scale in the universe: the solar system, exosolar planets, stellar clusters and associations, Galactic structure and extragalactic astronomy.
    Comment: (Errors corrected) Invited paper at IAU Colloquium 196, "Transit of Venus: New Views of the Solar System and Galaxy". 14 pages, 6 figures. A version with higher-resolution figures is available from http://www.mpia-hd.mpg.de/homes/calj/gaia_venus2004.htm
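
    The quoted 1% distance accuracy at 1 kpc follows directly from the parallax relation; as a worked check (standard astrometry, not material added by the paper), with d in parsecs for a parallax ϖ in arcseconds (equivalently, d in kpc for ϖ in mas):

        d = \frac{1}{\varpi}, \qquad
        \frac{\sigma_d}{d} \approx \frac{\sigma_\varpi}{\varpi}
        \quad (\sigma_\varpi \ll \varpi)

        \text{At } d = 1\,\mathrm{kpc}:\quad
        \varpi = 1\,\mathrm{mas} = 1000\,\mu\mathrm{as}, \qquad
        \frac{\sigma_d}{d} = \frac{10\,\mu\mathrm{as}}{1000\,\mu\mathrm{as}} = 1\%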