1,493 research outputs found

    Navigational Drift Analysis for Visual Odometry

    Visual odometry estimates a robot's ego-motion from cameras mounted on the robot. Owing to the advantages of the camera as a sensor, visual odometry has been widely adopted in the robotics and navigation fields. Drift (or error accumulation) from relative-motion concatenation is an intrinsic problem of visual odometry in long-range navigation, since visual odometry relies on relative measurements. A general error analysis using the "mean" and "covariance" of the positional error in each axis cannot fully describe the behavior of drift. Moreover, no theoretical drift analysis is available for performance evaluation and algorithm comparison. This paper establishes the drift distribution as a function of the covariance matrix obtained from a positional-error propagation model. To validate the drift model, an experiment with a specific setting is conducted
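    The drift model above amounts to propagating a positional-error covariance through the concatenation of noisy relative-motion estimates. As a minimal, hedged illustration (not the paper's derivation), the sketch below propagates a planar (x, y, heading) pose covariance to first order through a chain of relative motions; the motion model, noise values and function names are assumptions made for this example only.

    import numpy as np

    def propagate_drift(pose, cov, rel_motion, rel_cov):
        """First-order covariance propagation when concatenating one noisy
        relative-motion estimate onto a planar pose (x, y, theta).

        pose       -- current global pose [x, y, theta]
        cov        -- 3x3 covariance of the current pose estimate
        rel_motion -- relative motion [dx, dy, dtheta] in the body frame
        rel_cov    -- 3x3 covariance of the relative-motion estimate
        """
        x, y, th = pose
        dx, dy, dth = rel_motion
        c, s = np.cos(th), np.sin(th)

        # Compose the relative motion onto the global pose.
        new_pose = np.array([x + c * dx - s * dy,
                             y + s * dx + c * dy,
                             th + dth])

        # Jacobians of the composition w.r.t. the old pose and the relative motion.
        J_pose = np.array([[1.0, 0.0, -s * dx - c * dy],
                           [0.0, 1.0,  c * dx - s * dy],
                           [0.0, 0.0,  1.0]])
        J_rel = np.array([[c, -s, 0.0],
                          [s,  c, 0.0],
                          [0.0, 0.0, 1.0]])
        new_cov = J_pose @ cov @ J_pose.T + J_rel @ rel_cov @ J_rel.T
        return new_pose, new_cov

    if __name__ == "__main__":
        pose, cov = np.zeros(3), np.zeros((3, 3))
        step_cov = np.diag([1e-4, 1e-4, 1e-6])   # assumed per-step noise
        for _ in range(1000):                    # 1000 unit steps straight ahead
            pose, cov = propagate_drift(pose, cov, [1.0, 0.0, 0.0], step_cov)
        print("positional std-dev after 1000 steps:", np.sqrt(np.diag(cov))[:2])

    Running the loop shows the positional uncertainty growing with distance travelled, which is the drift behavior the paper characterises with a full distribution rather than per-axis statistics alone.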

    Guidance for benthic habitat mapping: an aerial photographic approach

    This document, Guidance for Benthic Habitat Mapping: An Aerial Photographic Approach, describes proven technology that can be applied in an operational manner by state-level scientists and resource managers. This information is based on the experience gained by NOAA Coastal Services Center staff and state-level cooperators in the production of a series of benthic habitat data sets in Delaware, Florida, Maine, Massachusetts, New York, Rhode Island, the Virgin Islands, and Washington, as well as during Center-sponsored workshops on coral remote sensing and seagrass and aquatic habitat assessment. The original benthic habitat document, NOAA Coastal Change Analysis Program (C-CAP): Guidance for Regional Implementation (Dobson et al.), was published by the Department of Commerce in 1995. That document summarized procedures that were to be used by scientists throughout the United States to develop consistent and reliable coastal land cover and benthic habitat information. Advances in technology and new methodologies for generating these data created the need for this updated report, which builds upon the foundation of its predecessor. (PDF contains 39 pages)

    Doing Fieldwork on the Seafloor: Photogrammetric Techniques to yield 3D Visual Models from ROV Video

    Remotely Operated Vehicles (ROVs) have proven to be highly effective in recovering well-localized samples and observations from the seafloor. In the course of ROV deployments, however, huge amounts of video and photographic data are gathered, which hold tremendous potential for data mining. We present a new workflow based on industrial software to derive fundamental field-geology information, such as quantitative stratigraphy and tectonic structures, from ROV-based photo and video material. We demonstrate proof-of-principle tests for this workflow on video data collected during dives with the ROV Kiel6000 on a new hotspot volcanic field recently identified southwest of the island of Santo Antão in the Cape Verdes. Our workflow allows us to derive three-dimensional models of outcrops, facilitating quantitative measurements of joint orientation, bedding structure, grain-size comparison and photo mosaicking within a georeferenced framework. The compiled data facilitate volcanological and tectonic interpretations from hand-specimen to outcrop scales based on the quantified optical data. The demonstrated procedure is readily replicable and opens up possibilities for post-cruise “virtual fieldwork” on the seafloor
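    Before dive footage of this kind can be handed to a photogrammetry or structure-from-motion package, it is typically reduced to a set of sharp, well-spaced still frames. The sketch below is one plausible pre-processing step under that assumption, not the industrial workflow used in the study; the file paths, sampling interval and blur threshold are illustrative.

    import os
    import cv2  # pip install opencv-python

    def extract_keyframes(video_path, out_dir, every_n_seconds=2.0,
                          blur_threshold=100.0):
        """Sample frames from ROV dive footage at a fixed interval and drop
        blurry ones (low variance of the Laplacian), so the surviving stills
        can be fed to a photogrammetry package."""
        os.makedirs(out_dir, exist_ok=True)
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
        step = max(1, int(round(fps * every_n_seconds)))
        kept, idx = 0, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % step == 0:
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
                if sharpness >= blur_threshold:   # keep only reasonably sharp frames
                    cv2.imwrite(f"{out_dir}/frame_{idx:06d}.png", frame)
                    kept += 1
            idx += 1
        cap.release()
        return kept

    # Hypothetical paths -- substitute the actual dive footage and output folder.
    # extract_keyframes("dive_video.mp4", "stills", every_n_seconds=1.0)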

    Airborne vision-based attitude estimation and localisation

    Vision plays an integral part in a pilot's ability to navigate and control an aircraft. Therefore, Visual Flight Rules have been developed around the pilot's ability to see the environment outside of the cockpit in order to control the attitude of the aircraft, to navigate, and to avoid obstacles. The automation of these processes using a vision system could greatly increase the reliability and autonomy of unmanned aircraft and flight automation systems. This thesis investigates the development and implementation of a robust vision system which fuses inertial information with visual information in a probabilistic framework for the purpose of aircraft navigation. The appearance of the horizon is a strong visual indicator of the attitude of the aircraft. This leads to the first research area of this thesis, visual horizon attitude determination. An image processing method was developed to provide high-performance horizon detection and extraction from camera imagery. A number of horizon models were developed to link the detected horizon to the attitude of the aircraft with varying degrees of accuracy. The second area investigated in this thesis was visual localisation of the aircraft. A terrain-aided horizon model was developed to estimate the position and altitude as well as the attitude of the aircraft. This gives rough position estimates with highly accurate attitude information. The visual localisation accuracy was improved by incorporating ground-feature-based map-aided navigation. Road intersections were detected using a developed image processing algorithm and were then matched to a database to provide positional information. The developed vision system shows comparable performance to other non-vision-based systems while removing the dependence on external systems for navigation. The vision system and techniques developed in this thesis help to increase the autonomy of unmanned aircraft and flight automation systems for manned flight
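    The thesis links the detected horizon to aircraft attitude through horizon models of varying fidelity. As a rough illustration of the simplest such relationship (assuming a calibrated, forward-looking camera, a flat-Earth horizon and small angles; this is not one of the models developed in the thesis), the bank angle can be read from the tilt of the horizon line in the image and the pitch from its offset from the principal point:

    import numpy as np

    def attitude_from_horizon(x1, y1, x2, y2, cx, cy, focal_px):
        """Rough roll/pitch estimate from a horizon segment detected in an image.

        (x1, y1)-(x2, y2) -- endpoints of the detected horizon line (pixels)
        (cx, cy)          -- principal point of the camera (pixels)
        focal_px          -- focal length in pixels

        Signs depend on the chosen camera/body axis convention; image y is
        assumed to increase downwards.
        """
        # Roll: the horizon appears tilted in the image when the aircraft banks.
        roll = -np.arctan2(y2 - y1, x2 - x1)

        # Pitch: offset of the horizon from the principal point, measured
        # perpendicular to the horizon line and converted to an angle.
        mx, my = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        offset = (my - cy) * np.cos(roll) - (mx - cx) * np.sin(roll)
        pitch = np.arctan2(offset, focal_px)
        return np.degrees(roll), np.degrees(pitch)

    # Example: a slightly tilted horizon in a 640x480 image, focal length 500 px.
    print(attitude_from_horizon(0, 260, 639, 245, 320, 240, focal_px=500))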

    Situated interaction on spatial topics

    In this thesis, we present a model and an implementation for handling situated interaction on spatial topics, as well as several adaptation strategies to cope with common problems in real-world applications. The model is designed to incorporate situational factors into spatial reasoning processes at the most basic level and to facilitate its use in a wide range of applications. The implementation corresponds very closely to the structure of the model and was put to the test in a mobile tourist-guide scenario. The adaptation strategies address the lack of information and resource restrictions, as well as the problem of varying availability and quality of positional information

    Robust Modular Feature-Based Terrain-Aided Visual Navigation and Mapping

    The visual feature-based Terrain-Aided Navigation (TAN) system presented in this thesis addresses the problem of constraining the inertial drift introduced into the location estimate of Unmanned Aerial Vehicles (UAVs) in GPS-denied environments. The presented TAN system utilises salient visual features representing semantic or human-interpretable objects (roads, forest and water boundaries) from onboard aerial imagery and associates them with a database of reference features created a priori by applying the same feature detection algorithms to satellite imagery. Correlating the detected features with the reference features through a series of robust data association steps allows a localisation solution to be achieved with a finite absolute error bound defined by the certainty of the reference dataset. The feature-based Visual Navigation System (VNS) presented in this thesis was originally developed for a navigation application using simulated multi-year satellite image datasets. The extension of the system into the mapping domain, in turn, has been based on real (not simulated) flight data and imagery. The mapping study demonstrated the full potential of the system as a versatile tool for enhancing the accuracy of information derived from aerial imagery. Not only have visual features such as road networks, shorelines and water bodies been used to obtain a position 'fix'; they have also been used in reverse for accurate mapping of vehicles detected on the roads into an inertial space with improved precision. The combined correction of geo-coding errors and improved aircraft localisation formed a robust solution to the defence mapping application. A system of the proposed design will provide a complete independent navigation solution to an autonomous UAV and additionally give it an object-tracking capability
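    The heart of such a system is the data-association step: features detected in the onboard imagery are projected into map coordinates with the current (drifting) inertial estimate, matched against the georeferenced reference database, and the accepted matches yield a position correction. The sketch below is a deliberately simplified, hypothetical version of that idea (gated nearest-neighbour association followed by an averaged correction), not the robust association pipeline described in the thesis; every name and threshold is illustrative.

    import numpy as np

    def position_fix(detected, reference, prior_offset, gate_m=50.0):
        """Gated nearest-neighbour association of detected features (e.g. road
        intersections projected into map coordinates with the current inertial
        estimate) against a georeferenced reference database, followed by an
        averaged position correction.

        detected     -- (N, 2) detected feature positions, metres
        reference    -- (M, 2) reference feature positions, metres
        prior_offset -- (2,) prior guess of the position error
        gate_m       -- association gate; farther pairs are rejected as outliers
        """
        detected = np.asarray(detected, float) + prior_offset
        reference = np.asarray(reference, float)

        residuals = []
        for d in detected:
            dists = np.linalg.norm(reference - d, axis=1)
            j = np.argmin(dists)
            if dists[j] <= gate_m:                  # gating rejects gross outliers
                residuals.append(reference[j] - d)  # map-minus-estimate residual
        if not residuals:
            return prior_offset                     # no fix available this frame

        return prior_offset + np.mean(residuals, axis=0)

    # Toy example: the projected detections sit ~10 m west of the reference
    # intersections, so the recovered correction is ~10 m to the east.
    ref = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
    det = ref - np.array([10.0, 0.0])
    print(position_fix(det, ref, prior_offset=np.zeros(2)))   # -> [10.  0.]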

    Gaze behaviour and brain activation patterns during real-space navigation in hippocampal dysfunction


    The Behavioral Relevance of Landmark Texture for Honeybee Homing

    Honeybees visually pinpoint the location of a food source using landmarks. Studies on the role of visual memories have suggested that bees approach the goal by finding a close match between their current view and a memorized view of the goal location. The most relevant landmark features for this matching process seem to be their retinal positions, their size as defined by their edges, and their color. Recently, we showed that honeybees can use landmarks that are statically camouflaged, suggesting that motion cues are relevant as well. It is currently unclear how bees weight these different landmark features when accomplishing navigational tasks, and whether this depends on their saliency. Since natural objects are often distinguished by their texture, we investigate the behavioral relevance and the interplay of the spatial configuration and the texture of landmarks. We show that landmark texture is a feature that bees memorize, and that the opportunity to identify landmarks by their texture improves the bees’ navigational performance. Landmark texture is weighted more strongly than landmark configuration when it provides the bees with positional information and when the texture is salient. In the vicinity of the landmark, honeybees changed their flight behavior according to its texture
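    The view-matching idea referred to above is often formalised, in models of insect homing, as an image-difference computation between the current panoramic view and a stored snapshot. The toy sketch below (an assumption-laden illustration, not the analysis performed in this study) computes a rotational image difference over one-dimensional brightness panoramas to recover the heading at which the two views match best:

    import numpy as np

    def image_difference(current, snapshot):
        """Root-mean-square brightness difference between two panoramic views
        stored as 1-D arrays over azimuth."""
        return np.sqrt(np.mean((current - snapshot) ** 2))

    def best_matching_heading(current, snapshot):
        """Rotational image difference: rotate the current panorama over all
        azimuths and return the shift (in samples) that minimises the mismatch
        with the memorised snapshot."""
        diffs = [image_difference(np.roll(current, shift), snapshot)
                 for shift in range(len(current))]
        return int(np.argmin(diffs)), diffs

    # Toy panorama: the current view is the memorised view rotated by 30 samples.
    rng = np.random.default_rng(0)
    snapshot = rng.random(360)
    current = np.roll(snapshot, -30)
    shift, _ = best_matching_heading(current, snapshot)
    print(shift)  # -> 30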