
    Environmental modeling and recognition for an autonomous land vehicle

    An architecture for object modeling and recognition for an autonomous land vehicle is presented. Examples of objects of interest include terrain features, fields, roads, horizon features, and trees. The architecture is organized around a set of databases for generic object models and perceptual structures, a temporary memory for the instantiation of object and relational hypotheses, and a long-term memory for storing stable hypotheses that are affixed to the terrain representation. Multiple inference processes operate over these databases. The researchers describe the following components: the perceptual structure database, the grouping processes that operate over it, schemas, and the long-term terrain database. A processing example is given that matches predictions from the long-term terrain model to imagery, extracts significant perceptual structures for consideration as potential landmarks, and extracts a relational structure to update the long-term terrain database.
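    As a rough, hedged illustration of the hypothesis-promotion idea sketched in this abstract (class names, fields, and thresholds below are our own, not from the paper), a temporary memory of object hypotheses might be promoted into a long-term terrain store once they have accumulated enough stable support:

        from dataclasses import dataclass

        @dataclass
        class ObjectHypothesis:
            label: str              # e.g. "road", "field", "treeline", "horizon"
            position: tuple         # location in the terrain representation (x, y)
            confidence: float       # accumulated support from matched imagery
            observations: int = 1   # number of frames in which it was matched

        class TerrainMemory:
            """Temporary hypotheses are promoted to long-term memory once stable."""
            def __init__(self, promote_conf=0.8, promote_obs=3):
                self.temporary = []              # newly instantiated hypotheses
                self.long_term = []              # stable hypotheses affixed to the terrain map
                self.promote_conf = promote_conf
                self.promote_obs = promote_obs

            def add(self, hypothesis):
                self.temporary.append(hypothesis)

            def promote_stable(self):
                for h in list(self.temporary):
                    if h.confidence >= self.promote_conf and h.observations >= self.promote_obs:
                        self.temporary.remove(h)
                        self.long_term.append(h)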

    Selection and Recognition of Landmarks Using Terrain Spatiograms

    A team of robots working to explore and map an area may need to share information about landmarks so as to register their local maps and to plan effective exploration strategies. In previous papers we have introduced a combined image and spatial representation for landmarks: terrain spatiograms. We have shown that, for manually selected views, terrain spatiograms provide an effective, shared representation that allows for occlusion filtering and the combination of multiple views. In this paper, we present a landmark saliency architecture (LSA) for automatically selecting candidate landmarks. Using a dataset of 21 outdoor stereo images generated by LSA, we show that the terrain spatiogram representation reliably recognizes automatically selected landmarks. The terrain spatiogram results are shown to improve on two purely appearance-based approaches: template matching and image histogram matching.
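    Terrain spatiograms extend the classical spatiogram, an image histogram whose bins also record where their pixels lie, to stereo terrain data. As a hedged sketch of the underlying concept only (grayscale, single image, bin count chosen arbitrarily, not the paper's terrain version), a second-order spatiogram can be computed as:

        import numpy as np

        def spatiogram(image, bins=16):
            """Second-order spatiogram of an 8-bit grayscale image: per-bin weight,
            plus the spatial mean and covariance of the pixels falling in each bin."""
            h, w = image.shape
            ys, xs = np.mgrid[0:h, 0:w]
            coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
            idx = np.clip((image.ravel() / 256.0 * bins).astype(int), 0, bins - 1)
            counts = np.zeros(bins)
            means = np.zeros((bins, 2))
            covs = np.zeros((bins, 2, 2))
            for b in range(bins):
                pts = coords[idx == b]
                counts[b] = len(pts)
                if len(pts) > 1:
                    means[b] = pts.mean(axis=0)
                    covs[b] = np.cov(pts, rowvar=False)
            return counts / counts.sum(), means, covs

    Unlike the plain histogram used in the image-histogram baseline, the per-bin spatial means and covariances let two landmarks with similar intensity statistics but different spatial layouts be told apart.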

    Augmented UAS navigation in GPS denied terrain environments using synthetic vision

    GPS is a critical sensor for Unmanned Aircraft Systems (UAS) navigation due to its accuracy, global coverage, and small hardware footprint. However, GPS is subject to interruption or denial due to signal blockage or RF interference. In such a case, position, velocity and altitude (PVA) performance from other inertial and air data sensors is not sufficient for UAS platforms to continue their primary missions, especially for small UASs. Recently, image-based navigation has been developed to address GPS outages for UASs, since most of these platforms already include a camera as standard equipage. This thesis develops a novel, automated UAS navigation augmentation scheme, which uses publicly available open-source geo-referenced vector map data, in conjunction with real-time optical imagery from an on-board monocular camera, to augment UAS navigation in GPS-denied terrain environments. The main idea is to analyze and use terrain drainage patterns for GPS-denied navigation of small UASs, such as ScanEagle, using a down-looking fixed monocular imager. We leverage the analogy between terrain drainage patterns and human fingerprints to match local drainage patterns, in real time, to GPU (Graphics Processing Unit) rendered parallax occlusion maps of geo-registered radar returns (GRRR). The GRRR data are assumed to be loaded on board the aircraft pre-mission, so as not to require a scanning aperture radar during the mission. Once a successful match is made, a final PVA solution can be obtained from the extrinsic matrix of the camera using a known lens model [1]. Our approach allows extension of UAS missions to GPS-denied terrain areas, with no assumption of human-made geographic objects. We study the influence of the granularity of terrain drainage patterns on the performance of our minutiae-based terrain matching approach. Based on experimental observations, we conclude that our approach delivers satisfactory performance. We identify the conditions on the input images, based on UAS flight altitude, needed to achieve the desired performance.
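    As a hedged sketch of the final pose-recovery step mentioned above (this is not the thesis's actual pipeline; the function and variable names are ours), once drainage-pattern minutiae detected in the image have been matched to geo-registered 3D points, the camera extrinsics, and from them a position fix, can be recovered with a standard PnP solve given a calibrated lens model:

        import numpy as np
        import cv2

        def position_from_matches(world_pts, image_pts, K, dist_coeffs=None):
            """Recover the camera position from >= 4 matched 3D terrain points and
            their 2D image observations via a PnP solve (assumes a calibrated camera K)."""
            world_pts = np.asarray(world_pts, dtype=np.float64)   # Nx3, geo-registered points
            image_pts = np.asarray(image_pts, dtype=np.float64)   # Nx2, pixel coordinates
            if dist_coeffs is None:
                dist_coeffs = np.zeros(5)                          # assume no lens distortion
            ok, rvec, tvec = cv2.solvePnP(world_pts, image_pts, K, dist_coeffs)
            if not ok:
                return None
            R, _ = cv2.Rodrigues(rvec)        # world-to-camera rotation
            return (-R.T @ tvec).ravel()      # camera centre in the world frame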

    Vision-Based Terrain Relative Navigation on High-Altitude Balloon and Sub-Orbital Rocket

    We present an experimental analysis of a camera-based approach to high-altitude navigation that associates mapped landmarks from a satellite image database with camera images and leverages inertial sensors between camera frames. We evaluate the performance of both a sideways-tilted and a downward-facing camera on data collected from a World View Enterprises high-altitude balloon, beginning at an altitude of 33 km and descending to near ground level (4.5 km) over 1.5 hours of flight time. We demonstrate less than 290 meters of average position error over a trajectory of more than 150 kilometers. In addition to showing performance across a range of altitudes, we demonstrate the robustness of the Terrain Relative Navigation (TRN) method to rapid rotations of the balloon, in some cases exceeding 20 degrees per second, and to camera obstructions caused by both cloud coverage and cords swaying underneath the balloon. Additionally, we evaluate performance on data collected by two cameras inside the capsule of Blue Origin's New Shepard rocket on payload flight NS-23, traveling at speeds up to 880 km/hr, and demonstrate less than 55 meters of average position error. Comment: Published in 2023 AIAA SciTech.
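    As a toy, hedged sketch of the leverage-inertial-sensors-between-frames idea (this is not the paper's estimator; the blending weight and data layout are invented for illustration), inertial displacement estimates can carry the position forward between camera frames, with a TRN fix blended in whenever landmark matching succeeds:

        import numpy as np

        def fuse_trajectory(imu_deltas, trn_fixes, x0, alpha=0.7):
            """Toy 2D position fuser: propagate with per-frame inertial displacement
            estimates and blend in a TRN position fix when one is available.
            trn_fixes maps frame index -> matched map position; alpha weights the fix."""
            x = np.asarray(x0, dtype=float)
            track = [x.copy()]
            for k, delta in enumerate(imu_deltas):
                x = x + np.asarray(delta, dtype=float)       # inertial propagation
                fix = trn_fixes.get(k)                        # landmark-matched position, if any
                if fix is not None:
                    x = (1 - alpha) * x + alpha * np.asarray(fix, dtype=float)
                track.append(x.copy())
            return np.array(track)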

    A novel visualisation paradigm for three-dimensional map-based mobile services

    Internship carried out at NDrive Navigation Systems, S.A. Integrated master's thesis. Informatics and Computing Engineering. Faculty of Engineering, University of Porto. 200

    ERTS imagery as data source for updating aeronautical charts

    There are no author-identified significant results in this report

    An approach for real world data modelling with the 3D terrestrial laser scanner for built environment

    Capturing and modelling 3D information of the built environment is a big challenge. A number of techniques and technologies are now in use, including EDM, GPS, photogrammetry, remote sensing, and traditional building surveying. However, these technologies are not always practical and efficient in terms of time, cost, and accuracy. Furthermore, a multidisciplinary knowledge base, created from studies and research on regeneration aspects, is fundamental: historical, architectural, archaeological, environmental, social, economic, etc. An adequate diagnosis of regeneration requires that buildings and their surroundings be described by means of documentation and plans. At present, however, this is far removed from the real situation, since it is often extremely difficult to obtain full documentation and cartography of acceptable quality: the available material on constructive pathologies and systems is frequently insufficient or deficient (plans that simply reflect levels, isolated photographs, etc.). Sometimes the information does exist, but this fact is not known or the information is not easily accessible, leading to unnecessary duplication of effort and resources. In this paper, we discuss 3D laser scanning technology, which can acquire high-density point data accurately and quickly. The scanner can digitize the 3D information of real-world objects such as buildings, trees, and terrain down to millimetre detail. It can therefore benefit the refurbishment process in regeneration of the built environment and is a potential solution to the challenges above. The paper introduces an approach for scanning buildings, processing the raw point cloud data, and modelling for CAD extraction and building object classification via pattern matching in IFC (Industry Foundation Classes) format. The approach presented here can lead to parametric design and Building Information Modelling (BIM) for existing structures. Two case studies demonstrate the use of laser scanner technology in the built environment: the Jactin House building in East Manchester and the Peel Building on the campus of the University of Salford. Through these case studies, the use of laser scanners is explained, and their integration with various technologies and systems is explored for professionals in the built environment.
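    The paper's pipeline goes from raw point clouds to classified building objects in IFC; as a much smaller, hedged illustration of a typical early processing step (not the authors' method; parameters are illustrative), a dominant planar surface such as a wall or floor can be segmented from the scan with a RANSAC plane fit:

        import numpy as np

        def ransac_plane(points, iterations=500, threshold=0.01, seed=None):
            """Fit a dominant plane (e.g. a wall or floor) to an Nx3 point cloud with
            RANSAC; returns (normal, d, inlier_mask) for the plane n.x + d = 0,
            with threshold in the same units as the points (here metres)."""
            rng = np.random.default_rng(seed)
            points = np.asarray(points, dtype=float)
            best_inliers = np.zeros(len(points), dtype=bool)
            best_model = None
            for _ in range(iterations):
                sample = points[rng.choice(len(points), 3, replace=False)]
                n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
                norm = np.linalg.norm(n)
                if norm < 1e-12:
                    continue                                  # degenerate (collinear) sample
                n = n / norm
                d = -n.dot(sample[0])
                inliers = np.abs(points @ n + d) < threshold
                if inliers.sum() > best_inliers.sum():
                    best_inliers, best_model = inliers, (n, d)
            if best_model is None:
                raise ValueError("no valid plane found")
            return best_model[0], best_model[1], best_inliers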

    Overcoming the Challenges Associated with Image-based Mapping of Small Bodies in Preparation for the OSIRIS-REx Mission to (101955) Bennu

    The OSIRIS-REx Asteroid Sample Return Mission is the third mission in NASA's New Frontiers Program and is the first U.S. mission to return samples from an asteroid to Earth. The most important decision ahead of the OSIRIS-REx team is the selection of a prime sample-site on the surface of asteroid (101955) Bennu. Mission success hinges on identifying a site that is safe and has regolith that can readily be ingested by the spacecraft's sampling mechanism. To inform this mission-critical decision, the surface of Bennu is mapped using the OSIRIS-REx Camera Suite and the images are used to develop several foundational data products. Acquiring the necessary inputs to these data products requires observational strategies that are defined specifically to overcome the challenges associated with mapping a small irregular body. We present these strategies in the context of assessing candidate sample-sites at Bennu according to a framework of decisions regarding the relative safety, sampleability, and scientific value across the asteroid's surface. To create data products that aid these assessments, we describe the best practices developed by the OSIRIS-REx team for image-based mapping of irregular small bodies. We emphasize the importance of using 3D shape models and the ability to work in body-fixed rectangular coordinates when dealing with planetary surfaces that cannot be uniquely addressed by body-fixed latitude and longitude. Comment: 31 pages, 10 figures, 2 tables.
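    On an irregular, non-convex body such as Bennu, a single latitude/longitude pair can correspond to more than one surface point along the same radial direction, which is why the abstract stresses working in body-fixed rectangular coordinates. As a hedged illustration of the coordinate relationship only (not an OSIRIS-REx data product), the forward conversion from latitude, longitude, and radial distance to body-fixed Cartesian coordinates is:

        import numpy as np

        def latlon_to_body_fixed(lat_deg, lon_deg, radius):
            """Convert body-fixed latitude/longitude (degrees) and radial distance to
            rectangular body-fixed coordinates (x, y, z); the inverse mapping is what
            becomes ambiguous when a radial ray crosses the surface more than once."""
            lat, lon = np.radians(lat_deg), np.radians(lon_deg)
            return np.array([
                radius * np.cos(lat) * np.cos(lon),
                radius * np.cos(lat) * np.sin(lon),
                radius * np.sin(lat),
            ])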

    A preliminary experiment definition for video landmark acquisition and tracking

    Six scientific objectives/experiments were derived, consisting of agriculture/forestry/range resources, land use, geology/mineral resources, water resources, marine resources, and environmental surveys. Computer calculations were then made of the spectral radiance signature of each of 25 candidate targets as seen by a satellite sensor system. An imaging system capable of recognizing, acquiring, and tracking specific generic types of surface features was defined. A preliminary experiment definition and design of a Video Landmark Acquisition and Tracking system is given. This device will search a 10-mile swath while orbiting the Earth, looking for land/water interfaces such as coastlines and rivers.
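    The report defines the detector around spectral radiance signatures of candidate targets; as a hedged, modern sketch of the land/water interface idea only (the use of a near-infrared band and the threshold value are our assumptions, not from the report), water's low near-infrared reflectance makes coastlines and rivers detectable by thresholding and boundary extraction:

        import numpy as np

        def land_water_interface(nir_band, threshold=0.1):
            """Toy land/water interface detector: threshold a near-infrared reflectance
            band (water reflects little NIR) and mark water pixels with a land neighbour."""
            water = nir_band < threshold                  # boolean water mask
            edges = np.zeros_like(water, dtype=bool)
            # a pixel lies on the interface if it is water and touches land
            edges[1:-1, 1:-1] = water[1:-1, 1:-1] & (
                ~water[:-2, 1:-1] | ~water[2:, 1:-1] |
                ~water[1:-1, :-2] | ~water[1:-1, 2:]
            )
            return edges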