2,141 research outputs found

    Robust Modular Feature-Based Terrain-Aided Visual Navigation and Mapping

    Get PDF
    The visual feature-based Terrain-Aided Navigation (TAN) system presented in this thesis addresses the problem of constraining the inertial drift introduced into the location estimate of Unmanned Aerial Vehicles (UAVs) in GPS-denied environments. The presented TAN system utilises salient visual features representing semantic or human-interpretable objects (roads, forest and water boundaries) extracted from onboard aerial imagery and associates them with a database of reference features created a priori by applying the same feature detection algorithms to satellite imagery. Correlating the detected features with the reference features through a series of robust data association steps yields a localisation solution with a finite absolute precision bound defined by the certainty of the reference dataset. The feature-based Visual Navigation System (VNS) presented in this thesis was originally developed for a navigation application using simulated multi-year satellite image datasets. The extension of the system into the mapping domain, in turn, has been based on real (not simulated) flight data and imagery. The mapping study demonstrated the full potential of the system as a versatile tool for enhancing the accuracy of information derived from aerial imagery. Not only have visual features such as road networks, shorelines and water bodies been used to obtain a position 'fix', they have also been used in reverse to map vehicles detected on the roads into inertial space with improved precision. The combined correction of geo-coding errors and improved aircraft localisation forms a robust solution for the defence mapping application. A system of the proposed design will provide a complete, independent navigation solution to an autonomous UAV and additionally give it object tracking capability.
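    The abstract does not spell out the association algorithm, but the core idea (matching semantically labelled features against a geo-referenced database and turning the matches into a position correction) can be sketched as follows. This is a minimal, hypothetical illustration with invented function names, thresholds and data layout, not the thesis's actual implementation.

```python
# Hypothetical sketch: associate features detected in an aerial image (expressed
# in world coordinates via the current, drifting inertial estimate) with a
# geo-referenced reference database, and estimate a position correction from the
# matched pairs. Names and the 50 m gate are illustrative assumptions.
import numpy as np

def associate_and_correct(detected_xy, detected_labels,
                          reference_xy, reference_labels, gate_m=50.0):
    """Nearest-neighbour association with a semantic label check and a distance
    gate; returns an estimated (dx, dy) correction to the vehicle position
    (zero if no feature passes the gate)."""
    detected_xy = np.asarray(detected_xy, float)
    reference_xy = np.asarray(reference_xy, float)
    reference_labels = np.asarray(reference_labels)
    offsets = []
    for p, lab in zip(detected_xy, detected_labels):
        same_class = reference_labels == lab            # only match like with like
        if not same_class.any():
            continue
        cand = reference_xy[same_class]
        d = np.linalg.norm(cand - p, axis=1)
        j = int(np.argmin(d))
        if d[j] < gate_m:                               # reject gross outliers
            offsets.append(cand[j] - p)
    if not offsets:
        return np.zeros(2)                              # no position fix available
    return np.median(np.asarray(offsets), axis=0)       # robust to bad associations
```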

    Naval Target Classification by Fusion of Multiple Imaging Sensors Based on the Confusion Matrix

    Get PDF
    This paper presents an algorithm for the classification of targets based on the fusion of the class information provided by different imaging sensors. The outputs of the different sensors are combined to obtain an accurate estimate of the target class. The performance of each imaging sensor is modelled by means of its confusion matrix (CM), whose elements are the conditional error probabilities and the conditional correct-classification probabilities. These probabilities are used by each sensor to make a decision on the target class. A final decision on the class is then made using a suitable fusion rule to combine the local decisions provided by the sensors. The overall performance of the classification process is evaluated by means of the "fused" confusion matrix, i.e. the CM pertinent to the final decision on the target class. Two fusion rules are considered: a majority voting (MV) rule and a maximum likelihood (ML) rule. A case study is then presented in which the developed algorithm is applied to three imaging sensors located on a generic air platform: a video camera, an infrared (IR) camera, and a spotlight Synthetic Aperture Radar (SAR).
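    The two fusion rules named in the abstract can be illustrated with a short sketch. The confusion matrices and local decisions below are invented for the example; the convention assumed is C[i, j] = P(sensor declares class j | true class i).

```python
# Hedged sketch of MV and ML fusion of local class decisions, each sensor being
# described by its confusion matrix. Matrices, priors and decisions are
# illustrative only, not taken from the paper.
import numpy as np
from collections import Counter

def ml_fusion(decisions, confusion_matrices, priors=None):
    """Maximum-likelihood fusion: pick the class maximising the product of the
    per-sensor likelihoods of the observed local decisions."""
    n_classes = confusion_matrices[0].shape[0]
    priors = np.ones(n_classes) / n_classes if priors is None else np.asarray(priors)
    log_post = np.log(priors)
    for d, C in zip(decisions, confusion_matrices):
        log_post += np.log(C[:, d] + 1e-12)   # likelihood of this decision per true class
    return int(np.argmax(log_post))

def mv_fusion(decisions):
    """Majority-voting fusion (ties broken by the first most common class)."""
    return Counter(decisions).most_common(1)[0][0]

# Example with three sensors (e.g. video, IR, SAR) and three target classes
C_video = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])
C_ir    = np.array([[0.7, 0.2, 0.1], [0.2, 0.7, 0.1], [0.1, 0.2, 0.7]])
C_sar   = np.array([[0.9, 0.05, 0.05], [0.05, 0.9, 0.05], [0.05, 0.05, 0.9]])
local_decisions = [0, 1, 0]                   # each sensor's declared class
print(ml_fusion(local_decisions, [C_video, C_ir, C_sar]))
print(mv_fusion(local_decisions))
```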

    Basics of Geomatics

    Full text link

    Sea Ice Extraction via Remote Sensed Imagery: Algorithms, Datasets, Applications and Challenges

    Full text link
    Deep learning, a dominant technique in artificial intelligence, has transformed image understanding over the past decade. As a consequence, the sea ice extraction (SIE) problem has entered a new era. We present a comprehensive review of four important aspects of SIE: algorithms, datasets, applications, and future trends. Our review covers research published from 2016 to the present, with a specific focus on deep learning-based approaches from the last five years. We divide the related algorithms into three categories: classical image segmentation approaches, machine learning-based approaches, and deep learning-based methods. We review the accessible ice datasets, including SAR-based datasets, optical datasets, and others. Applications are presented in four areas: climate research, navigation, geographic information system (GIS) production, and others. The review also provides insightful observations and inspiring future research directions.
    Comment: 24 pages, 6 figures
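    As a rough illustration of the "classical image segmentation" category mentioned in the review, a simple intensity threshold can already separate ice from open water in a SAR backscatter image. The sketch below uses Otsu thresholding on synthetic data and is a generic baseline, not a method taken from the review.

```python
# Illustrative classical baseline for sea ice extraction: Otsu threshold on a
# single-channel SAR backscatter image. The synthetic scene is a stand-in for
# real data.
import numpy as np
from skimage.filters import threshold_otsu

def classical_sie(backscatter_db):
    """Binary ice mask from a SAR backscatter image (dB)."""
    t = threshold_otsu(backscatter_db)
    return backscatter_db > t            # ice typically backscatters more strongly

scene = np.random.normal(-20.0, 2.0, (256, 256))   # open water
scene[64:192, 64:192] += 10.0                      # brighter "ice" patch
ice_mask = classical_sie(scene)
print(f"ice fraction: {ice_mask.mean():.2f}")
```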

    Aerial Map-Based Navigation Using Semantic Segmentation and Pattern Matching

    Full text link
    This paper proposes a novel approach to map-based navigation for unmanned aircraft. The proposed system performs label-to-label matching, rather than image-to-image matching, between aerial images and a map database. Using semantic segmentation, ground objects are labelled, and the configuration of the objects is used to find the corresponding location in the map database. The use of deep learning as a tool for extracting high-level features reduces the image-based localization problem to a pattern matching problem. This paper proposes a pattern matching algorithm which does not require altitude information or a camera model to estimate the absolute horizontal position. A feasibility analysis with simulated images shows that the proposed map-based navigation can be realized with the proposed pattern matching algorithm and that it is able to provide positions from the labelled objects.
    Comment: 6 pages, 4 figures
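    A much simplified sketch of the label-to-label matching idea is given below: slide a patch of semantic labels produced by onboard segmentation over a labelled map and score each candidate position by label agreement. Unlike the paper's algorithm, this toy version assumes known scale and heading; all names and label values are illustrative.

```python
# Toy label-to-label pattern matching: exhaustive search for the map position
# where the observed semantic label patch best agrees with the map database.
import numpy as np

def match_labels(observed, map_labels):
    """Return ((row, col), score) of the best-matching position of the observed
    label patch inside the labelled map."""
    oh, ow = observed.shape
    mh, mw = map_labels.shape
    best, best_rc = -1.0, (0, 0)
    for r in range(mh - oh + 1):
        for c in range(mw - ow + 1):
            window = map_labels[r:r + oh, c:c + ow]
            score = np.mean(window == observed)   # fraction of matching labels
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc, best

# Toy example: labels 0 = background, 1 = road, 2 = water
map_labels = np.zeros((100, 100), dtype=int)
map_labels[40, :] = 1                       # a road crossing the map
map_labels[45:55, 55:65] = 2                # a small lake next to it
observed = map_labels[35:55, 50:70].copy()  # the patch the segmenter "saw"
print(match_labels(observed, map_labels))   # -> ((35, 50), 1.0)
```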

    Performance of Unsupervised Change Detection Method Based on PSO and K-means Clustering for SAR Images

    Get PDF
    This paper presents an unsupervised change detection method that produces a more accurate change map from imbalanced SAR images of the same land cover. The method uses the particle swarm optimization (PSO) algorithm to segment the images into layers, which are classified with a Gabor wavelet filter and then clustered by K-means to generate the new change map. Tests confirm the effectiveness and efficiency of the method by comparing the obtained results with those of other methods. Integrating PSO with the Gabor filter and K-means provides greater accuracy in detecting even small changes in the objects and terrain of a SAR image, and also reduces the processing time.
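    The overall pipeline (difference image, Gabor texture features, K-means clustering into changed and unchanged pixels) can be sketched as below. The PSO-based layer segmentation from the paper is not reproduced; this is only an illustrative simplification with assumed parameters.

```python
# Simplified SAR change detection sketch: log-ratio difference image, Gabor
# texture responses as per-pixel features, K-means into two clusters. The PSO
# segmentation step from the paper is deliberately omitted.
import numpy as np
from skimage.filters import gabor
from sklearn.cluster import KMeans

def change_map(img1, img2, frequencies=(0.1, 0.3)):
    diff = np.abs(np.log((img1 + 1e-6) / (img2 + 1e-6)))   # log-ratio difference
    feats = [diff.ravel()]
    for f in frequencies:
        for theta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
            real, _ = gabor(diff, frequency=f, theta=theta)  # texture response
            feats.append(real.ravel())
    X = np.stack(feats, axis=1)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    labels = labels.reshape(diff.shape)
    # Call the cluster with the larger mean difference the "changed" one
    changed = labels == np.argmax([diff[labels == k].mean() for k in (0, 1)])
    return changed

before = np.random.gamma(4.0, 1.0, (64, 64))     # speckle-like synthetic scene
after = before.copy()
after[20:40, 20:40] *= 3.0                       # simulated change
print(change_map(before, after)[20:40, 20:40].mean())   # mostly flagged as changed
```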

    Autonomous Navigation for Mars Exploration

    Get PDF
    Autonomous navigation technology uses multiple sensors to perceive and estimate the spatial location of a space probe or Mars rover and to guide its motion in orbit or on the Martian surface. In this chapter, autonomous navigation methods for Mars exploration are reviewed. First, the current development status of autonomous navigation technology is summarized, and the popular autonomous navigation methods, such as inertial navigation, celestial navigation, visual navigation, and integrated navigation, are introduced. Second, the application of autonomous navigation technology to Mars exploration is presented, with the corresponding issues in the Entry, Descent and Landing (EDL) phase and the Mars surface roving phase mainly discussed. Third, some challenges and development trends of autonomous navigation technology are addressed.
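    The "integrated navigation" idea mentioned above commonly amounts to fusing a drifting dead-reckoned estimate with an absolute fix. A generic, minimal Kalman filter position update, with illustrative noise values not taken from the chapter, looks like this:

```python
# Generic Kalman filter position update fusing an inertial prediction with an
# absolute fix (e.g. from visual or celestial navigation). Models and numbers
# are illustrative assumptions.
import numpy as np

def kf_position_update(x_pred, P_pred, z_fix, R):
    """x_pred, P_pred: predicted 2D position and covariance from dead reckoning.
    z_fix, R: absolute position measurement and its covariance."""
    H = np.eye(2)                                  # the fix measures position directly
    S = H @ P_pred @ H.T + R                       # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x_pred + K @ (z_fix - H @ x_pred)          # corrected state
    P = (np.eye(2) - K @ H) @ P_pred               # corrected covariance
    return x, P

x, P = kf_position_update(np.array([100.0, 50.0]),   # drifting inertial estimate
                          np.diag([25.0, 25.0]),     # grown uncertainty
                          np.array([96.0, 53.0]),    # absolute position fix
                          np.diag([4.0, 4.0]))
print(x)   # pulled toward the fix, weighted by the covariances
```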

    Remote Sensing methods for power line corridor surveys

    Get PDF
    To secure uninterrupted distribution of electricity, effective monitoring and maintenance of power lines are needed. This literature review article aims to give a wide overview of the possibilities provided by modern remote sensing sensors in power line corridor surveys and to discuss the potential and limitations of different approaches. Monitoring of both power line components and the vegetation around them is included. Remotely sensed data sources discussed in the review include synthetic aperture radar (SAR) images, optical satellite and aerial images, thermal images, airborne laser scanner (ALS) data, land-based mobile mapping data, and unmanned aerial vehicle (UAV) data. The review shows that most previous studies have concentrated on the mapping and analysis of network components. In particular, automated extraction of power line conductors has received much attention, and promising results have been reported. For example, accuracy levels above 90% have been presented for the extraction of conductors from ALS data or aerial images. However, in many studies the datasets have been small and numerical quality analyses have been omitted. Mapping of vegetation near power lines has been a less common research topic than mapping of the components, but several studies have also been carried out in this field, especially using optical aerial and satellite images. Based on the review, we conclude that in future research more attention should be given to the integrated use of various data sources to benefit from the different techniques in an optimal way. Knowledge in related fields, such as vegetation monitoring from ALS, SAR and optical image data, should be better exploited to develop useful monitoring approaches. Special attention should be given to rapidly developing remote sensing techniques such as UAVs and laser scanning from airborne and land-based platforms. To demonstrate and verify the capabilities of automated monitoring approaches, large tests in various environments and practical monitoring conditions are needed. These should include careful quality analyses and comparisons between different data sources, methods and individual algorithms.