
    Combining Multiple Algorithms for Road Network Tracking from Multiple Source Remotely Sensed Imagery: a Practical System and Performance Evaluation

    In light of the increasing availability of commercial high-resolution imaging sensors, automatic interpretation tools are needed to extract road features. Many approaches for road extraction are currently available, but it is acknowledged that no single method can successfully extract all types of roads from any remotely sensed imagery. In this paper, a novel classification of roads is proposed, based on both the roads' geometrical and radiometric properties and the characteristics of the sensors. Subsequently, a general road tracking framework is proposed, and one or more suitable road trackers are designed or combined for each type of road. Extensive experiments are performed to extract roads from aerial/satellite imagery, and the results show that a combination strategy can automatically extract more than 60% of the total roads from very high resolution imagery such as QuickBird and DMC images, with a time saving of approximately 20% and acceptable spatial accuracy. It is shown that a combination of multiple algorithms is more reliable, more efficient and more robust for extracting road networks from multiple-source remotely sensed imagery than the individual algorithms.
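
    As a rough illustration of the combination idea described above, the following Python sketch dispatches a seed point to a tracker selected by a (road class, sensor) table. The road taxonomy, tracker names and table entries are hypothetical placeholders, not the paper's actual trackers.

    ```python
    from typing import Callable, Dict, Tuple

    Tracker = Callable[[object, Tuple[int, int]], list]

    def profile_matching_tracker(image, seed):
        """Placeholder: follow a bright ribbon by matching cross-sectional profiles."""
        return [seed]

    def edge_following_tracker(image, seed):
        """Placeholder: follow parallel road edges detected near the seed."""
        return [seed]

    # Illustrative (road class, sensor) -> tracker registry; the paper's taxonomy differs.
    TRACKER_TABLE: Dict[Tuple[str, str], Tracker] = {
        ("ribbon", "QuickBird"): profile_matching_tracker,
        ("ribbon", "DMC"): profile_matching_tracker,
        ("edged", "QuickBird"): edge_following_tracker,
    }

    def track_road(image, seed, road_class, sensor):
        """Dispatch the seed to the tracker registered for this road/sensor pair,
        falling back to profile matching when no specific tracker is registered."""
        tracker = TRACKER_TABLE.get((road_class, sensor), profile_matching_tracker)
        return tracker(image, seed)
    ```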

    Airport Detection in SAR Images via Salient Line Segment Detector and Edge-Oriented Region Growing

    Airport detection in synthetic aperture radar (SAR) images has attracted considerable attention in the field of remote sensing. Because other salient objects have geometrical features similar to those of airports, traditional methods often generate false detections. In order to capture the geometrical features of airports and suppress the influence of irrelevant objects, we propose a novel method for airport detection in SAR images. First, a salient line segment detector is constructed to extract salient line segments from the SAR images. Second, we obtain the airport support regions by grouping these line segments according to the commonality of their geometrical features. Finally, we design an edge-oriented region growing (EORG) algorithm, in which growing seeds are selected from the airport support regions with the help of edge information in the SAR images. Using EORG, the airport region can be mapped by performing region growing from these seeds. We conduct experiments on real radar images to validate the effectiveness of our method. The experimental results demonstrate that our method can acquire more accurate locations and contours of airports than several state-of-the-art airport detection algorithms.
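
    The EORG step can be pictured as seeded region growing that refuses to cross detected edges. The Python sketch below is a minimal, generic illustration of that idea rather than the authors' implementation; the function name, 4-neighbourhood and intensity-difference criterion are assumptions.

    ```python
    import numpy as np
    from collections import deque

    def edge_oriented_region_growing(image, edge_map, seeds, tol=10.0):
        """Grow a region from seed pixels, refusing to cross strong edges.

        image    : 2-D array of SAR intensities (assumed already despeckled).
        edge_map : boolean array, True where an edge was detected.
        seeds    : list of (row, col) seeds inside the airport support regions.
        tol      : maximum intensity difference from the running region mean.
        """
        grown = np.zeros(image.shape, dtype=bool)
        queue = deque(seeds)
        region_sum, region_count = 0.0, 0
        for r, c in seeds:
            grown[r, c] = True
            region_sum += float(image[r, c])
            region_count += 1

        while queue:
            r, c = queue.popleft()
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]):
                    continue
                if grown[nr, nc] or edge_map[nr, nc]:   # stop at detected edges
                    continue
                mean = region_sum / region_count
                if abs(float(image[nr, nc]) - mean) <= tol:
                    grown[nr, nc] = True
                    region_sum += float(image[nr, nc])
                    region_count += 1
                    queue.append((nr, nc))
        return grown
    ```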

    Robust Modular Feature-Based Terrain-Aided Visual Navigation and Mapping

    The visual feature-based Terrain-Aided Navigation (TAN) system presented in this thesis addresses the problem of constraining the inertial drift introduced into the location estimate of Unmanned Aerial Vehicles (UAVs) in GPS-denied environments. The presented TAN system utilises salient visual features representing semantic or human-interpretable objects (roads, forest and water boundaries) from onboard aerial imagery and associates them with a database of reference features created a priori through application of the same feature detection algorithms to satellite imagery. Correlation of the detected features with the reference features via a series of robust data association steps allows a localisation solution to be achieved with a finite absolute precision bound defined by the certainty of the reference dataset. The feature-based Visual Navigation System (VNS) presented in this thesis was originally developed for a navigation application using simulated multi-year satellite image datasets. The extension of the system into the mapping domain, in turn, has been based on real (not simulated) flight data and imagery. In the mapping study, the full potential of the system as a versatile tool for enhancing the accuracy of information derived from aerial imagery has been demonstrated. Not only have the visual features, such as road networks, shorelines and water bodies, been used to obtain a position 'fix'; they have also been used in reverse for accurate mapping of vehicles detected on the roads into inertial space with improved precision. Combined correction of the geo-coding errors and improved aircraft localisation formed a robust solution to the defence mapping application. A system of the proposed design will provide a complete independent navigation solution to an autonomous UAV and additionally give it object tracking capability.
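
    A core operation in such a system is associating detected features with geo-referenced reference features and turning the matches into a position correction. The sketch below shows the simplest possible version, nearest-neighbour gating followed by an averaged offset; it is an assumed simplification, not the thesis' robust data-association pipeline.

    ```python
    import numpy as np

    def associate_and_fix(detected_xy, reference_xy, gate=30.0):
        """Associate detected feature points (in the drifting inertial frame) with
        geo-referenced reference features and return a position correction.

        detected_xy  : (N, 2) array of detected feature coordinates, metres.
        reference_xy : (M, 2) array of reference feature coordinates, metres.
        gate         : maximum association distance (data-association gate), metres.
        """
        corrections = []
        for p in detected_xy:
            d = np.linalg.norm(reference_xy - p, axis=1)
            j = int(np.argmin(d))
            if d[j] <= gate:                      # simple nearest-neighbour gating
                corrections.append(reference_xy[j] - p)
        if not corrections:
            return np.zeros(2)                    # no fix available this frame
        return np.mean(corrections, axis=0)       # average offset = position 'fix'
    ```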

    Road Feature Extraction from High Resolution Aerial Images Upon Rural Regions Based on Multi-Resolution Image Analysis and Gabor Filters

    Accurate, detailed and up-to-date road information is of special importance in geo-spatial databases, as it is used in a variety of applications such as vehicle navigation, traffic management and advanced driver assistance systems (ADAS). The commercial road maps used today for road navigation or in geographical information systems (GIS) are based on linear road centrelines represented in vector format with poly-lines (i.e., series of nodes and shape points connected by segments), which show a serious lack of accuracy, content, and completeness for applicability at the sub-road level. For instance, the accuracy level of present standard maps is around 5 to 20 meters. The roads/streets in digital maps are represented as line segments rendered with different colours and widths. However, the widths of these line segments do not necessarily represent the actual road widths accurately. Another problem with existing road maps is that few precise sub-road details, such as lane markings and stop lines, are included, whereas such sub-road information is crucial for applications such as lane departure warning or lane-based vehicle navigation. Furthermore, the vast majority of road maps are modelled in 2D space, which means that some complex road scenes, such as overpasses and multi-level road systems, cannot be effectively represented. In addition, the lack of elevation information makes it infeasible to carry out applications such as driving simulation and 3D vehicle navigation.
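
    Gabor-filter-based road extraction typically applies an oriented filter bank and keeps the strongest per-pixel response, since roads appear as elongated, oriented structures. The sketch below shows such a bank using OpenCV; the kernel parameters and orientation count are illustrative assumptions, not values from this work.

    ```python
    import numpy as np
    import cv2

    def gabor_road_response(gray, n_orientations=8, ksize=31, sigma=4.0,
                            lambd=10.0, gamma=0.5):
        """Return the per-pixel maximum Gabor response over several orientations.

        Taking the maximum over an orientation bank highlights road-like,
        elongated structures regardless of their direction.
        """
        gray = gray.astype(np.float32) / 255.0
        response = np.zeros_like(gray)
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma)
            filtered = cv2.filter2D(gray, cv2.CV_32F, kernel)
            response = np.maximum(response, filtered)
        return response
    ```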

    Detection and Identification of Camouflaged Targets using Hyperspectral and LiDAR data

    Camouflaging is the process of merging a target with its background with the aim of reducing or delaying its detection. It can be done using different materials and methods, such as camouflaging nets and paints. Defence applications often require quick detection of camouflaged targets in a dynamic battlefield scenario. Although hyperspectral imaging (HSI) data may facilitate the detection of camouflaged targets, detection is complicated by issues such as spectral variability and high dimensionality. This paper presents a framework for the detection of camouflaged targets that allows military analysts to coordinate and utilise expert knowledge for resolving camouflaged targets using remotely sensed data. The desired camouflaged target (a set of three chairs under a camouflaging net) has been resolved in three steps. First, hyperspectral data processing detects the locations of potential camouflaged targets; it narrows these locations down by detecting the camouflaging net using independent component analysis and spectral matching algorithms. Second, detection and identification are performed using LiDAR point cloud classification and morphological analysis; the HSI processing helps to discard the redundant majority of the LiDAR point cloud and supports detailed analysis of only the small portion of the point cloud data the system deems relevant, which facilitates extraction of salient features of the potential camouflaged target. Lastly, the decisions obtained are fused to infer the identity of the desired targets. The experimental results indicate that the proposed approach may be used to successfully resolve camouflaged targets, assuming some a priori knowledge about the morphology of the targets likely to be present.
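
    The first (HSI) step relies on spectral matching to flag pixels that resemble the camouflaging net. A common matcher for this purpose is the Spectral Angle Mapper; the sketch below shows SAM-based candidate masking under an assumed threshold, as one plausible stand-in for the paper's spectral matching stage rather than its exact algorithm.

    ```python
    import numpy as np

    def spectral_angle_map(cube, target_spectrum):
        """Spectral Angle Mapper: angle (radians) between each pixel spectrum
        and a reference spectrum of the camouflage-net material.

        cube            : (rows, cols, bands) hyperspectral cube.
        target_spectrum : (bands,) reference spectrum.
        """
        flat = cube.reshape(-1, cube.shape[-1]).astype(np.float64)
        t = target_spectrum.astype(np.float64)
        cos = flat @ t / (np.linalg.norm(flat, axis=1) * np.linalg.norm(t) + 1e-12)
        angles = np.arccos(np.clip(cos, -1.0, 1.0))
        return angles.reshape(cube.shape[:2])

    def candidate_mask(cube, target_spectrum, threshold=0.1):
        """Pixels whose spectral angle falls below the (assumed) threshold are
        flagged as potential camouflage net and passed to the LiDAR stage."""
        return spectral_angle_map(cube, target_spectrum) < threshold
    ```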

    Very High Resolution (VHR) Satellite Imagery: Processing and Applications

    Recently, growing interest has appeared in the use of remote sensing imagery to provide synoptic maps of water quality parameters in coastal and inland water ecosystems; monitoring of complex land ecosystems for biodiversity conservation; precision agriculture for the management of soils, crops, and pests; urban planning; disaster monitoring; etc. However, for these maps to achieve their full potential, it is important to engage in periodic monitoring and analysis of multi-temporal changes. In this context, very high resolution (VHR) satellite-based optical, infrared, and radar imaging instruments provide reliable information to implement spatially-based conservation actions. Moreover, they enable observations of parameters of our environment at broader spatial and finer temporal scales than those allowed by field observation alone. In this sense, recent very high resolution satellite technologies and image processing algorithms present the opportunity to develop quantitative techniques that have the potential to improve upon traditional techniques in terms of cost, mapping fidelity, and objectivity. Typical applications include multi-temporal classification, recognition and tracking of specific patterns, multisensor data fusion, analysis of land/marine ecosystem processes, environmental monitoring, etc. This book aims to collect new developments, methodologies, and applications of very high resolution satellite data for remote sensing. The selected works provide the research community with the most recent advances on all aspects of VHR satellite remote sensing.

    Validating Multimedia Content Moderation Software via Semantic Fusion

    The exponential growth of social media platforms, such as Facebook and TikTok, has revolutionized communication and content publication in human society. Users on these platforms can publish multimedia content that delivers information via a combination of text, audio, images, and video. Meanwhile, the multimedia content release facility has been increasingly exploited to propagate toxic content, such as hate speech, malicious advertisements, and pornography. To this end, content moderation software has been widely deployed on these platforms to detect and block toxic content. However, due to the complexity of content moderation models and the difficulty of understanding information across multiple modalities, existing content moderation software can fail to detect toxic content, which often leads to extremely negative impacts. We introduce Semantic Fusion, a general, effective methodology for validating multimedia content moderation software. Our key idea is to fuse two or more existing single-modal inputs (e.g., a textual sentence and an image) into a new input that combines the semantics of its ancestors in a novel manner and has a toxic nature by construction. This fused input is then used for validating multimedia content moderation software. We realized Semantic Fusion as DUO, a practical content moderation software testing tool. In our evaluation, we employ DUO to test five commercial content moderation software products and two state-of-the-art models against three kinds of toxic content. The results show that DUO achieves up to 100% error finding rate (EFR) when testing moderation software. In addition, we leverage the test cases generated by DUO to retrain the two models we explored, which largely improves model robustness while maintaining the accuracy on the original test set.
    Comment: Accepted by ISSTA 202
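
    The fusion idea can be pictured with a deliberately naive operator that overlays a sentence on an image, yielding a single multimodal input that carries the semantics of both ancestors. This is only a toy stand-in for DUO's fusion operators; the function name and parameters below are assumptions.

    ```python
    from PIL import Image, ImageDraw

    def fuse_text_and_image(sentence, image_path, out_path):
        """Overlay the sentence on the image so the combined, meme-style input
        conveys the meaning of both single-modal ancestors.  A real test
        generator would choose font size, colour and placement for legibility."""
        img = Image.open(image_path).convert("RGB")
        draw = ImageDraw.Draw(img)
        draw.text((10, 10), sentence, fill=(255, 255, 255))  # default bitmap font
        img.save(out_path)
        return out_path
    ```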