    Novel Multi-Scale Filter Profile-Based Framework for VHR Remote Sensing Image Classification

    Filtering is a well-known tool for noise reduction in very high spatial resolution (VHR) remote sensing images. However, a single-scale filter usually has limitations in covering the various targets of different sizes and shapes in a given image scene. A novel method called the multi-scale filter profile (MFP)-based framework (MFPF) is introduced in this study to address this problem and improve the classification performance on VHR remote sensing images. First, an adaptive filter is extended with a series of parameters to construct the MFPs. Then, a layer-stacking technique is used to concatenate the MFPs and all the features into a stacked vector. Afterward, principal component analysis, a classical dimensionality reduction algorithm, is applied to the fused profiles to reduce the redundancy of the stacked vector. Finally, the spatially adaptive region of each filter in the MFPs is used to post-process the initial classification map obtained with a supervised classifier, revising it into the final classification map. Experiments on three real VHR remote sensing images demonstrate the effectiveness of the proposed MFPF in comparison with state-of-the-art methods. Hard parameter tuning is unnecessary, so the approach can be conveniently applied in real applications. This research was funded by the National Science Foundation of China (61701396 and 41501378) and the Natural Science Foundation of Shaanxi Province (2018JQ4009). Peer reviewed.
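
    A minimal sketch of the MFP idea described in this abstract: filter each band at several scales, stack the responses into a per-pixel profile, and reduce redundancy with PCA. The median filter and the scale list are stand-ins for illustration; the paper's own adaptive filter and its parameter series are not reproduced here.

    ```python
    # Sketch only: median filter and scales are illustrative substitutes for the
    # paper's adaptive filter and its parameter series.
    import numpy as np
    from scipy.ndimage import median_filter
    from sklearn.decomposition import PCA

    def multi_scale_filter_profile(image, scales=(3, 5, 9, 15)):
        """image: (H, W, B) array of VHR bands -> (H, W, B * (len(scales) + 1)) profile."""
        h, w, b = image.shape
        layers = [image]                                   # keep the original bands
        for s in scales:                                   # one filtered copy per scale
            filtered = np.stack(
                [median_filter(image[..., k], size=s) for k in range(b)], axis=-1
            )
            layers.append(filtered)
        return np.concatenate(layers, axis=-1)             # layer-stacked MFP vector

    def reduce_profile(profile, n_components=10):
        """Apply PCA to the stacked profile to shrink the per-pixel feature vector."""
        h, w, d = profile.shape
        flat = profile.reshape(-1, d).astype(np.float32)
        pca = PCA(n_components=min(n_components, d))
        return pca.fit_transform(flat).reshape(h, w, -1)

    if __name__ == "__main__":
        img = np.random.rand(64, 64, 4)                    # toy 4-band VHR patch
        mfp = multi_scale_filter_profile(img)
        feats = reduce_profile(mfp, n_components=8)
        print(mfp.shape, feats.shape)
    ```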

    A novel feature fusion approach for VHR remote sensing image classification

    This article develops a robust feature fusion approach to enhance the classification performance of very high resolution (VHR) remote sensing images. Specifically, a novel two-stage multiple feature fusion (TsF) approach is proposed, which includes an intragroup and an intergroup feature fusion stage. In the first stage, multiple features are grouped by clustering, and redundant information between different types of features is eliminated within each group. Then, features are fused pairwise in an intergroup fusion model based on the guided filtering method. Finally, the fused feature set is fed into a classifier to generate the classification map. In this work, the original VHR spectral bands and their attribute profiles are taken as the input spectral and spatial features, respectively, to test the performance of the proposed TsF approach. Experimental results obtained on two QuickBird datasets covering complex urban scenarios demonstrate the effectiveness of the proposed approach in generating more discriminative fused features and enhancing classification performance. More importantly, the fused feature dimensionality is bounded, so the computational cost does not increase significantly even when multiple features are considered. Liu, S.; Zheng, Y.; Du, Q.; Samat, A.; Tong, X.; Dalponte, M.
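
    A rough sketch of the two stages described above: (1) cluster feature bands into groups and collapse each group to a single representative (here, its first principal component) to remove within-group redundancy; (2) fuse the group representatives pairwise with an edge-preserving guided filter. The grouping criterion, filter radius and epsilon below are illustrative choices, not the paper's exact TsF settings.

    ```python
    # Sketch only: parameters and the intragroup reduction step are illustrative.
    import numpy as np
    from itertools import combinations
    from scipy.ndimage import uniform_filter
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    def guided_filter(guide, src, radius=5, eps=1e-4):
        """Classic single-channel guided filter (box-filter formulation)."""
        size = 2 * radius + 1
        mean_g = uniform_filter(guide, size)
        mean_s = uniform_filter(src, size)
        cov_gs = uniform_filter(guide * src, size) - mean_g * mean_s
        var_g = uniform_filter(guide * guide, size) - mean_g * mean_g
        a = cov_gs / (var_g + eps)
        b = mean_s - a * mean_g
        return uniform_filter(a, size) * guide + uniform_filter(b, size)

    def two_stage_fusion(features, n_groups=3):
        """features: (H, W, D) stack of spectral/spatial features -> fused (H, W, K)."""
        h, w, d = features.shape
        flat = features.reshape(-1, d).T                    # one row per feature band
        labels = KMeans(n_clusters=n_groups, n_init=10).fit_predict(flat)

        # Stage 1: intragroup fusion -> one representative band per group.
        reps = []
        for g in range(n_groups):
            group = flat[labels == g].T                     # (H*W, n_bands_in_group)
            rep = PCA(n_components=1).fit_transform(group)
            reps.append(rep.reshape(h, w))

        # Stage 2: intergroup fusion -> guided filtering between group pairs.
        fused = [guided_filter(a, b) for a, b in combinations(reps, 2)]
        return np.stack(fused, axis=-1)

    if __name__ == "__main__":
        feats = np.random.rand(64, 64, 12)                  # toy feature stack
        print(two_stage_fusion(feats).shape)
    ```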

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, such as computer vision (CV), speech recognition, and natural language processing. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools, and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL. Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing.

    Combining Multiple Algorithms for Road Network Tracking from Multiple Source Remotely Sensed Imagery: a Practical System and Performance Evaluation

    In light of the increasing availability of commercial high-resolution imaging sensors, automatic interpretation tools are needed to extract road features. Many approaches for road extraction are currently available, but it is acknowledged that no single method can successfully extract all types of roads from any remotely sensed imagery. In this paper, a novel classification of roads is proposed, based on the roads' geometrical and radiometric properties and on the characteristics of the sensors. Subsequently, a general road tracking framework is proposed, and one or more suitable road trackers are designed or combined for each type of road. Extensive experiments are performed to extract roads from aerial/satellite imagery, and the results show that the combination strategy can automatically extract more than 60% of the total roads from very high resolution imagery such as QuickBird and DMC images, with a time saving of approximately 20% and acceptable spatial accuracy. The results indicate that a combination of multiple algorithms is more reliable, more efficient, and more robust for extracting road networks from multiple-source remotely sensed imagery than the individual algorithms.
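
    An illustrative sketch of the combination strategy outlined above: road seeds are first categorised by simple geometrical/radiometric descriptors and sensor type, and each category is routed to one or more suitable trackers. The categories, thresholds and tracker names below are hypothetical placeholders, not the taxonomy or algorithms used in the paper.

    ```python
    # Sketch only: descriptors, thresholds and tracker names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class RoadSeed:
        width_px: float        # apparent road width in pixels
        contrast: float        # radiometric contrast against surroundings
        sensor: str            # e.g. "QuickBird", "DMC", "SAR"

    def select_trackers(seed: RoadSeed) -> list[str]:
        """Pick candidate tracking algorithms for a road seed point."""
        trackers = []
        if seed.width_px >= 8 and seed.contrast > 0.3:
            trackers.append("rectangle_template_matching")   # wide, well-contrasted roads
        if seed.width_px < 8:
            trackers.append("ridge_line_following")          # narrow roads as bright ridges
        if seed.sensor == "SAR":
            trackers.append("particle_filter_tracker")       # speckle-robust option
        return trackers or ["semi_automatic_fallback"]       # operator-assisted fallback

    if __name__ == "__main__":
        print(select_trackers(RoadSeed(width_px=12, contrast=0.5, sensor="QuickBird")))
    ```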

    Advanced Techniques based on Mathematical Morphology for the Analysis of Remote Sensing Images

    Remote sensing optical images of very high geometrical resolution can provide a precise and detailed representation of the surveyed scene. Thus, the spatial information contained in these images is fundamental for any application requiring analysis of the image. However, modeling the spatial information is not a trivial task. We address this problem by using operators defined in the mathematical morphology framework to extract spatial features from the image. In this thesis, novel techniques based on mathematical morphology are presented and investigated for the analysis of remote sensing optical images, addressing different applications. Attribute Profiles (APs) are proposed as a novel generalization of the Morphological Profile operator based on attribute filters. Attribute filters are connected operators that process an image by removing flat zones according to a given criterion. They are flexible operators since they can transform an image according to many different attributes (e.g., geometrical, textural and spectral). Furthermore, Extended Attribute Profiles (EAPs), a generalization of APs, are presented for the analysis of hyperspectral images. The EAPs are employed for including spatial features in the thematic classification of hyperspectral images. Two techniques combining EAPs with dimensionality reduction transformations are proposed and applied to image classification: one is based on Independent Component Analysis, and the other relies on feature extraction techniques. Moreover, a technique based on APs for extracting features for the detection of buildings in a scene is investigated. Approaches that process an image by considering both the bright and dark components of a scene are also investigated. In particular, the effect of applying attribute filters in an alternating sequential setting is studied, and the concept of Self-Dual Attribute Profile (SDAP) is introduced. SDAPs are APs built on an inclusion tree instead of a min- and max-tree, providing an operator that performs a multilevel filtering of both the bright and dark components of an image. Techniques developed for applications other than image classification are also considered: a general approach for image simplification based on attribute filters is proposed, and two change detection techniques are developed. The experimental analysis performed with the novel techniques developed in this thesis demonstrates an improvement in accuracy in different fields of application when compared with other state-of-the-art methods.
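
    A minimal sketch of an attribute profile as described above: the image is filtered with a sequence of attribute openings and closings and the responses are stacked. An area attribute is used here for illustration (other attributes such as moment of inertia or standard deviation follow the same pattern), and the threshold list is arbitrary.

    ```python
    # Sketch only: the area attribute and the threshold sequence are illustrative.
    import numpy as np
    from skimage.morphology import area_opening, area_closing

    def attribute_profile(band, area_thresholds=(100, 500, 1000)):
        """band: 2-D grayscale array -> stack of thinning/thickening responses."""
        thinnings = [area_opening(band, area_threshold=t) for t in area_thresholds]
        thickenings = [area_closing(band, area_threshold=t) for t in area_thresholds]
        # Conventional AP layout: closings (largest first), original band, openings.
        return np.stack(thickenings[::-1] + [band] + thinnings, axis=-1)

    if __name__ == "__main__":
        band = (np.random.rand(128, 128) * 255).astype(np.uint8)
        print(attribute_profile(band).shape)   # (128, 128, 7)
    ```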

    Buildings Detection in VHR SAR Images Using Fully Convolution Neural Networks

    This paper addresses the highly challenging problem of automatically detecting man-made structures, especially buildings, in very high resolution (VHR) synthetic aperture radar (SAR) images. In this context, the paper makes two major contributions. Firstly, it presents a novel and generic workflow that initially classifies spaceborne TomoSAR point clouds (generated by processing VHR SAR image stacks using advanced interferometric techniques known as SAR tomography, TomoSAR) into buildings and non-buildings with the aid of auxiliary information (i.e., either openly available 2-D building footprints or an optical image classification scheme), and later back-projects the extracted building points onto the SAR imaging coordinates to produce automatic large-scale benchmark labelled (buildings/non-buildings) SAR datasets. Secondly, these labelled datasets (i.e., building masks) are utilized to construct and train a state-of-the-art deep fully convolutional neural network with an additional conditional random field represented as a recurrent neural network to detect building regions in a single VHR SAR image. Such a cascaded formation has been successfully employed in computer vision and remote sensing for optical image classification but, to our knowledge, has not been applied to SAR images. The building detection results are illustrated and validated over a TerraSAR-X VHR spotlight SAR image covering approximately 39 km² (almost the whole city of Berlin), with mean pixel accuracies of around 93.84%. Comment: Accepted for publication in IEEE TGRS.
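
    A compact sketch of the detection stage only: a small fully convolutional network that maps a single-channel SAR amplitude patch to a building/non-building score map. The architecture is illustrative; the paper trains a state-of-the-art FCN with a CRF-as-RNN refinement stage, which is omitted here.

    ```python
    # Sketch only: a toy FCN, not the network used in the paper; the CRF-as-RNN
    # refinement described in the abstract is not implemented.
    import torch
    import torch.nn as nn

    class TinyFCN(nn.Module):
        def __init__(self, in_channels=1, n_classes=2):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
            )
            # 1x1 "classifier" convolution, then upsample back to input resolution.
            self.classifier = nn.Conv2d(128, n_classes, kernel_size=1)
            self.upsample = nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False)

        def forward(self, x):
            return self.upsample(self.classifier(self.encoder(x)))

    if __name__ == "__main__":
        model = TinyFCN()
        sar_patch = torch.randn(1, 1, 256, 256)        # toy SAR amplitude patch
        logits = model(sar_patch)                      # (1, 2, 256, 256) score map
        print(logits.shape)
    ```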

    Automatic Update of Airport GIS by Remote Sensing Image Analysis

    This project investigates ways to automatically update Geographic Information Systems (GIS) for airports by analysis of Very High Resolution (VHR) remote sensing images. These GIS databases map the physical layout of an airport by representing a broad range of features (such as runways, taxiways and roads) as georeferenced vector objects. Updating such systems therefore involves both automatic detection of relevant objects from remotely sensed images and comparison of these objects between bi-temporal images. The size of the VHR images and the diversity of the object types to be captured in the GIS databases make this a very large and complex problem. Therefore we must split it into smaller parts which can be framed as instances of image processing problems. The aim of this project is to apply a range of methodologies to these problems and compare their results, providing quantitative data where possible. In this report, we devote a chapter to each sub-problem that was focussed on. Chapter 1 introduces the background and motivation of the project, and describes the problem in more detail. Chapter 2 presents a method for detecting and segmenting runways by detecting their distinctive markings and feeding them into a modified Hough transform; the algorithm was tested on a dataset of six bi-temporal remote sensing image pairs and validated against manually generated ground-truth GIS data provided by Jeppesen. Chapter 3 investigates co-registration of bi-temporal images, a necessary precursor to most direct change detection algorithms. Chapter 4 then tests a range of bi-temporal change detection algorithms (some standard, some novel) on co-registered images of airports, with the aim of producing a change heat-map which may help a human operator rapidly focus attention on areas that have changed significantly. Chapter 5 explores a number of approaches to detecting curvilinear AMDB features such as taxilines and stopbars by enhancing such features and suppressing others prior to thresholding. Finally, in Chapter 6 we develop a method for distinguishing between AMDB lines and other curvilinear structures that may occur in an image, by analysing the connectivity between such features and the runways.
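
    A simplified sketch of the runway-detection step summarised for Chapter 2: bright, thin runway markings are enhanced and thresholded, and the result is fed to a probabilistic Hough transform to recover long straight line segments. The preprocessing, all parameter values and the input file name are illustrative assumptions, not the report's exact pipeline (which uses a modified Hough transform).

    ```python
    # Sketch only: top-hat/Otsu preprocessing, parameters and the input path are
    # illustrative; the report's modified Hough transform is not reproduced.
    import cv2
    import numpy as np

    def detect_runway_candidates(gray):
        """gray: 8-bit grayscale VHR image -> list of long line segments (x1, y1, x2, y2)."""
        # Markings are bright and thin: top-hat filtering makes them stand out.
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 15))
        tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)
        _, mask = cv2.threshold(tophat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

        lines = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180, threshold=120,
                                minLineLength=200, maxLineGap=20)
        return [] if lines is None else [tuple(l[0]) for l in lines]

    if __name__ == "__main__":
        img = cv2.imread("airport_patch.tif", cv2.IMREAD_GRAYSCALE)  # hypothetical file
        if img is not None:
            print(len(detect_runway_candidates(img)), "candidate segments")
    ```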

    A Review of Landcover Classification with Very-High Resolution Remotely Sensed Optical Images—Analysis Unit, Model Scalability and Transferability

    As an important application in remote sensing, landcover classification remains one of the most challenging tasks in very-high-resolution (VHR) image analysis. As a rapidly increasing number of Deep Learning (DL) based landcover methods and training strategies are claimed to be state-of-the-art, the already fragmented technical landscape of landcover mapping has become further complicated. Although a plethora of review articles attempt to guide researchers in making an informed choice of landcover mapping methods, they either focus on applications in a specific area or revolve around general deep learning models, and thus lack a systematic view of the ever-advancing landcover mapping methods. In addition, issues related to training samples and model transferability have become more critical than ever in an era dominated by data-driven approaches, but these issues were addressed to a lesser extent in previous review articles on remote sensing classification. Therefore, in this paper, we present a systematic overview of existing methods, starting from learning methods and the basic analysis units used for landcover mapping tasks, and moving to challenges and solutions concerning scalability and transferability in three aspects: (1) sparsity and imbalance of data; (2) domain gaps across different geographical regions; and (3) multi-source and multi-view fusion. We discuss each of these categories of methods in detail, draw concluding remarks on these developments, and recommend potential directions for the continued endeavor.