
    Deep learning in remote sensing: a review

    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, deep learning, as a major breakthrough in the field, has proven to be an extremely powerful tool in many domains. Shall we embrace deep learning as the key to everything? Or should we resist a 'black-box' solution? Opinions in the remote sensing community are divided. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate that remote sensing scientists bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization.
    Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine.

    Remote Sensing for Non‐Technical Survey

    This chapter describes the research activities of the Royal Military Academy on remote sensing applied to mine action. Remote sensing can be used to detect specific features that could lead to the suspicion of the presence, or absence, of mines. Work on the automatic detection of trenches and craters is presented here. Land-cover information can be extracted and is quite useful for mine action; we present a classification method based on Gabor filters. The relief of a region helps analysts understand where mines could have been laid, and methods to derive a digital terrain model from a digital surface model are explained. The special case of multi-spectral classification is also addressed in this chapter, and data fusion is discussed. Hyperspectral data are addressed with a change detection method. Synthetic aperture radar data and their fusion with optical data have been studied. Radar interferometry and polarimetry are also addressed.
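    As a concrete illustration of the Gabor-based land-cover step, here is a minimal Python sketch: it applies a small Gabor filter bank and clusters the per-pixel response magnitudes without supervision. This is a stand-in under assumed settings, not the chapter's actual classifier; the filter frequencies, orientations, and cluster count are illustrative choices.

```python
# Minimal sketch of texture-based land-cover grouping with a Gabor filter
# bank. All parameter values here are illustrative assumptions.
import numpy as np
from skimage.filters import gabor
from sklearn.cluster import KMeans

def gabor_features(image, frequencies=(0.1, 0.3), n_orientations=4):
    """Stack Gabor response magnitudes into a per-pixel feature vector."""
    feats = []
    for freq in frequencies:
        for k in range(n_orientations):
            real, imag = gabor(image, frequency=freq,
                               theta=k * np.pi / n_orientations)
            feats.append(np.hypot(real, imag))  # response magnitude
    return np.stack(feats, axis=-1)             # shape (H, W, n_filters)

# Toy usage on random data; a real input would be a single image band.
img = np.random.rand(64, 64)
feat = gabor_features(img)
labels = KMeans(n_clusters=5, n_init=10).fit_predict(
    feat.reshape(-1, feat.shape[-1]))
land_cover_map = labels.reshape(img.shape)
```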

    Unsupervised Classification of Polarimetric SAR Images via Riemannian Sparse Coding

    Unsupervised classification plays an important role in understanding polarimetric synthetic aperture radar (PolSAR) images. One of the typical representations of PolSAR data is in the form of Hermitian positive definite (HPD) covariance matrices. Most algorithms for unsupervised classification using this representation either use statistical distribution models or adopt polarimetric target decompositions. In this paper, we propose an unsupervised classification method by introducing a sparsity-based similarity measure on HPD matrices. Specifically, we first use a novel Riemannian sparse coding scheme that represents each HPD covariance matrix as a sparse linear combination of other HPD matrices, where the sparse reconstruction loss is defined by the Riemannian geodesic distance between HPD matrices. The coefficient vectors generated by this step reflect the neighborhood structure of the HPD matrices embedded in the Euclidean space and hence can be used to define a similarity measure. We apply the scheme to PolSAR data: we first oversegment the images into superpixels and represent each superpixel by an HPD matrix. These HPD matrices are then sparse coded, and the resulting sparse coefficient vectors are clustered by spectral clustering using the neighborhood matrix generated by our similarity measure. The experimental results on different fully PolSAR images demonstrate the superior performance of the proposed classification approach against state-of-the-art approaches.
    This work was supported in part by the National Natural Science Foundation of China under Grant 61331016 and Grant 61271401, and in part by the National Key Basic Research and Development Program of China under Contract 2013CB733404. The work of A. Cherian was supported by the Australian Research Council Centre of Excellence for Robotic Vision under Project CE140100016.
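    The full Riemannian sparse-coding pipeline is involved; the Python sketch below illustrates only its geometric back end under simplifying assumptions. Pairwise affine-invariant geodesic distances between HPD matrices are turned into a Gaussian-kernel affinity and fed to spectral clustering; the paper's sparse-coding similarity and the superpixel stage are deliberately omitted, so this is not the authors' method.

```python
# Sketch: cluster HPD covariance matrices from their pairwise
# affine-invariant Riemannian (geodesic) distances. The Gaussian-kernel
# affinity replaces the paper's sparse-coding similarity (assumption).
import numpy as np
from scipy.linalg import fractional_matrix_power, logm
from sklearn.cluster import SpectralClustering

def geodesic_distance(A, B):
    """Affine-invariant Riemannian distance between HPD matrices A, B."""
    A_inv_sqrt = fractional_matrix_power(A, -0.5)
    return np.linalg.norm(logm(A_inv_sqrt @ B @ A_inv_sqrt), 'fro')

def cluster_hpd(matrices, n_classes, sigma=1.0):
    n = len(matrices)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = geodesic_distance(matrices[i], matrices[j])
    W = np.exp(-(D / sigma) ** 2)  # affinity from geodesic distances
    return SpectralClustering(n_clusters=n_classes,
                              affinity='precomputed').fit_predict(W)

# Toy usage: random 3x3 HPD matrices standing in for PolSAR covariances.
rng = np.random.default_rng(0)
mats = []
for _ in range(20):
    X = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    mats.append(X @ X.conj().T + 3 * np.eye(3))  # Hermitian positive definite
print(cluster_hpd(mats, n_classes=2))
```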

    Analytic Expressions for Stochastic Distances Between Relaxed Complex Wishart Distributions

    The scaled complex Wishart distribution is a widely used model for multilook full polarimetric SAR data whose adequacy has been attested in the literature. Classification, segmentation, and image analysis techniques which depend on this model have been devised, and many of them employ some type of dissimilarity measure. In this paper, we derive analytic expressions for four stochastic distances between relaxed scaled complex Wishart distributions in their most general form and in important particular cases. Using these distances, inequalities are obtained which lead to new ways of deriving the Bartlett and revised Wishart distances. The expressiveness of the four analytic distances is assessed with respect to the variation of parameters. These distances are then used to derive new test statistics, which are proved to have an asymptotic chi-square distribution. Adopting the test size as a comparison criterion, a sensitivity study performed by means of Monte Carlo experiments suggests that the Bhattacharyya statistic outperforms the others. The power of the tests is also assessed. Applications to actual data illustrate the discrimination and homogeneity identification capabilities of these distances.
    Comment: Accepted for publication in the IEEE Transactions on Geoscience and Remote Sensing journal.
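    For reference, the scaled complex Wishart density that underlies these distances, in its standard form from the PolSAR literature, is reproduced below for an L-look, p-channel multilook covariance matrix Z with scale matrix Σ; the paper's relaxed model allows the (possibly non-integer) equivalent number of looks to differ between the two distributions being compared.

```latex
% Scaled complex Wishart density for a p x p HPD multilook covariance Z
% with L looks and scale matrix \Sigma (standard form):
f_Z(Z;\Sigma,L)
  = \frac{L^{Lp}\,|Z|^{L-p}}{|\Sigma|^{L}\,\Gamma_p(L)}
    \exp\!\left\{-L\,\operatorname{tr}\!\left(\Sigma^{-1}Z\right)\right\},
\qquad
\Gamma_p(L) = \pi^{p(p-1)/2}\prod_{k=0}^{p-1}\Gamma(L-k).
```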

    Spatial-Spectral Manifold Embedding of Hyperspectral Data

    In recent years, hyperspectral imaging, also known as imaging spectroscopy, has attracted increasing interest in the geoscience and remote sensing community. Hyperspectral imagery is characterized by very rich spectral information, which enables us to recognize the materials of interest lying on the surface of the Earth more easily. The high spectral dimension, however, inevitably brings drawbacks, such as expensive data storage and transmission and information redundancy. Therefore, to reduce the spectral dimensionality effectively and learn a more discriminative low-dimensional spectral embedding, in this paper we propose a novel hyperspectral embedding approach that simultaneously considers spatial and spectral information, called spatial-spectral manifold embedding (SSME). Going beyond pixel-wise spectral embedding approaches, SSME models the spatial and spectral information jointly in a patch-based fashion. SSME not only learns the spectral embedding by using the adjacency matrix obtained by similarity measurement between spectral signatures, but also models the spatial neighbours of a target pixel in the hyperspectral scene by sharing the same weights (or edges) in the process of learning the embedding. Classification is explored as a potential application for quantitatively evaluating the performance of the learned embedding representations. Extensive experiments conducted on widely used hyperspectral datasets demonstrate the superiority and effectiveness of the proposed SSME compared to several state-of-the-art embedding methods.
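    The Python sketch below conveys the general idea under simplifying assumptions rather than the exact SSME formulation: a pixel affinity matrix mixes a Gaussian kernel on spectral signatures with a boost for spatially adjacent pixels, and a Laplacian-eigenmap-style embedding is computed from it. The dense O(N^2) affinity is only practical for toy inputs.

```python
# Sketch (not the paper's exact SSME): joint spatial-spectral affinity
# followed by a Laplacian-eigenmap-style embedding.
import numpy as np
from sklearn.manifold import SpectralEmbedding

def spatial_spectral_embedding(cube, n_dims=5, sigma=0.5, spatial_boost=1.0):
    """cube: (H, W, B) hyperspectral image -> (H*W, n_dims) embedding."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B)
    # Gaussian kernel on all pairwise spectral distances (toy O(N^2)).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = np.exp(-d2 / (2 * sigma ** 2))
    # Spatial term: strengthen edges between 4-connected neighbours.
    idx = np.arange(H * W).reshape(H, W)
    for dy, dx in ((0, 1), (1, 0)):
        src = idx[:H - dy, :W - dx].ravel()
        dst = idx[dy:, dx:].ravel()
        A[src, dst] += spatial_boost
        A[dst, src] += spatial_boost
    return SpectralEmbedding(n_components=n_dims,
                             affinity='precomputed').fit_transform(A)

# Toy usage on a random 8x8 cube with 30 bands.
emb = spatial_spectral_embedding(np.random.rand(8, 8, 30))
print(emb.shape)  # (64, 5)
```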

    Unsupervised Single-Scene Semantic Segmentation for Earth Observation

    Earth observation data have huge potential to enrich our knowledge about our planet. An important step in many Earth observation tasks is semantic segmentation. Generally, a large number of pixelwise labeled images are required to train deep models for supervised semantic segmentation. However, strong intersensor and geographic variations impede the availability of annotated training data in Earth observation. In practice, most Earth observation tasks use only the target scene, without assuming the availability of any additional scene, labeled or unlabeled. Keeping in mind such constraints, we propose a semantic segmentation method that learns to segment from a single scene, without using any annotation. Earth observation scenes are generally larger than those encountered in typical computer vision datasets. Exploiting this, the proposed method samples smaller unlabeled patches from the scene. For each patch, an alternate view is generated by simple transformations, e.g., the addition of noise. Both views are then processed through a two-stream network, and the weights are iteratively refined using deep clustering, spatial consistency, and contrastive learning in the pixel space. The proposed model automatically segregates the major classes present in the scene and produces the segmentation map. Extensive experiments on four Earth observation datasets collected by different sensors show the effectiveness of the proposed method. Implementation is available at https://gitlab.lrz.de/ai4eo/cd/-/tree/main/unsupContrastiveSemanticSeg
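    A compressed PyTorch sketch of the two-view training idea follows. It keeps only the noise-perturbed second view and a pixel-level consistency loss; the deep-clustering and spatial-consistency terms of the paper are omitted, so this illustrates the mechanism rather than reproducing the released implementation.

```python
# Sketch: two views of the same unlabeled patch pass through one shared
# encoder, and corresponding pixel embeddings are pulled together.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelNet(nn.Module):
    """Tiny fully convolutional encoder producing per-pixel embeddings."""
    def __init__(self, in_ch=3, emb=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, emb, 3, padding=1),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)  # unit-norm pixel embeddings

def consistency_loss(z1, z2):
    """Cosine loss between corresponding pixels of the two views."""
    return (1 - (z1 * z2).sum(dim=1)).mean()

model = PixelNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
patches = torch.rand(4, 3, 64, 64)  # unlabeled patches sampled from the scene
for step in range(10):  # toy training loop
    views = patches + 0.05 * torch.randn_like(patches)  # alternate view: noise
    loss = consistency_loss(model(patches), model(views))
    opt.zero_grad()
    loss.backward()
    opt.step()
```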