
    Advances in Hyperspectral Image Classification: Earth monitoring with statistical learning methods

    Full text link
    Hyperspectral images show similar statistical properties to natural grayscale or color photographic images. However, classifying hyperspectral images is more challenging because of the very high dimensionality of the pixels and the small number of labeled examples typically available for learning. These peculiarities lead to particular signal processing problems, mainly characterized by indetermination and complex manifolds. The framework of statistical learning has gained popularity over the last decade. New methods have been presented to account for the spatial homogeneity of images, to include user interaction via active learning, to take advantage of the manifold structure with semisupervised learning, to extract and encode invariances, and to adapt classifiers and image representations to unseen yet similar scenes. This tutorial reviews the main advances in hyperspectral remote sensing image classification through illustrative examples.
    Comment: IEEE Signal Processing Magazine, 201
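    The small-labeled-sample, high-dimensional regime the tutorial describes can be illustrated with a minimal sketch; the sizes, class structure, and nearest-centroid classifier below are all assumptions for illustration, not the tutorial's methods:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_bands, n_labeled = 200, 10          # many bands, few labeled pixels per class

    # Two simulated land-cover classes with distinct mean spectra (assumed shapes)
    mu_a = np.linspace(0.2, 0.8, n_bands)
    mu_b = np.linspace(0.8, 0.2, n_bands)
    X = np.vstack([mu_a + 0.3 * rng.standard_normal((n_labeled, n_bands)),
                   mu_b + 0.3 * rng.standard_normal((n_labeled, n_bands))])
    y = np.array([0] * n_labeled + [1] * n_labeled)

    # Nearest-centroid classification: a simple baseline that stays stable
    # when labels are scarce relative to the pixel dimensionality
    centroids = np.vstack([X[y == c].mean(axis=0) for c in (0, 1)])

    def predict(pixels):
        d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        return d.argmin(axis=1)

    train_acc = (predict(X) == y).mean()
    ```

    With only ten labels per class, richer models easily overfit 200-dimensional pixels, which is why the review emphasizes spatial, active, and semisupervised strategies.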

    More Diverse Means Better: Multimodal Deep Learning Meets Remote Sensing Imagery Classification

    Full text link
    Classification and identification of the materials lying over or beneath the Earth's surface have long been a fundamental but challenging research topic in geoscience and remote sensing (RS) and have garnered growing interest owing to recent advances in deep learning techniques. Although deep networks have been successfully applied in single-modality-dominated classification tasks, their performance inevitably hits a bottleneck in complex scenes that need to be finely classified, due to limited information diversity. In this work, we provide a baseline solution to this difficulty by developing a general multimodal deep learning (MDL) framework. In particular, we also investigate a special case of multi-modality learning (MML) -- cross-modality learning (CML) -- that exists widely in RS image classification applications. By focusing on "what", "where", and "how" to fuse, we show different fusion strategies as well as how to train deep networks and build the network architecture. Specifically, five fusion architectures are introduced and developed, and further unified in our MDL framework. More significantly, our framework is not limited to pixel-wise classification tasks but is also applicable to spatial information modeling with convolutional neural networks (CNNs). To validate the effectiveness and superiority of the MDL framework, extensive experiments on the MML and CML settings are conducted on two different multimodal RS datasets. Furthermore, the codes and datasets will be made available at https://github.com/danfenghong/IEEE_TGRS_MDL-RS, contributing to the RS community.
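    The "where to fuse" question can be sketched with two common strategies, feature-level (early) and decision-level (late) fusion. Everything below is an assumed toy setup, not the paper's five architectures; the modality sizes and the linear stand-in for a per-modality network are hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_pixels = 100
    hsi = rng.random((n_pixels, 144))   # e.g. hyperspectral bands (assumed size)
    lidar = rng.random((n_pixels, 21))  # e.g. LiDAR-derived features (assumed size)

    # Early (feature-level) fusion: concatenate modalities before the classifier
    early = np.concatenate([hsi, lidar], axis=1)

    def scores(x, n_classes=5, seed=0):
        # Stand-in for a per-modality network producing class scores
        w = np.random.default_rng(seed).standard_normal((x.shape[1], n_classes))
        return x @ w

    # Late (decision-level) fusion: average per-modality class scores
    late = 0.5 * (scores(hsi, seed=0) + scores(lidar, seed=1))
    ```

    Early fusion lets one network exploit cross-modality correlations directly, while late fusion keeps the modality branches independent, the kind of trade-off a unified MDL framework makes explicit.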

    Analysis of Features for Synthetic Aperture Radar Target Classification

    Get PDF
    Considering two classes of vehicles, we aim to identify the physical elements of the vehicles with the most impact on identifying the class of the vehicle in synthetic aperture radar (SAR) images. We classify vehicles using features, from polarimetric SAR images, corresponding to the structure of physical elements. We demonstrate a method that determines the features with the most impact on classification by applying subset selection to the features. Determining the most impactful elements of the vehicles is beneficial to the development of low observables, target models, and automatic target recognition (ATR) algorithms. We show how previous work with features from individual pixels applies to a greater number of target states. At a greater number of target states, the previous work has poor classification performance. Additionally, the nature of the features from pixels limits the identification of the most impactful elements of vehicles. We apply concepts from optical sensing to reduce this limitation. We draw from optical sensing feature extraction with the use of Histogram of Oriented Gradients (HOG). From the cells of HOG, we form features from frequency and polarization attributes of SAR images. Using a subset of features, we achieve a classification performance of 96.10 percent correct classification. Using the features from HOG and the cells, we identify the features with the most impact. Using backward selection, a process for subset selection, we identify the features with the most impact on classification. The execution of backward selection removes the features which induce the most error.
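    Backward selection itself can be sketched briefly: start from all features and repeatedly drop the one whose removal hurts a score the least. The data, the informative-feature layout, and the nearest-centroid scorer below are assumptions for illustration, not the paper's SAR/HOG features:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, p = 60, 6
    y = rng.integers(0, 2, n)
    X = rng.standard_normal((n, p))
    X[:, 0] += 3.0 * y          # informative feature (assumed)
    X[:, 1] += 3.0 * y          # informative feature; columns 2..5 are noise

    def accuracy(cols):
        # Score a feature subset with a nearest-centroid classifier
        Xs = X[:, cols]
        c = np.vstack([Xs[y == k].mean(axis=0) for k in (0, 1)])
        d = np.linalg.norm(Xs[:, None, :] - c[None, :, :], axis=2)
        return (d.argmin(axis=1) == y).mean()

    selected = list(range(p))
    while len(selected) > 2:
        # Drop the feature whose removal induces the least error
        trial = [accuracy([g for g in selected if g != f]) for f in selected]
        selected.pop(int(np.argmax(trial)))
    ```

    On this toy data, the noise columns are eliminated first because removing them costs little accuracy, mirroring how the paper's backward selection removes the features that induce the most error.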

    Linear vs Nonlinear Extreme Learning Machine for Spectral-Spatial Classification of Hyperspectral Image

    Get PDF
    As a relatively new machine learning approach, the extreme learning machine (ELM) has received wide attention due to its good performance. However, when applied directly to hyperspectral image (HSI) classification, its recognition rate is low, because ELM does not use the spatial information that is very important for HSI classification. In view of this, this paper proposes a new framework for spectral-spatial classification of HSI by combining ELM with loopy belief propagation (LBP). The original ELM is linear, and the nonlinear (or kernel) ELMs are improvements of the linear ELM (LELM). However, based on extensive experiments and analysis, we found that LELM is a better choice than nonlinear ELM for spectral-spatial classification of HSI. Furthermore, we exploit the marginal probability distribution, which uses the whole information in the HSI, and learn this distribution using LBP. The proposed method not only maintains the fast speed of ELM but also greatly improves classification accuracy. Experimental results on the well-known HSI datasets Indian Pines and Pavia University demonstrate the good performance of the proposed method.
    Comment: 13 pages, 8 figures, 3 tables, article
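    The linear-ELM idea can be sketched in a few lines: project inputs through random, untrained weights and fit only the output layer in closed form by least squares. The data sizes and class structure below are assumptions for illustration, and the sketch omits the paper's LBP stage entirely:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n, d, hidden, n_classes = 80, 10, 32, 3

    # Synthetic pixels with additive class means (assumed, not an HSI dataset)
    y = rng.integers(0, n_classes, n)
    X = rng.standard_normal((n, d))
    X += np.eye(n_classes)[y] @ rng.standard_normal((n_classes, d))
    T = np.eye(n_classes)[y]                 # one-hot targets

    W = rng.standard_normal((d, hidden))     # random input weights, never trained
    H = X @ W                                # linear ELM: no activation function
    beta = np.linalg.pinv(H) @ T             # closed-form least-squares readout

    pred = (H @ beta).argmax(axis=1)
    train_acc = (pred == y).mean()
    ```

    Because training reduces to one pseudoinverse, the classifier keeps the speed the abstract highlights; a nonlinear ELM would simply apply an activation (e.g. a sigmoid) to `H` before the readout.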

    Historical forest biomass dynamics modelled with Landsat spectral trajectories

    Get PDF
    Acknowledgements: National Forest Inventory data are available online, provided by Ministerio de Agricultura, Alimentación y Medio Ambiente (España). Landsat images are available online, provided by the USGS.
    Peer reviewed. Postprint.

    A novel spectral-spatial singular spectrum analysis technique for near real-time in-situ feature extraction in hyperspectral imaging.

    Get PDF
    As a cutting-edge technique for denoising and feature extraction, singular spectrum analysis (SSA) has been applied successfully to feature mining in hyperspectral images (HSI). However, when applying SSA for in situ feature extraction in HSI, conventional pixel-based 1-D SSA fails to produce satisfactory results, while band-image-based 2D-SSA is also infeasible, especially for the widely used line-scan mode. To tackle these challenges, this article proposes a novel 1.5D-SSA approach for in situ spectral-spatial feature extraction in HSI, where pixels from a small window are used as spatial information. For each sequentially acquired pixel, similar pixels are located within a window centered at that pixel to form an extended trajectory matrix for feature extraction. Classification results on two well-known benchmark HSI datasets and an actual urban scene dataset demonstrate that the proposed 1.5D-SSA achieves superior performance compared with several state-of-the-art spectral and spatial methods. In addition, the near real-time implementation, aligned with the HSI acquisition process, meets the requirement of online image analysis for more efficient feature extraction than the conventional offline workflow.
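    The 1-D SSA building block that the 1.5D variant extends can be sketched as follows: embed one pixel's spectrum into a trajectory (Hankel) matrix, keep the leading singular components, and average anti-diagonals back into a smoothed spectrum. The window length, rank, and synthetic spectrum are assumptions, and the sketch omits the paper's extended trajectory matrix built from similar neighboring pixels:

    ```python
    import numpy as np

    def ssa_denoise(x, window=10, rank=2):
        # Embed the 1-D series into a window x K trajectory (Hankel) matrix
        N = len(x)
        K = N - window + 1
        traj = np.column_stack([x[i:i + window] for i in range(K)])
        # Keep the leading singular components (rank-r approximation)
        U, s, Vt = np.linalg.svd(traj, full_matrices=False)
        low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Diagonal averaging (Hankelization) back to a 1-D series
        out = np.zeros(N)
        cnt = np.zeros(N)
        for j in range(K):
            out[j:j + window] += low[:, j]
            cnt[j:j + window] += 1
        return out / cnt

    rng = np.random.default_rng(4)
    clean = np.sin(np.linspace(0, 3 * np.pi, 100))   # stand-in "spectrum"
    spectrum = clean + 0.2 * rng.standard_normal(100)
    smooth = ssa_denoise(spectrum)
    ```

    The 1.5D extension stacks windowed spectra from similar neighboring pixels into the trajectory matrix, so the low-rank step exploits spatial as well as spectral redundancy.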